Meta-Sequences: Introduction & Criteria

I have offered bounties to anyone who can identify a precedent, in mainstream philosophy, for an idea advanced by Eliezer Yudkowsky as his own. These bounties are in the service of a larger accounting:

Background & Motivation

LessWrong rationalists and mainstream philosophers are two tribes of generally intelligent & knowledgeable people, focused on answering many of the same questions, each broadly dismissive of the other’s intellectual production. To a naive observer, it is unclear whether either of these mutual dismissals is warranted, and if so, which.

The rationalists’ claim is that mainstream philosophy is too overrun by junky thinking and bad incentives to reliably “get it right.” They point to various institutional & social structures that might encourage dragging out debate unnecessarily, and believe that a few unaccounted-for biases (e.g. map/territory confusion) undergird many of mainstream philosophy’s “confusions.” Members frequently cite examples of “junky” thinking from philosophy, though this treatment is obviously far from systematic.

The defense of mainstream philosophy is typically not mounted by mainstream philosophers themselves, who, broadly speaking, engage significantly less with LessWrong rationalists’ ideas than rationalists do with theirs. Instead, a self-appointed representative of this tribe will claim that Yudkowsky’s ideas are not meaningful contributions to philosophical discourse. More moderate versions of this claim concede that he has made contributions to decision theory (in the form of “timeless” decision theory, or TDT) and to the nascent philosophy of AI. More aggressive versions claim his entire intellectual corpus is an unwitting reinvention, or merely confused.

Examples of claims in this vein:

  • “The only original thing in LW is the decision theory stuff and even that is actually Kant.” (src)
  • “Alright, I’ve read a bit more into Less Wrong, and I believe I finally have acquired a fair assessment of it: It’s the number 1 site for repackaging old concepts in Computer Science lingo & passing it off as new. And hubris. Also Eliezer Yudkowsky is a pseudointellectual hack.” (src)
  • “Eliezer Yudkowsky is a pseudointellectual and the sequences are extremely poorly written, poorly argued, and are basically poorly necromanced mid 20th century analytic philosophy.” (src)

It strikes me that discerning whether, or the extent to which, each camp is correct has important bearing on our understanding of autodidacticism versus more traditional educational modes (where one is first “steeped” in a discourse’s approaches & beliefs before attempting to answer its unresolved questions). For example, when is it “cheaper” to reinvent than to search for & discover? (And to what extent is the answer a function of philosophy’s signal-to-noise ratio & general accessibility?) And in what ways might “starting blind” confer advantages? Think of hillclimbing: someone at the foot of the hill may be better positioned than someone already at a peak to “escape” a discourse’s current local maximum and find some other, higher peak.
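
To make the hillclimbing metaphor concrete, here is a minimal toy sketch (in Python, my choice of language; the landscape shape, step size, and starting points are arbitrary illustrative assumptions, not anything drawn from the Sequences). A greedy climber only ever moves uphill, so where it starts determines which peak it finds:

```python
def landscape(x):
    # A toy terrain with two hills: a lower peak (height 3) at x = 2
    # and a higher peak (height 5) at x = 8.
    return max(0.0, 3 - abs(x - 2)) + max(0.0, 5 - abs(x - 8))

def hill_climb(x, step=0.1):
    # Greedy ascent: step whichever way is uphill; halt at a local maximum,
    # i.e. wherever neither neighboring point is higher.
    while True:
        here = landscape(x)
        if landscape(x + step) > here:
            x += step
        elif landscape(x - step) > here:
            x -= step
        else:
            return round(x, 2), round(here, 2)

print(hill_climb(0.0))  # (2.0, 3.0): climbs the nearest hill, stuck at the lower peak
print(hill_climb(6.0))  # (8.0, 5.0): a different starting point finds the higher peak
```

The climber already standing on the lower peak can reach the higher one only by first walking downhill, which greedy ascent forbids; the analogy is that where one “starts” in a discourse can matter as much as how diligently one climbs.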

(Sidebar: I haven’t been fully clear on whether this project concerns Yudkowsky’s ideas, the ideas of LessWrong, or the ideas of rationalists generally. This ambiguity may seem more problematic to an outsider than to a member of LessWrong proper. Yudkowsky’s Sequences are perceived as the backbone of the LessWrong style of thought, and limiting the inquiry to his writings is an imperfect but, in my opinion, reasonable proxy for understanding the rationalist community’s intellectual output as a whole. However, I may end up covering non-Yudkowsky rationalist ideas so long as they are perceived, by the LessWrong community, as being both original to it and meaningfully “right” or “useful.”)

A second motivation for this project is contingent on what I end up discovering. In 2011, Dave Chalmers commented on LessWrong:

As a professional philosopher who’s interested in some of the issues discussed in this forum, I think it’s perfectly healthy for people here to mostly ignore professional philosophy, for reasons given here. But I’m interested in the reverse direction: if good ideas are being had here, I’d like professional philosophy to benefit from them. So I’d be grateful if someone could compile a list of significant contributions made here that would be useful to professional philosophers, with links to sources. (The two main contributions that I’m aware of are ideas about friendly AI and timeless/updateless decision theory. I’m sure there are more, though. Incidentally I’ve tried to get very smart colleagues in decision theory to take the TDT/UDT material seriously, but the lack of a really clear statement of these ideas seems to get in the way.)

That no one took Chalmers up on his request was a missed opportunity for both communities to enter into discourse. Hopefully, this series can correct that.

Project

From here on out, I will work systematically through both the Sequences and the highlight reel of contemporary philosophy in order to understand the relationships between their ideas. I cannot be comprehensive in my reading of contemporary philosophy; that would be a lifetime project. Instead I will rely heavily on the bounty system, and more generally on the recommendations of knowledgeable insiders, to point me toward relevant texts. I will fill in any known gaps in my knowledge with reference to respected secondary sources, such as the Stanford Encyclopedia of Philosophy.

From the Sequences I’ll attempt to build a list of the ideas or concepts they present. (Subjective discretion as to what constitutes a concept or idea is inevitable; oh well.) I’ll then work through each item on the list, using bounties & my own research to understand and communicate the idea’s “status” in the mainstream philosophical community: whether it has been advanced in a similar form before, whether it is widely accepted or dismissed, and the contemporary stances surrounding it (challenges, rebuttals, qualifications).
