From my 2020 “Year In Review”:
I launched a [research group with a] small community of friends, which has this year evolved into an online forum. Our mascot is the pfeilstorch. We’re interested in the inexact sciences, and moreover, in the rigorizing pipeline that got us from natural philosophy to biology, or alchemy to chemistry. This pipeline involves conceptual rigorizing, stamp collecting, taxonomizing, and engineering, but many fields, I feel, faced with the natural incentives of science’s prestige, authority, and funding, have too hastily skipped the necessary steps, performing a cargo cult of scientism. If I’m asked on an especially uncharitable day, I will answer that I believe these fields are best described as destructive and fraudulent, resembling premodern medicine in state and sophistication, while profiting off the real predictive power and insight of the hard sciences through nominal and institutional association.
So far, most of our community’s publications have revolved around representation, communication, and interpretation—how people “read” and “write” scenes in mixed games of coordination and conflict. Representing these inter-agent interactions as games, and language games specifically, has been our most productive frame for making sense of players’ behavior.
In a game, a move derives its fundamental quality from a combination of (1) the history of moves which precede it, making up the “game state”; (2) the set of socially enforced constraints participating players must appear to adhere to; (3) the goals of players from macro to micro, instrumental to terminal. Note that in #2, we stress the appearance of adherence over adherence itself: since most social games are refereed primarily by other players, rather than by natural law, the reality of a move often matters less than its apparent reality. In poker, cheating players win by avoiding detection; in the hiring process, talented workers are regularly overlooked in favor of talented bullshitters. This is the “opticratic” quality of social activity—it is as much the appearance of merit as merit itself which leads individuals to win games. There are exceptions, domains where activity is rewarded irrespective of human assessment, but these are rare and diluted by opticracy: medicine, stock prices, science, and many other games where players butt up against hard physics are still deeply permeated by the tyranny of optics.
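The primacy of apparent over actual adherence can be given a minimal sketch in code. This is purely illustrative (the paper proposes no formal model, and every name below is hypothetical): a move carries both its true legality and whether other players detect a violation, and only the latter decides whether the move “lands.”

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Move:
    action: str
    truly_legal: bool       # the reality of the move (ingredient #2, as written)
    detected_illegal: bool  # the *apparent* reality, as refereed by other players

@dataclass
class GameState:
    history: List[Move] = field(default_factory=list)  # ingredient #1: prior moves

def socially_accepted(move: Move) -> bool:
    # Enforcement is by other players, not natural law:
    # only detected violations are punished. Note that
    # `truly_legal` is never consulted here -- that is the point.
    return not move.detected_illegal

def play(state: GameState, move: Move) -> bool:
    """A move lands if it appears legal, regardless of its reality."""
    if socially_accepted(move):
        state.history.append(move)
        return True
    return False

state = GameState()
undetected_cheat = Move("palm a card", truly_legal=False, detected_illegal=False)
caught_cheat = Move("palm a card", truly_legal=False, detected_illegal=True)
assert play(state, undetected_cheat) is True  # cheaters win by avoiding detection
assert play(state, caught_cheat) is False
```

Players’ goals (ingredient #3) are left out of the sketch; the design choice worth noting is that `truly_legal` is recorded but never read by the referee logic.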
Our most recent paper, “Discursive Games, Discursive Warfare,” builds off the work of “erisology” blogger John Nerst and the social theorists Bruno Latour and Pierre Bourdieu. It argues that our beliefs, both publicly advocated and privately felt, are deeply situated and in a sense “reactionary.” The purpose of our public opinions, which is clearer in light of an evolutionary history of small hunter-gatherer bands, is not to hold some ideal, “perfectly accurate” stance which corresponds to our best assessment of reality, but rather to influence decision-making processes. In other words, our beliefs are goal-oriented and transformative. Perhaps the best metaphor for this approach is bargaining: in informal economies, buyers will initially offer far less than they are willing to pay, in order to bring down the eventual compromise price. Similarly, many espoused political stances are more radical than the views their advocates privately hold. Several pieces of evidence point to this being the case: first, individuals frequently “switch sides” depending on who they are arguing against. Second, many proponents of seemingly radical views will quickly equivocate when privately questioned. In liberal camps, we have seen this recently in calls to abolish the police, and in denials that there are any biological bases to sex differences. Advocates for the former typically carry policy goals of ending private prisons, funding restorative justice programs, and removing firearms as standard-issue police tools. Advocates for the latter typically believe that biological differences are overstated and weaponized against women.
None of this is especially ground-breaking; in a sense, we are merely stating that political actors rhetorically (read “strategically”) overstate the strength of their beliefs, while adding a Bob Trivers spin and claiming that these overstatements do not feel, phenomenologically, like rhetoric. But we argue that similar trends permeate academic and theoretical work as well. As Richard Rorty writes in Consequences of Pragmatism:
When dialectical philosophers are accused of idealism, they usually reply as Berkeley replied to his critics—by explaining that they are only protesting against the errors of a certain philosophical school and that they are really not saying anything at which the plain man would demur. As Austin said in this connection, “There’s the bit where you say it and the bit where you take it back.”
This discursive inclination, coupled with what we call “the telephone effect”—the tendency of originally subtle academic arguments to become diluted or exaggerated by successive waves of popularization—leads to discursive battles being waged against “weak men.” That is, critics assault the real, stated views of at least some advocates of a position, while avoiding its strongest arguments—either because they have only encountered the weaker versions, or because the weaker versions are more easily assailable. Many are familiar with Franz Boas’s claim that the Eskimos have “50 words for snow”; many are also aware that such a claim has been repeatedly falsified by anti-Sapir-Whorfists. Far fewer know that Boas never advanced such a claim. His fieldwork found only that Inuit tribes appeared to draw finer distinctions between types of snow than Anglo societies did, but the repeated, motivated exaggeration of his finding by pro-Sapir-Whorfists led to the “50 words” narrative dominating linguistics—a claim easily rebutted by anthropologists hungry for a high-profile takedown. Only recently, after many decades of such viewpoints being widely dismissed in linguistics, has broad empirical work suggested that Boas’s original, more modest claim was perfectly accurate.
“Discursive Games, Discursive Warfare” walks through many cross-domain examples of similar effects, from Freudianism to feminism and literary theory. Given this “battle of weak men,” we believe that a more synthetic approach is necessary to make forward progress on inexact-science problems, one which seeks out the strongest, original arguments of the various scholarly camps, and then treats them non-dichotomously, as reconcilable, “blind men and the elephant” perspectives. We call this “general compatibilism,” after our belief that the most compelling resolution of the free will/determinism debate is a compatibilist approach which re-conceptualizes the problem such that both positions are, in some meaningful sense, true.
Our recent case study, “Situating LessWrong in Contemporary Analytic Philosophy,” puts this approach to the test by working through the hostile, mutual suspicion between analytic philosophers and LessWrong rationalists. Examining in detail a set of philosophy-of-language issues which both camps have written extensively on, we find that, despite their apparent disagreements, the two sides have, in recent years, independently come to very similar conclusions and beliefs about the nature of language.
Earlier, we mentioned that we see beliefs, and discursive utterances, as transformative “moves” in a public decision-making game, which are best understood not in a “vacuum” but as situated ploys to accomplish certain desired outcomes. We think that all communication, in a similar way, is fundamentally goal-oriented. The goal of this supplementary material is to gain grant funding, and almost all of the linguistic decisions that have gone into it are, at some level, shaped by this desire.
Coming to everyday communication through an enactive “games” lens, we have coined the phrase, “All communication is manipulation; some manipulation is mutually advantageous.” Despite the negative connotations of a word like “manipulation,” we mean it in a very neutral sense, in which linguistic communication is fundamentally an action: something designed to change one’s circumstances. But whereas actions can be wielded against nature, e.g. lifting a rock, communication only works with other humans (and in some cases, other intelligent non-human animals). If a goal is a delta between one’s present and desired situation—I wish to have a candy bar—then my social interaction with a cashier is designed to “manipulate” the cashier into selling me the candy bar. Of course, in many communication situations, this manipulation is perfectly transparent and consensual, because it is mutually desired by both parties: the cashier knows that I am trying to provoke a sale, just as my friend knows, when I shout his name, that I am trying to gain his attention.
While developing this framework for communication, we came across the 1960s work of Erving Goffman (1969, Strategic Interaction) and Thomas Schelling (1960, The Strategy of Conflict). We realized that, for a brief, decade-long window, this kind of open-ended, strategic approach to social communication had been introduced and then promptly forgotten. In “Axes of Strategy,” an in-progress forum investigation of strategic interaction, I attempt to rekindle this currently démodé approach, showing how thoroughly premises of commitment, coordination, and deception undergird everyday linguistic use.
In these social games, there are typically “evaluated” and “evaluator,” or “selectee” and “selector,” roles. (In many symmetrical interactions, each person occupies both roles depending on whose turn it is.) In such assessment games, there is the obvious, aforementioned advantage of deception—an evaluated party will, by definition, achieve more favorable outcomes if he can create an impression for the evaluator that serves his interactional goals (regardless of whether that impression “matches” reality). But, we realized, especially in more formal and institutional evaluation settings, this style of deception takes a backseat to a more pernicious problem, which we call either “surrogation” (when befalling an evaluating party) or “degenerate play” (when performed by the evaluated). In social games of assessment, the evaluating party looks for “cues,” or proxies, with which he can make larger inferences about an individual’s quality, character, motives, background, etc. Clothing, linguistic style, or the prompts in a grant application are such proxies—because we can never know the “full” situation in all its relevance to our own evaluation goals, we must discover and interpret metonyms which statistically correlate with harder-to-discern qualities.
In economics, this is often known as the distance between private and public information. But digging into literature from sociology, statistics, game studies, artificial intelligence research, signaling theory, accounting theory, counter-insurgency, and many more fields, we discovered that such a concept had been invented many times over. The structure of each discourse’s in-house concept was roughly similar: some metonymic cue, which correlated with a “deeper,” desired quality, became fetishized by evaluators, who lost sight of its role as purely a proxy and began to treat it de-contextually and automatically, as valuable in-itself. At the same time, the evaluated players, realizing that they would be rewarded more for the appearance of these metonyms than for actually possessing the qualities such metonyms imply, began optimizing toward the metonyms themselves, re-allocating effort away from real problems and into appearances.
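This dynamic, in which evaluators select on a proxy while the evaluated reallocate effort toward it, can be illustrated with a toy simulation. Everything here (the split of effort, the noise, the correlation strength) is an invented assumption for illustration, not a model from the paper: each candidate divides one unit of effort between genuine quality and cue-polish, and the evaluator selects on the cue alone.

```python
import random
random.seed(0)  # deterministic for reproducibility

def candidate(effort_on_cue: float):
    """Split one unit of effort between real quality and cue-polish.

    The cue partly tracks quality (it is a genuine statistical proxy)
    but can also be bought directly, which is what makes it gameable.
    """
    quality = (1 - effort_on_cue) + random.gauss(0, 0.1)
    cue = 0.5 * quality + effort_on_cue + random.gauss(0, 0.1)
    return quality, cue

def select_on_cue(pool):
    """A surrogating evaluator: picks the highest cue, sight unseen."""
    return max(pool, key=lambda c: c[1])

# Before degenerate play: candidates invest mostly in quality.
honest_pool = [candidate(0.1) for _ in range(100)]
# After degenerate play: candidates optimize the metonym itself.
gamed_pool = [candidate(0.9) for _ in range(100)]

q_honest, _ = select_on_cue(honest_pool)
q_gamed, _ = select_on_cue(gamed_pool)
print(f"selected quality, honest pool: {q_honest:.2f}")
print(f"selected quality, gamed pool:  {q_gamed:.2f}")
```

The same selection rule that works well on the honest pool picks a low-quality candidate from the gamed pool: the cue’s correlation with quality collapses precisely because it is being rewarded.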
The first draft of this paper, “Surrogation,” brought us, finally, to the divide between a game’s spirit and its letter. Many incentive structures become perverse, and many laws become unjust, precisely because of how difficult it is to specify the spirit of a game—the intended ways players ought, and ought not, to behave—in concrete, literal law. In gaming communities, of both board and video games, the tendency for players to exploit the poorly written “letter” of a game is called “degenerate play.” We explore these game dynamics in our short piece “Spirit vs. Letter,” beginning with the lessons of King Midas, who underspecified his desire for gold and was cursed by this failure. The underspecification problem, of course, is among the most notorious in artificial intelligence, and addressing it will require a much deeper understanding of how formal representations of reality—for instance, language—are interpreted and “signify” in different contexts and by different players.
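The Midas example can be made concrete. In this hypothetical sketch (the objects and scoring are invented for illustration), the “letter” of the wish rewards exactly what was specified, the “spirit” encodes what was actually wanted, and the gap between the two scores is where degenerate play lives:

```python
def letter_of_the_wish(objects_touched):
    """Midas's wish as literally specified: score every golden touch."""
    return sum(1 for obj in objects_touched if obj["turns_to_gold"])

def spirit_of_the_wish(objects_touched):
    """What Midas actually wanted: wealth that leaves his life intact."""
    return sum(1 for obj in objects_touched
               if obj["turns_to_gold"] and not obj["was_needed_alive"])

# One day in the life of the cursed king.
day = [
    {"name": "rock",     "turns_to_gold": True, "was_needed_alive": False},
    {"name": "bread",    "turns_to_gold": True, "was_needed_alive": True},
    {"name": "daughter", "turns_to_gold": True, "was_needed_alive": True},
]

print(letter_of_the_wish(day))  # 3: the letter scores every touch
print(spirit_of_the_wish(day))  # 1: the spirit counts only the rock
```

A policy that maximizes the letter is indifferent between the rock and the daughter; only the unwritten `was_needed_alive` field, absent from the original specification, separates them.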
In other words, we believe that for many of our most crucial problems—from artificial intelligence to economics to law—it is language—and our failure to understand language in all its complexity—which holds us back. Linguistic meaning may seem a narrow problem at first—an issue best relegated to literary theorists, perhaps. But we believe that it is the most widespread instance of a larger class of representation problems which are fundamental to all coordination between entities.