Common knowledge and "elite theory"
1.
I have never read Hegel and that's probably for the best. J. S. Mill once wrote that he "found by actual experience of Hegel that conversancy with him tends to deprave one’s intellect." Setting aside Hegel, let's consider a more comprehensible version of historical idealism I just made up. This should be viewed as an answer to the question: if there can be said to be historical development, what is the essence of this development? How should this development be analogised? One answer might be that historical development is analogous to the development of a human body, with periods of youth, adolescence, adulthood, old age and death. Such a view would "explain" the irreversibility of historical development as deriving from the irreversibility of ageing. On this view there is nothing mental about historical development; it is a purely biophysical process. A historical idealist, by contrast, would say that historical development is analogous to the development of an argument. Just as an argument has starting premises, and these premises can be developed to derive conclusions, which supply further premises, so history develops. An argument has a flow from premises to conclusion, and though one can fail to derive the right conclusion, one cannot un-derive it after succeeding. This kind of logical irreversibility is then claimed to be at the root of historical irreversibility. You hear people say that a certain idea "has no future"—by which they mean it cannot rationally develop out of current premises, it has been "refuted" by past developments, and so can never command legitimacy. Put that way, historical idealism is to a certain extent implicit in discourse. We Are All Historical Idealists Now. On the other hand, this rational development which is supposed to unfold through history is clearly dependent on the ability of the cognitive elite of a society to pick up the intellectual threads. One could raise all sorts of other caveats about this picture (e.g. reason itself can’t derive political conclusions due to the Is-Ought gap, so biological drives are needed to bridge it).
I wrote in a previous post that this idea cuts the Gordian knot between "mistake theory" and "conflict theory": individuals may be supposed to act according to self-interest1, but intellectual work needs to be done to make sense of their interests in a political context. Thus even if people are totally self-interested, mental development still shapes events, because such self-interest is unenlightened and historical development is the process of the enlightenment of self-interest. The cognitive elite is essential in creating a cache of intellectual "subroutines" that the general public can draw on in making sense of their own interests. In another post I asked if there can be "laws" of historical development. If you subscribe to a "historical materialist" theory (a misnomer), where historical development is driven by technological development, then insofar as you can predict technological development you can predict historical development. "Laws" of historical development under historical idealism, on the other hand, are more like "laws" of economics. "Firms maximise profit" could be said to be a "law" of economics in that it correctly describes how firms behave, but it couldn't be used to make predictions without knowing exactly how the firm is supposed to maximise profit, which is not accessible in detail to the economist since it involves all sorts of particular information which it is the business of the firm to discover. This is Mises' Calculation Problem in disguise. You might be able to make such a prediction if you have a lot of detailed specific knowledge about the firm's circumstances, but you can't get this out of any general theory. Likewise, to an idealist, predicting historical development is like predicting how someone else might reason—there is no theory for how to do this; you just need to be smarter and more knowledgeable in a lot of specific ways.
By the way, this reminds me that it is often considered a "bias" (assumed to be a kind of "optimism bias") that people in a political camp overrate the ability of their group to win in politics. But is this really irrational? If you believe policy X is a disaster, it is reasonable to think that its disastrous nature should become more and more obvious and should come to dominate discourse over time. If the truth is the best predictor of other people's long-run beliefs, and your own beliefs are your best estimate of the truth, then your own beliefs are also your best prediction of what others are converging to. This is why many communists continue to believe revolution in the West will eventually happen, despite seemingly overwhelming evidence to the contrary.
Supposedly (???) Hegel thought this process of reasoning occurred in a kind of collective mind. If I had to guess why he thought this, it would be that rational historical development clearly extends beyond the lifetime of individuals, so if a rational development is taking place it must be taking place in a "mind" which continues to exist when the individual minds pass away. This can be permitted as a verbal device so long as we don't reify it, just as we use expressions like "the economy" to talk about many individual actions coordinated in a certain way. But to avoid mysticism we should disaggregate and understand how individual actions and beliefs lead to such social phenomena.
2.
There is a notorious logic puzzle called the "Blue-Eyed Islander Puzzle", known under many other names. The story goes that there is an island full of blue-eyed people (let’s call it Thule). Each inhabitant knows everyone else on the island has blue eyes, but he doesn't know his own eyes are blue, because there are no mirrors and it is taboo to discuss eye colour on the island. The custom of the island is that if anyone should come to know his own eyes are blue, he should commit suicide on the morning of the next day. One day an explorer turns up on the island and says, in front of the whole tribe, "at least one of you has blue eyes." Suppose there are 50 islanders in total. Furthermore, suppose that the customs of the tribe, and the fact that every islander reasons perfectly, are common knowledge, so that everyone knows them, everyone knows everyone knows them, and so on. The puzzle is: after the explorer's statement, what happens?
Maybe you want to go away and try to solve it yourself, but if you're happy for me to ruin it, keep reading. The essence of the puzzle is that knowledge of knowledge matters. If there are 50 islanders, each islander knows at least 49 people have blue eyes. Since, as far as he knows, he may not have blue eyes, he also knows that everyone knows at least 48 people have blue eyes. Similarly he knows that everyone knows that everyone knows that at least 47 people have blue eyes, and so on. Each time we apply "everyone knows that..." we have to subtract 1 from the number. This is because "everyone knows that..." should be read as "for every person P, P knows that..." and P cannot rule out the possibility that his own eyes are not blue. At some point the number of blue-eyed people that is the subject of such iterated knowledge reaches zero.
Recall that in game theory something is common knowledge if everyone knows it, everyone knows everyone knows it, everyone knows everyone knows everyone knows it, and so on. Thus—rather surprisingly—the fact that at least one person has blue eyes is in fact not common knowledge among the islanders.2 It is precisely this which the explorer disturbs. By stating in front of everyone that at least one person has blue eyes—a seemingly innocuous statement given that it's known by everyone—he makes it common knowledge. After all, everyone heard him say it, and everyone could see that everyone else heard it, and so on.
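For the curious, here is a minimal sketch in Python of the iterated-knowledge point, using a standard possible-worlds model. It uses a three-person version of the island (the fifty-person version would need 2^50 worlds), and the function names (accessible, knows, everyone_knows) are my own labels rather than anything standard. The point is just that, before the announcement, "at least one person has blue eyes" survives only N - 1 applications of "everyone knows that...", whereas after the announcement it survives every application, i.e. it has become common knowledge.

```python
from itertools import product

# A toy possible-worlds model of the island with 3 islanders instead of 50
# (the full version would need 2**50 worlds). All names here are illustrative.
N = 3
worlds = list(product([True, False], repeat=N))   # True = blue eyes
actual = tuple([True] * N)                        # in fact, everyone has blue eyes

def accessible(i, w, v):
    """Islander i cannot see their own eyes, so they cannot tell world w from
    world v whenever everyone else has the same eye colour in both."""
    return all(w[j] == v[j] for j in range(N) if j != i)

def knows(i, prop, candidates):
    """i knows prop at world w iff prop holds in every candidate world i can't rule out."""
    return lambda w: all(prop(v) for v in candidates if accessible(i, w, v))

def everyone_knows(prop, candidates):
    return lambda w: all(knows(i, prop, candidates)(w) for i in range(N))

def at_least_one_blue(w):
    return any(w)

for label, candidates in [("before announcement", worlds),
                          ("after announcement", [w for w in worlds if any(w)])]:
    prop = at_least_one_blue
    for level in range(1, N + 1):
        prop = everyone_knows(prop, candidates)
        print(f"{label}: everyone-knows^{level}(at least one blue) = {prop(actual)}")
```

With three islanders the output flips from True to False at the third level before the announcement, exactly as the counting argument above predicts, and stays True at every level afterwards.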
The next afternoon, when it's seen that no one has committed suicide, it becomes common knowledge that at least 2 people in the tribe have blue eyes. Another day passes without a suicide and it becomes common knowledge that at least 3 people have blue eyes. After 49 days without a suicide it becomes common knowledge that all 50 islanders have blue eyes, so each islander now knows his own eyes are blue, and on the morning of the 50th day they all commit suicide. The explorer's innocuous statement of a widely known fact has totally upset the equilibrium of the tribe.
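Here is a similar sketch of the day-by-day dynamics, again for a three-person island and again with made-up helper names; the mechanism is that each day of public silence eliminates every world in which somebody would already have worked out their own eye colour.

```python
from itertools import product

# Same toy model: 3 islanders, all of whom in fact have blue eyes.
N = 3
actual = tuple([True] * N)
# The public announcement eliminates the "nobody has blue eyes" world for everyone.
candidates = [w for w in product([True, False], repeat=N) if any(w)]

def accessible(i, w, v):
    return all(w[j] == v[j] for j in range(N) if j != i)

def knows_own_colour(i, w, cands):
    """i knows their own eye colour at w iff it is the same in every world i can't rule out."""
    possible = [v for v in cands if accessible(i, w, v)]
    return all(v[i] == possible[0][i] for v in possible)

days_of_silence = 0
while not all(knows_own_colour(i, actual, candidates) for i in range(N)):
    days_of_silence += 1
    # A day passes with no suicide. That is itself public information: it rules out
    # every world in which someone would already have known their own eye colour.
    candidates = [w for w in candidates
                  if not any(knows_own_colour(i, w, candidates) for i in range(N))]

print(f"After {days_of_silence} silent days all {N} islanders know their own eye colour;")
print(f"they act on the morning of day {days_of_silence + 1}.")
```

Run with three islanders this reports two silent days and action on day three; the same logic with fifty islanders gives forty-nine silent days and action on the fiftieth morning, though enumerating 2^50 worlds is best left as a thought experiment.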
Although the story is made implausible by the level of reasoning we credit to the islanders, it is still true that in the real world there are important differences between something being widely known, it being widely known that it is widely known, and so on, and I claim such differences are important for politics. I first had this idea after the Brexit vote. Many people were saying that politicians would simply ignore the referendum result; after all, they regularly go back on manifesto promises and other things they say in election campaigns—why would it be any different now? Something struck me as implausible about this idea. It seemed "obvious" to me that ignoring the referendum result would have a devastating effect on the legitimacy of the political system in a way that previous broken promises wouldn't. The referendum result had a kind of undeniability that other election promises—even if reported in the mainstream press—don't have. It struck me that the key point is that the referendum established a very high degree of mutual knowledge. Everyone knows the result, everyone knows everyone knows the result, and so on to a high degree, whereas other broken promises have a kind of plausible deniability: even if they are widely known, there is a lot of uncertainty about exactly how widely known they are, which frustrates coordination. (I will say more about what I mean by "coordination" later.)
A similar thing can be said about elections. Scott Alexander once had a post where he complains that people read way too much into elections, since they are decided on such narrow margins that the outcome can be changed by weather events. This is true, yet people continue to treat elections as "vindicating" certain ideas—why? My explanation is: the number of votes cast for each side might be widely known, but it is not common knowledge. What is common knowledge (or mutually known to a very high degree) is who won. You cannot count on other people knowing Trump's margin of victory in 2024, but you can count on them knowing he won, and you can count on them knowing that you know that they know it. Thus election results are a basis for coordination in a way that far less binary public opinion data isn't.
Scott Aaronson had an old blog post on the importance of common knowledge which, upon re-reading, makes many of the points I was planning to make in this post.3 He points out that in social situations one often has to delicately balance degrees of knowledge, so that you might want something to be known but not known to be known, and many “nerds” miss this insight because in science the goal is to make the truth as clear and explicit as possible. Furthermore, an authoritarian regime can keep itself in power by preventing knowledge that is widely held from becoming common knowledge. Thus it might be the case that everyone regards the existing regime as illegitimate, and even knows that everyone else regards it as illegitimate, but it remains stable because they don't know that everyone else knows that everyone else knows it. This is where events come into play. An election result, even if won by only a narrow margin, can operate like the explorer on the island saying something they all already knew.
3.
I have said that common knowledge is important for coordination, but what do I mean by coordination? Coordination in a loose sense means bringing the actions, or plans, of many different people into harmony. In particular, coordination is needed in a situation where the right action for one person depends on the actions of everyone else. Such situations arise in group projects where each person needs to perform the right part of the project, but they also arise when there is no overarching goal, such as in situations where people form plans on the basis of how they expect others to behave. An equilibrium is a situation in which all plans are consistent4, so that everyone can simultaneously carry out their plans without running into conflict with one another.
I want milk, so I plan to buy milk. For that plan to succeed, the local supermarket must not have run out of milk, which in turn requires that a large number of people, acting independently according to their own self-interest, have acted in such a way that enough milk is available at my local supermarket. Conversely, the dairy farmer wants to sell his milk, and that requires millions of consumers to organise their lives in such a way that they demand enough milk for his produce to be sold. Thus the equality of supply and demand amounts to a consistency of plans. That millions of people, each pursuing their own plan, generally do not come into conflict with each other is a remarkable accomplishment of the price system.5
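As a toy illustration of "equality of supply and demand amounts to a consistency of plans", here is a sketch with made-up linear demand and supply schedules for milk (the numbers mean nothing): at the clearing price, every litre someone plans to buy is a litre someone else plans to sell, and at any other price some plans must fail.

```python
# Hypothetical linear demand and supply schedules for milk, purely for illustration.
def demand(price):   # litres consumers plan to buy per day at this price
    return max(0.0, 1000 - 200 * price)

def supply(price):   # litres dairy farmers plan to sell per day at this price
    return max(0.0, 300 * price - 250)

# Scan a grid of prices for the one at which planned purchases and planned sales coincide.
prices = [p / 100 for p in range(1, 1001)]
clearing = min(prices, key=lambda p: abs(demand(p) - supply(p)))
print(f"Clearing price {clearing:.2f}: planned purchases {demand(clearing):.0f} litres, "
      f"planned sales {supply(clearing):.0f} litres")
```

Below the clearing price consumers plan to buy more than farmers plan to sell; above it the reverse; only at the clearing price are the two sets of plans mutually consistent.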
In politics, each person has a large number of objectives, and the set of people who share exactly your objectives is very small. To advance your views, you need to find a set of issues that lies in the intersection of your objectives and those of a sufficiently large number of other people. Unlike participants in a market, you can't act alone in politics and expect something like the price mechanism to take care of the rest. To achieve your objectives in politics (setting aside the intellectual work of promoting your views), some degree of common knowledge needs to be established among the people who share them, so that you can act in concert. Such common knowledge operates as a "Schelling point."
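To make the coordination point concrete, here is a sketch of a two-person "act together or stay home" game with hypothetical payoff numbers: acting alone is costly, acting together pays off, staying home is safe. Both "everyone acts" and "everyone stays home" turn out to be equilibria, so the right move depends entirely on what you expect the other person to do, and that is exactly the gap a commonly known event, a Schelling point, fills.

```python
from itertools import product

# A two-person coordination game with hypothetical payoffs: acting alone is costly,
# acting together pays off, and staying home is always safe.
ACTIONS = ("act", "stay")
PAYOFF = {
    ("act", "act"):   (2, 2),
    ("act", "stay"):  (-1, 0),
    ("stay", "act"):  (0, -1),
    ("stay", "stay"): (0, 0),
}

def is_equilibrium(a, b):
    """Neither player can do better by unilaterally changing their own action."""
    pa, pb = PAYOFF[(a, b)]
    return (all(PAYOFF[(alt, b)][0] <= pa for alt in ACTIONS) and
            all(PAYOFF[(a, alt)][1] <= pb for alt in ACTIONS))

for a, b in product(ACTIONS, repeat=2):
    if is_equilibrium(a, b):
        print(f"({a}, {b}) is an equilibrium")
```

Because the game has two equilibria, the theory alone does not tell the players which will obtain; a public event that everyone has seen, and has seen everyone else see, is what lets them converge on the same one.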
4.
There is a pseudish "materialistic" version of "elite theory" I often see promoted. I use "materialist" as a term of abuse, but I should say that I am an atheist and a physicalist: I think the universe is made of physical stuff and nothing else, and I am not trying to promote a spiritual worldview here. However, there is a certain confused philosophical tradition which, deriving from a set of philosophical hangups, is unwilling to treat beliefs and knowledge as genuine causal factors in society, and holds that any apparent correlation between how society is organised and people's beliefs about how it should be organised is merely after-the-fact rationalisation. This view is adopted not because there's any good evidence in its favour, but because it is believed on methodological grounds to be more "scientific." It seems to derive from a vague feeling that beliefs and knowledge aren't physical things, that they seem kind of ghostly or spiritual, and not befitting the “objective” standpoint of the scientist. On this view beliefs are a social epiphenomenon: just froth on the waves of more fundamental phenomena.
There is nothing wrong with having an elite theory of society, but it has to be done right, without committing "materialist" fallacies. The "materialist" elite theory, for example, says that political conflicts are simply power conflicts with no real ideological content at all: the apparent ideological content is just cover and doesn't really affect how things play out. To give a concrete example, this theory might present communism as representing one side in a purely racial conflict. There is a racial dimension to communism, one that gets ignored in "mainstream" discussion, but is it really appropriate to strip away all ideological content? Do you think communism could have spread in the 20th century the way it did if the ideas of communism didn't themselves have some moral and intellectual force? And what about the downfall of communism? Is it plausible to represent this purely as a content-free power conflict, and not one in which the discrediting of communist ideas played a genuine causal role?
There is, in fact, an even more fundamental confusion at work here: “power” is just as “socially constructed” and “non-physical” as beliefs. Thus the hardheaded “materialist” who proposes to disregard what people believe and instead focus on the “hard realities of power” doesn’t seem to realise the full implications of his own methodological hangups. I can't do full justice to this point here, but it is fleshed out in John Searle's book The Construction of Social Reality. The power of a sitting president, of a bureaucrat, of a CEO, of a wealthy person, derives entirely from the fact that society treats certain objects or actions as having certain socially constructed properties. And this “treating X as Y” is something that happens in the minds of the members of society. Money is a classic example: something is money because it is considered to be money. As Searle points out, this gives the lie to the slogan that "power grows out of the barrel of a gun."
To do "elite theory" correctly, all these philosophical confusions and hangups need to be cleared away. For example, a common way an "elite" stays in power is through censorship (obviously). This is because it prevents the more cognitively capable from creating and spreading the "subroutines" that allow the masses to more effectively perceive their self-interest, and it also prevents the coordination required to mount an attack on the regime. If you view belief as a mere epiphenomenon with no causal power you are forced to ignore this piece of obvious common sense. This explains the manner in which pseudish people think relaxations on internet censorship or very public blows to the legitimacy of the sitting regime make no difference because these are all "mental" events. You fool, you think any of this matters? You clearly haven’t read Theory.
This is controversial, and I have heard Bryan Caplan argue quite convincingly against it: policies that benefit narrow special interests, like farming subsidies, are primarily in place because most people think those policies are actually good, rather than because farmers lobby for them. In a sense it would be quite irrational to advocate narrowly for your interests in politics, independent of any ideals, because your advocacy is extremely unlikely to affect the political outcome but will affect how others view you.
Common knowledge is quite hard to establish in real-life situations. A real-world situation in which it is established would be something like: two people stand in front of a burning building and look into one another's eyes while the building is in view to both. Then “the building is burning” is common knowledge between the two people.
“What I have here written makes no claim to novelty in points of detail; and therefore I give no sources, because it is indifferent to me whether what I have thought has already been thought before me by another.”
Prices accomplish this only after a process of adjustment (not just any prices will do), and characterising the mechanism of this adjustment is an important problem in the foundations of economics.



