Ayn Rand was wrong. Of course, it’s difficult to summarize one’s thinking in a tweet limited to 140 characters; my friend Lisa Plaxco asked me to elaborate, so I’ll attempt to do so in this blog post. Let me start with where I agree with Rand. I agree that blind faith, or believing things without reason, is both unnecessary and in many ways problematic.
But there are so many points at which she goes wrong. Perhaps one of the most fundamental has to do with her conflation of propositional logic and ordinary meaning. As Einstein famously said, “As far as the laws of mathematics refer to reality, they are not certain; as far as they are certain, they do not refer to reality.” Einstein understood the fundamental issue here, which is that to conflate a description of reality with reality itself is to make a very elementary mistake.
Ordinary concepts and propositions simply do not map into the Objectivist conceptual structure: for example, take the assertion that all propositions are either true or false. I remember bringing this up with a classmate of mine in my freshman year at Harvard… he was a physics major and a big fan of Rand. Very smart guy, so I decided to explore this issue with him, because I thought he could appreciate the argument.
The basic argument I made was this: if all propositions are either true or false, then take a simple proposition such as “the apple is red.” Is it true, or false? What if you had an apple and you slowly, ever so slowly, changed it from red to orange? At what point would the proposition cease to be true, and become false?
Or let’s take a statement such as “It is hot today.” Is it true, or false? If you slowly lower the temperature, half a degree at a time, when does it magically transform from being true to false? Could it be that this statement varies in truth value depending on who is saying it? Could it be that statements can be approximately true, or somewhat true and somewhat false, or true depending on context, depending on who is speaking, who is listening?
Yes, obviously. It’s amazingly, painfully clear that ordinary use of language and concepts can’t possibly map neatly onto propositional, mathematical logic. Propositional logic is an idealized system; it doesn’t relate directly to the way we think or communicate in everyday terms.
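To make the point concrete, here is a minimal sketch (in Python) of the sort of graded truth value that fuzzy logic assigns to a statement like “it is hot today.” The particular temperature thresholds are invented for illustration; the point is only that the truth of the statement shades continuously rather than flipping at one magic degree.

```python
# A toy sketch of a graded ("fuzzy") truth value for "it is hot today".
# The thresholds (70F and 95F) are arbitrary choices for illustration only;
# the point is that truth can vary continuously between 0.0 and 1.0.

def hotness(temp_f: float) -> float:
    """Degree to which 'it is hot today' holds, as a number in [0, 1]."""
    if temp_f <= 70:
        return 0.0
    if temp_f >= 95:
        return 1.0
    # Linear ramp between "clearly not hot" and "clearly hot".
    return (temp_f - 70) / (95 - 70)

for t in (65, 75, 85, 95):
    print(f"{t}F -> 'it is hot today' is true to degree {hotness(t):.2f}")
```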
I made this argument and my classmate indicated that my arguments seemed quite intriguing, though I wasn’t sure if he was convinced or not. Interestingly, four years later I ran into him in Palo Alto — he was going to grad school at Stanford. He greeted me very warmly, and we got into a conversation about logic and AI. It turned out that he had graduated from Harvard summa cum laude after writing his thesis: on fuzzy logic! I was glad he had found my argument at least somewhat persuasive.
Of course, interestingly, we went on to discuss AI some more, and I was telling him that I thought neural network models were quite promising. He thought that was interesting, but he challenged me: doesn’t it seem as though our thought process is inherently serial, not massively parallel? So I said, okay, but right now, you’re seeing, you’re hearing, you’re feeling the wind on your skin, you’re talking, all at once. Isn’t that obviously massively parallel? He had to agree.
The interesting thing about all this is that we have a cognitive habit which is deeply ingrained: we create internal models which simplify things in some way, which make it easier to think about things conceptually, and then we make the fundamental error of conflating the map with the territory. Of course, any map must necessarily be a simplification of reality in order for it to be useful; but it is inherently a simplification, that is to say, it must leave out massive amounts of detail. As Borges put it in “On Exactitude in Science”:
In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.
Leaving out details is of course necessary for information processing; but it is incredibly sloppy and rationally inexcusable to forget that this is what any conceptual system does.
The impetus for my thinking about this was a conversation I was having with Magda today. The problem with Rand’s worldview is that it appears to be clear and precise; in fact, it is overly simplistic. I was telling Magda that I remember a snarky remark Brian Cantwell Smith once made (someone with whom I’m personally acquainted, who wrote the excellent book On the Origin of Objects, a very straightforward and entertaining read that essentially explains how the idea of “objects” can arise from a sort of basic physics-inspired view of reality): he joked that reality would be a lot easier to model if we had the Army Corps of Engineers reshape the land into exact 1000-foot-high contour increments. To pretend that your model maps reality exactly is to be incredibly imprecise, which is to say, sloppy. A model of the world built out of 1000-foot blocks would be apparently clear and exact, but it would be horribly imprecise.
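Just to illustrate how a crisp-looking model can be wildly imprecise, here is a toy sketch of that 1000-foot-block idea. The elevations are made up, and nothing here is meant as more than an illustration of how quantizing away detail produces a model that looks exact while being badly wrong.

```python
# A toy illustration of the "1000-foot blocks" model: snap real elevations
# to the nearest 1000-foot contour. The model looks crisp and exact, but it
# silently introduces large errors. Elevations below are invented.

def blocky_model(elevation_ft: float, block_ft: float = 1000.0) -> float:
    """Round an elevation to the nearest multiple of block_ft."""
    return round(elevation_ft / block_ft) * block_ft

for real in (12.0, 480.0, 640.0, 1499.0):
    model = blocky_model(real)
    print(f"real {real:7.1f} ft -> model {model:7.1f} ft (error {model - real:+.1f} ft)")
```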
I’ll end by discussing one of the other errors Rand makes in her thinking: her conception of self-interest as being the highest good. The virtue of this idea is that it encourages decentralized, distributed system function. However, the problem with this idea is that it conflates a model of reality (a model in which we divide the world into entities we call individuals) with reality itself, which is far more intricate and complex. Of course, she’s right that some degree of self-interest is important: but in a more general sense, self-interest is local optimization. Doing what makes sense for yourself as an individual means working with a criterion of local optimization as your goal: improving things in your immediate surroundings (e.g., your “self”).
The problem with this is obvious if you again bring in more of what is actually part of the world, what we already understand about the world in ordinary terms. Take, for example, fish in a lake. Suppose you had four fisheries all fishing in the same lake. Now, as we all know, if you overfish the lake, then you will eventually cause the extinction of fish in the lake. Thus it is clearly in the long-term interest of the fisheries to carefully manage the resource of the fish in the lake, to prevent the catastrophe of destroying the fishing industry.
Suppose, however, that three of the four fisheries agreed to voluntarily limit their catch, but one did not. The one that did not simply went out and caught as many fish as possible, which certainly would work to their short-term economic advantage. This fishery would make much more money than the others, and thus be far more successful in the marketplace. Yet eventually the behavior of this fourth fishery would destroy the livelihoods of all the fisheries.
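Here is a toy simulation of that story. All the numbers (starting stock, growth rate, catch sizes) are invented purely for illustration; the point is only the qualitative dynamic: three restrained fisheries plus one defector can collapse the shared stock, wiping out everyone’s long-term catch.

```python
# A toy simulation of the four-fisheries story. Every parameter is made up
# for illustration: three fisheries limit their catch, one may not, and the
# shared fish stock either sustains itself or collapses for everyone.

def simulate(defector_catch: float, years: int = 30) -> float:
    stock = 1000.0           # fish in the lake at the start
    restrained_catch = 40.0  # annual catch per restrained fishery
    growth_rate = 0.25       # fraction by which the remaining stock regrows each year
    for _ in range(years):
        demand = 3 * restrained_catch + defector_catch
        harvest = min(demand, stock)
        stock = (stock - harvest) * (1 + growth_rate)
        if stock <= 1.0:
            return 0.0       # the fishery has collapsed
    return stock

print("all four restrained:", simulate(defector_catch=40.0))
print("one fishery defects:", simulate(defector_catch=200.0))
```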
The reason maximization of local (individual) self-interest is not a sufficient criterion for making decisions is that it is not stable (in a mathematical sense), in general, over the long term. Objectivism and Randian philosophy (I hesitate to even use the word “philosophy” in conjunction with Rand, as her system is so riddled with these sorts of massive errors that it hardly qualifies as philosophy at all; it’s more like confused adolescent fantasy. To people untrained in real philosophy it might seem “philosophical”, but compared to any serious philosophy it’s really a joke, for the reasons I’ve stated here and many more) lead to a reification of the principle of local optimization in space and time. But it’s well-known that purely local algorithms for optimization are prone to getting stuck in local optima which can be quite far from the global optimum. Add in the factor of time, and you can get a system which optimizes itself locally into a disaster such as the BP oil spill.
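For anyone who hasn’t seen this phenomenon, here is a minimal sketch of what “getting stuck in a local optimum” means; the objective function and starting points are invented for illustration. A greedy hill-climber that only ever looks at its immediate neighborhood happily stops on a small hill even when a much higher one is nearby.

```python
import math

# Two hills: a low one centered at x = 1, a much higher one at x = 5.
# The function is made up purely to illustrate local vs. global optima.
def f(x: float) -> float:
    return 2.0 * math.exp(-(x - 1) ** 2) + 10.0 * math.exp(-(x - 5) ** 2)

def hill_climb(x: float, step: float = 0.1, iters: int = 1000) -> float:
    """Greedy local search: move only if an adjacent point is strictly better."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break
        x = best
    return x

for start in (0.0, 4.0):
    end = hill_climb(start)
    print(f"start at x={start:.1f} -> stops at x={end:.2f}, value {f(end):.2f}")
```

Starting near the small hill, the climber settles there and never discovers the far higher peak; that is exactly the failure mode of a purely local decision rule.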
Now, of course, in theory a truly rational actor would take long-term considerations into account in everything they do. The problem with Ayn Rand’s approach is that it relies simply on the hope that every individual actor will, on their own, take long-term and large-scale considerations into account in every one of their decisions. But that’s simply not possible: not everyone has the time, the expertise, or the information and skill to do so. Furthermore, the subset of people, companies, or groups that do attempt to take longer-term considerations into account can be trampled in the short run by unscrupulous competitors who fail to do so. Thus, operating a society based only on short-term and local optimization rules leads to systemic instability, because it virtually forces all actors to behave in a way which maximizes short-term gain at the expense of long-term stability. Of course, this is not just a theoretical, philosophical concern: it’s what just happened with the financial meltdown.