The definition of insanity is doing the same thing over and over and expecting different results.
This well-circulated quote is often attributed to Albert Einstein. I find it somewhat strange, because that is clearly not “the definition of insanity”. Insanity has multiple definitions. “Sane” and “insane” come from the Latin word for health; in the past, the word insanity was used much the way we use the word illness.
The words sane and insane were eventually used to describe mental health and mental illness, and that association became so strong that “insanity” was universally understood as shorthand for “mental insanity”. Nowadays, I don’t think much of the original meaning survives.
The modern meaning of insanity is not a scientific one. It can refer to anything that is unreasonable, outrageous, wild, or extreme in any way, although of course it retains its psychological connotations. People do still sometimes use it to literally mean a mental illness, although this is often seen as stigmatizing language.
What insanity is not is one single type of behavior. What the quote describes is what I would call a type of irrational behavior.
There does seem to be something to it, though. When people talk about certainty, they often point to patterns of nature: every human is mortal, the sun will rise tomorrow, and so on. It makes sense, therefore, that the peak of irrationality should be denial of patterned events where effects are determined by their causes.
In other words, the craziest thing a person can do is act like the universe is not inherently predictable. But is the universe inherently predictable?
Prediction
“Sane” humans, and indeed many other animals, understand full well that events occur in patterns that make them predictable. On the other hand, most humans also fully understand that it’s not possible to predict the future with absolute certainty. The universe is both predictable and unpredictable.
Many people have believed that unpredictability simply comes from incomplete knowledge, and that the universe is therefore completely predictable in principle. This is the idea behind Laplace’s Demon, a thought experiment about determinism. We imagine a supernatural entity capable of knowing the exact positions and momenta of all particles in the universe. According to classical physics, such an entity would be able to perfectly predict the future. And if there is a future to be perfectly predicted, then we know that there is only one possible future, even if we ourselves can’t predict it.
The most fundamental problem with Laplace’s Demon is, of course, the inherent unpredictability of quantum physics. The Uncertainty Principle says that it is impossible to know both a particle’s exact position and its exact momentum at the same time. It’s not clear, then, that there is one single future completely determined by the past. Quantum uncertainty is a problem for determinism generally, though there are some quite reasonable ways of rescuing it.
Determinism is not my focus here, though; predictability is. Quantum physics itself establishes that the effects of uncertainty are negligible at statistical scales: it can be shown mathematically that quantum equations reduce to Newtonian physics as the number of particles becomes very large. We know from everyday experience that macroscopic physical events like bouncing a ball are extremely predictable, even if the interactions of the individual quantum particles within the ball are not.
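The statistical intuition here can be sketched with a toy simulation (this is an illustration of averaging, not real physics): each “particle” contributes a small random displacement, and the relative fluctuation of the total shrinks roughly like one over the square root of the particle count. Unpredictable parts, predictable whole.

```python
import random

random.seed(0)

# Toy model: each "particle" contributes a random displacement in [-1, 1].
# The *relative* fluctuation of the total (per particle) shrinks roughly
# like 1/sqrt(N), which is why aggregates behave predictably even when
# their components don't.
def relative_fluctuation(n_particles, trials=100):
    mean_abs_total = sum(
        abs(sum(random.uniform(-1, 1) for _ in range(n_particles)))
        for _ in range(trials)
    ) / trials
    return mean_abs_total / n_particles  # fluctuation per particle

for n in (10, 100, 10000):
    print(n, relative_fluctuation(n))
```

With ten “particles” the per-particle fluctuation is substantial; with ten thousand it is tiny, which is the spirit of the quantum-to-classical transition described above.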
All this is to say that the universe is not as straightforwardly predictable as some had hoped, but it is still extremely predictable. One can just gesture vaguely at all of technology and science to see that this is the case.
Induction
Induction or inductive reasoning is usually contrasted with deduction. Deduction is what we think of as the logic of syllogisms. If I have some premises and an applicable rule of inference, I can deduce a conclusion.
If Socrates is a man and all men are mortal, then Socrates is mortal. The conclusion is logically necessary (it couldn’t possibly be false) if the premises are true. Observe that this reasoning applies a general rule to a specific case.
Induction, on the other hand, extrapolates a pattern based on a limited number of examples. The most basic way that induction operates is that, if something happens repeatedly, we expect it to happen again. This is the kind of “reasoning” a dog uses to know when her person will be home, and indeed that her person will keep returning home day after day.
It’s not always reliable, though. If I slept through my alarm, then spilled my coffee, then stepped in a puddle on my way to work, I’m liable to think I’m unlucky today. I might even think something caused me to have bad luck.
This is what’s called magical thinking: contriving an explanation that connects events in a desired way. Magical thinking is often used by scientifically ignorant people to make sense of the world around them (hence why ancient people, who knew very little compared to us, were very prone to magical thinking). It’s not entirely a bad thing. The juggernaut that is modern science necessarily grew out of such simple pattern recognition.
Empirically, luck as some kind of force that influences the outcomes of events in any predictable way is totally implausible. To the extent that people have made falsifiable claims about charms and talismans bringing luck, those claims have been falsified. Statistics also makes it relatively easy to see that coincidences are far more likely than they appear. Moreover, known cognitive biases (such as confirmation bias) can easily explain people’s ardent belief in the efficacy of lucky objects or rituals.
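The coincidence point is easy to make concrete with a quick simulation. The numbers here are invented assumptions for illustration: suppose three independent minor mishaps each strike on one morning in ten. Any single all-three-at-once “unlucky day” is a one-in-a-thousand event, yet over a year of mornings such a day is quite likely.

```python
import random

random.seed(1)

# Assumed (illustrative) odds: 3 independent mishaps, each with a 1-in-10
# chance on any given day. How often does a 365-day year contain at least
# one day where all three happen at once?
def year_has_triple_mishap(p=0.1, mishaps=3, days=365):
    return any(
        all(random.random() < p for _ in range(mishaps))
        for _ in range(days)
    )

trials = 10000
hits = sum(year_has_triple_mishap() for _ in range(trials))
print(f"~{hits / trials:.0%} of simulated years contain a triple-mishap day")
```

Analytically, the chance is 1 − 0.999³⁶⁵ ≈ 31%: roughly one year in three contains a morning that feels cursed, with no curse required.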
Humans and other animals’ basic pattern recognition is an evolved adaptation, and evolution always goes with “good enough” rather than perfect. We know for a fact that human brains are overfit for pattern recognition, seeing patterns where there aren’t any (e.g. pareidolia). With the power of language and culture, we’ve been able to learn more clearly which patterns are genuine and which are imagined.
One part of that has been a more logical, formal approach to inductive reasoning. Unlike deduction, induction is not usually interpreted as establishing that something is true, but rather that it is likely.
Corroboration and falsification
Philosopher Karl Popper pioneered our modern understanding of what scientific claims are saying. In a specific prediction, like “the moon will be full next Thursday”, we can directly confirm or falsify the claim by observing the moon next Thursday.
For a hypothesis like “the time between two full moons is approximately 29.5 days”, things work differently. The claim can be directly falsified by, for example, observing two full moons a week apart. However, it can never be confirmed.
This is because it makes a prediction about the future indefinitely. Each individual observation of a full moon on the correct day corroborates the hypothesis and builds our confidence in its validity, but there’s never a point at which we’ve observed it enough times to call it confirmed indefinitely into the future. We can only ever fail to falsify a scientific hypothesis.
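This asymmetry between falsification and corroboration is simple enough to put in code form (a toy sketch, with a tolerance value chosen arbitrarily for illustration):

```python
# Toy illustration of Popper's asymmetry: one bad observation falsifies the
# "~29.5 day lunar cycle" hypothesis, but any number of good observations
# only ever corroborates it.
def check_hypothesis(intervals_days, expected=29.5, tolerance=1.0):
    for interval in intervals_days:
        if abs(interval - expected) > tolerance:
            return "falsified"        # a single decisive counterexample
    return "corroborated so far"      # never "confirmed", however long the list

print(check_hypothesis([29.3, 29.6, 29.5]))
print(check_hypothesis([29.4, 7.0]))
```

Note that the function can return “falsified” after one observation, but no input, however long, can make it return anything stronger than “corroborated so far”.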
For Popper, this was the most important distinguishing characteristic of science: that any claim made must be empirically falsifiable. This means that, if the claim is false, then there is something we could measure that would indicate it is false. In particular, unfalsifiable claims are not scientific and cannot be investigated by science at all. For example, astrological horoscopes are virtually always unfalsifiable due to their vagueness. Belief in a god, in general, is not falsifiable.
Popper was concerned about pseudoscience, but not all unfalsifiable claims should be painted with the same brush. For example, my belief that I am a real human and not in a simulation is unfalsifiable. It’s true that I’m not certain of this belief, but I have a high level of confidence that I’m correct just based on what my experience of living is like.
Most people have a belief in some kind of god. While I personally disagree, my general claim of atheism is equally unfalsifiable. The question is outside the domain of science.
Induction and the scientific method
So, the point is that modern science is based on an epistemology of uncertainty and degrees of confidence. Returning to the full moon example, we’re in a very different place than someone simply noting when the moon is full. The motions of the earth, moon, and sun are understood in great detail. We don’t need induction to say the full moon will occur once per lunar month; we can deduce it as a consequence of what we know about the moon and gravity and so on.
That’s good, right? No more induction? Unfortunately, our sophisticated modern theories have only pushed the problem back. You can imagine if, like a 5-year-old, we were to continue asking “why” over and over.
We can answer the question of why the moon appears full every 29.5 days on average by explaining the law of universal gravitation and orbital mechanics and so on. If we ask why those equations and explanations work, we push the problem back to general relativity. Mass attracts mass because of the warping of spacetime.
We eventually get to a point where we simply don’t know in any more detail why the universe is the way it is. This is also where the inductive step occurs.
The most general, foundational principle for all of science is the uniformity of nature. Specifically, this states that the universe is ordered and not just chaos, that the future will in some way resemble the past, and ultimately that the project of understanding nature is possible at all.
This seems obviously true, but how do we know? The best argument we can make that the future will resemble the past is by saying, “In the past, the future did resemble the past, which we know because that future is now past and we observed that it resembled the further past.” However, in order to infer from this that the future will resemble the past, we have to have already established that the future will resemble the past with respect to the future resembling the past.
In other words, there is no way to establish that the uniformity of nature is valid without appealing to the uniformity of nature. It’s a bootstrap problem. From the standpoint of analytic epistemology, this is devastating. Not only does it prevent certainty about the future, it also makes any confidence in a prediction unfounded. Logically, we have no reason to think any prediction is more likely than any other outcome, including the universe suddenly devolving into chaos.
Despite how much of a “problem” this is, virtually no one is seriously concerned about whether nature has this uniformity or not. The uniformity of nature is itself an axiomatic starting point for most people. It doesn’t even really make sense to ask how we know it’s true. And, outside of analytic philosophy, many worldviews are not concerned at all with this type of logical problem.
Causality
Philosopher David Hume is often credited with introducing the problem of induction to Western philosophy. His original approach to the problem was different from how we tend to talk about it now. These days it’s usually framed in terms of the future resembling the past, but Hume described it as a problem with causality.
When it comes to predicting the future, it’s not just the fact that patterns occur that gives us confidence. For a great many things, we would say that we can explain how and why a prediction will come true. This comes from understanding events as being causally related.
For example, a glass does not shatter coincidentally upon hitting the floor. We understand that the glass falling caused a collision which caused the glass to shatter. Do we know that?
Hume wanted to use empirical observation to establish knowledge. Empirically, all we can ever observe is correlation, and correlation does not imply causation. This is the same problem of induction as before, but framed in a different way. The general idea is that we can only ever observe instances, never the underlying pattern itself.
Is it insanity?
So, the quote reflects the fact that inductive reasoning is the cornerstone of humans’ understanding of the universe. To suppose that it would simply stop working is the height of irrationality because it discards the one thing that makes any understanding of the world possible.
Simultaneously, the future is inherently unpredictable. There’s no way to have absolute certainty that any specific prediction will come true. The quote is also wrong because there are many experiments that can be repeated and have different, unpredictable results.
Aside from quantum uncertainty, chaotic systems exhibit deterministic unpredictability through extreme sensitivity to initial conditions. That is to say, no action can be repeated perfectly atom-for-atom, and in a chaotic system that tiny variation from trial to trial significantly changes the outcome.
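The logistic map is the textbook example of such sensitivity, and it takes only a few lines to demonstrate: two trajectories that start a billionth apart end up nowhere near each other, even though every step is perfectly deterministic.

```python
# The logistic map x -> r*x*(1-x) is chaotic at r = 4: fully deterministic,
# yet extremely sensitive to initial conditions.
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

orbit_a = logistic_orbit(0.2)
orbit_b = logistic_orbit(0.2 + 1e-9)  # a one-in-a-billion nudge

# Early on the orbits are indistinguishable; within ~30 steps the initial
# discrepancy has been amplified to the full size of the system.
max_gap = max(abs(a - b) for a, b in zip(orbit_a, orbit_b))
print(max_gap)
```

The discrepancy roughly doubles each iteration, so after fifty steps the billionth-sized nudge has long since swamped any hope of prediction — exactly the “repeat the action, get a different result” situation.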
There is also, of course, a more practical problem: even repeating an action closely is difficult, let alone perfectly. In science, a great deal of time and energy is often spent ensuring an experiment can be repeated closely. In everyday life, many things are likely to have different outcomes.
For example, if I toss a crumpled paper towards a recycling bin, it’s likely to land in a different spot each time I throw it. If I missed repeatedly, would it be insane for me to keep trying and expect a different result? I think the opposite is true.
As with virtually everything, nuance complicates matters. It would be rational to keep trying to land the paper in the bin, whereas it would not be rational to keep trying to open a locked door.
But what if there’s a door that often gets stuck, and you’re not sure if it’s locked or not? I think you’d be rational in trying repeatedly to open it for a certain amount of time, after which point it would be more rational to conclude that it is likely locked and stop trying to open it. You could imagine different scenarios where you have different levels of knowledge and confidence about the status of the door.
What if you’re not sure it gets stuck, but it looks like it probably does? What if you’re aware that it might be unlocked but you’re not physically able to force it open? What if the door opens out instead of in, and it doesn’t occur to you? What if people have told you conflicting information about the door? What if you trust those people different amounts, or they themselves expressed different levels of confidence? In each situation, exactly how long is it rational to continue trying to open the door?
Clearly, there is no single correct answer. There is no universal standard for when trying something repeatedly is rational or irrational. It’s more a matter of human judgment. What’s the point of the quote, then? Why does it get spread around?
It’s insanity
I see the quote used on social media in response to situations where a person is irrationally trying the same thing over and over. Like many popular idioms and aphorisms, it’s stating something very general which is not actually universal. Different sayings used by the same person can outright contradict one another, and it’s not really a problem. These things are used contextually to reflect a recognized pattern, not state a rule that always applies. Despite being called a “definition”, the quote is somewhat figurative.
It seems like it’s having a moment on the internet right now. That suggests people might be unusually keenly aware of this type of irrationality. It’s essentially the same as failing to learn from history, which many people seem to be doing these days. It reflects an insistence that reality conform to one’s expectations and not the other way around.
This is the same irrationality as the creationist, the anti-vaxxer, the flat earther, the climate change denier, the faith healer, the race realist, and so on. It’s ultimately a failure to recognize genuine patterns in nature, even when presented with evidence. That is insane.
Photo by Steve Johnson.
