Category Archives: Nonfiction

Hume and the problem of induction

In 2014, Japanese scientist Haruko Obokata announced a startling discovery in one of the world’s most prestigious scientific journals, Nature. She had found that she could create pluripotent stem cells out of ordinary cells using certain types of stress, like acid or even physical trauma. Pluripotent stem cells have the ability to become any type of cell, and scientists hope that, in the future, injuries of any sort will be healed with the simple application of some of these stem cells.

Although we already knew how to create pluripotent stem cells before Obokata’s discovery, the existing methods were difficult, expensive, and, in the case of embryonic stem cells, ethically fraught. Obokata’s method was easy, cheap, and completely ethical. All you’d need was some cells and some acid. If it proved replicable, doctors of the future would be able to provide their patients with stem cells on demand, and medicine as we know it would be forever changed. You might even say it would undergo a paradigm shift.

You might be wondering at this point why Haruko Obokata isn’t a household name. Or, if you’re more familiar with science news, you might be shaking your head already. The short answer is that she faked her results. She falsified data and then tried to cover her tracks. Her story, so briefly a triumph, ended up a disgrace. Her PhD was revoked, one of her advisors committed suicide, and her research institute, RIKEN, was roundly criticized for allowing such a scientific travesty to take place.

This isn’t a morality tale, however. I’m not concerned with Haruko Obokata, or her advisors. They knew what they were doing, and their failings were ethical ones. I’m concerned with Nature and its editors. The editors of Nature had no idea that Obokata faked data. All they knew was what they were presented with: evidence seeming to show that stress caused ordinary cells to become pluripotent stem cells. Should they have known better?

As you might imagine, there’s a philosopher who had a lot to say about this issue. His name was David Hume, and he was one of the foremost members of the Scottish Enlightenment, an 18th-century movement which counted Adam Smith and Robert Burns among its members. Hume was concerned with many things, but he’s perhaps most famous for his analysis of causation.

Aristotle believed that he could prove something caused something else by setting up an elaborate logical system. For example, in his logical system, rocks fell to Earth because they naturally belonged on Earth. Their “belongingness” caused their falling. Bacon disagreed, and insisted that we prove causation by careful observation of the real world. Bacon would hope to discover why rocks fell by observing enough instances of rocks falling, as well as instances of rocks not falling, and then carefully coming up with a theory about when rocks would fall and not fall.

Hume took a different approach. First, he disagreed with Aristotle entirely. In fact, he claimed that there was no such thing as a logical system independent of the real world, because all of our ideas come from our interactions with the real world in some way or another. This is a pretty major claim in philosophy, but we’ll leave it for now. More importantly for our purposes, Hume said that observation is the only way you can even hope to learn whether something causes something else, and yet even observation will never let you actually know. In other words, no matter how many rocks Bacon watched, Bacon would never truly know what caused a rock to fall.

This is a serious claim. Someone like Bacon, had he been alive at the same time as Hume, probably would have been furious. But let’s look at Hume’s argument first. Hume tells us that what we consider causation is really just two events happening at the same time, or one after the other. So, for instance, if you take a sip of hot tea and burn your tongue, you think, “The hot tea burned my tongue!” Hume says, “No, you took a sip of hot tea and you burnt your tongue. I don’t see why one has to have caused the other.”

The obvious response to this is that every time you’ve taken a sip of burning hot tea in the past, you’ve burnt your tongue. And so you think that it makes logical sense to say that hot tea burns your tongue. But Hume still disagrees. He sees no reason why you should be allowed to assume anything from the past. Hume wants a reason why hot tea must necessarily burn your tongue, not merely the observation that the two have gone together in the past.

Well, the final answer has to be that you’ve always made connections based on what’s happened in the past, right? That’s how you learned everything, and it’s worked so far. You learned that hot tea burns your tongue by doing it as a little kid. You learned how to use pencils by playing with them when you were little, making doodles on paper. Almost all of your knowledge is composed of these associations you’ve made: you do something, and something else happens.

In other words, according to Hume, your justification for reasoning from the past is, in the end, that reasoning from the past has worked in the past. You think you can predict that the next time you sip hot tea, you will burn your tongue, because reasoning this way is how you’ve learned every lesson, from hot tea to holding a marshmallow too close to the fire when you were little. In philosophical terms, your only proof of induction is induction.
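To make the circularity concrete, here’s a minimal sketch in Python of enumerative induction as Hume describes it: a predictor that knows nothing except counts of past conjunctions. The observation data and the function name are my own invention, purely for illustration.

```python
# A toy model of inductive inference: predict an effect purely from
# how often it has followed a cause in the past. All data invented.

def inductive_prediction(cause, history):
    """Return the effect most often paired with `cause` in `history`."""
    outcomes = [effect for c, effect in history if c == cause]
    if not outcomes:
        return None  # no past observations, no prediction at all
    return max(set(outcomes), key=outcomes.count)

# A hundred past sips, every one of them followed by a burnt tongue.
observations = [("sip hot tea", "burnt tongue")] * 100

print(inductive_prediction("sip hot tea", observations))  # burnt tongue

# Hume's point: nothing in this function *guarantees* the 101st sip.
# Its only credential is its track record on past data, which is
# itself an inductive argument: induction vouching for induction.
```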

The implications of this are quite serious. In Cartesian terms, science is built on a shaky foundation, and our impressions of causation are really just associations. Hume even tells us that before we protest that our knowledge is different, we have to consider how sure others have been of false causes, like the tribes who believed their dances caused the rain. Despite all of our advances, there’s no way of proving we’re not just as mistaken as those tribes, about any of our scientific ideas.

Our answer to this problem, according to Hume, is to treat all philosophy as entertainment and to stop teaching it in schools. It’s hard to tell if he’s serious, but it certainly seems somewhat appealing. Unfortunately, we cannot take that option. We have to proceed onwards, always aware of how shaky our foundations are, putting one foot carefully in front of the other on the rickety, occasionally rotten bridge of induction.

Now, back to the editors of Nature, and what they should have done. In my opinion, they should have recognized that, by the standards of induction, Obokata’s claim was extraordinary. Putting cells through stress, whether physical trauma or acid, had to have been something other scientists had done before, and none of them had noticed any pluripotent stem cells. The pattern of association, in other words, was not there. Furthermore, her graphs and figures were unusual for the data, raising further red flags about how carefully she analyzed her data, and how open she was to the possibility that her extraordinary claim was incorrect.

Induction is never totally certain in our world. It merely exists on a scale of certainty, as Hume showed. What is certain is that extraordinary claims require extraordinary evidence to be even tentatively believed, and Obokata made an extraordinary claim with poor evidence. Her claim was so extraordinary that it was in fact more tempting to believe than an ordinary one, a phenomenon Hume was well aware of. Nevertheless, the editors of Nature should have been on their guard, and realized that Obokata, even if she hadn’t faked her data, should never have claimed that stress caused cells to revert to pluripotent stem cells. Alas, they didn’t, and the scientific world was worse off for it.

There’s a follow-up question that Hume never asked, though. You might wonder why, if causation can never be proven, humans believe in it in the first place. In other words, what causes the belief in causation? Hume would say that asking the question proves we didn’t really understand his critique, but I’d beg to differ. And so would Immanuel Kant, whom we’ll cover next time.

Descartes and radical skepticism

I always know when I’m about to get sick because I have very vivid dreams. They’re often frighteningly real, and I’ll wake up and not be sure of whether I’m still dreaming or not. Once or twice I’ve even had sleep paralysis, and I’ve “woken up”, only to find that I can’t move, and that strange things are happening inside my room. I still remember one night where I dreamt that I was trapped inside my bed, and that it was slowly heating up. I had a nightmare that I’d be baked alive, and I was really shaken by it the morning after.

If you’re like me, you probably would do your best to forget a nightmare like that. However, if you’re more of a philosopher, it’d make you think. You might start thinking about how vivid the dream seemed, and how you were unable to tell it from reality. Then, a realization: how do you know that you’re not dreaming right now? Isn’t it possible that this is just a long-lasting dream, one which you’ll wake up from eventually?

This is where Rene Descartes, the 17th-century French philosopher, started. Except he took it in an interesting direction. From this point, Descartes asked what our basis is for knowledge in the first place. Remember, at this point we’ve covered two broad categories of knowledge: Aristotle’s syllogisms, which are a form of deduction, and Bacon’s reasoning from careful observations, which is a form of induction. Now, Bacon knew that induction could be a difficult process, and Aristotle was wary of improper syllogisms, but both thought the foundations of these processes were sound. Descartes disagreed.

Remembering how realistic his dreams were, Descartes proposed a thought experiment. What if a demon (or, in Descartes’s words, an “evil genius”), made it his life’s mission to mess with Descartes? First of all, he’d obviously start with hallucinations. He’d make Descartes constantly see things as he did in dreams. Descartes would see people and things that aren’t there, and never be sure if what was in front of his face was really in front of his face.

This would knock out any hope we had of performing Bacon’s new method of science. If we can’t rely on what we observe, then we’d have no hope of using what we observe to form conclusions about the world. But Descartes isn’t finished yet. He asks what would happen if this demon made him perform deduction incorrectly as well. For instance, what if every time Descartes tries to add 3 and 5, the demon interferes with his thoughts, and makes him answer 9?

Someone like Aristotle would, of course, say this is preposterous. Everyone has had vivid dreams, but the great Aristotle would never perform simple math incorrectly, no matter how powerful the demon. But Descartes was a great mathematician, far more accomplished than Aristotle, and he admitted that he made careless math errors. If Descartes could make a careless arithmetic error when he was tired, then it’s possible he could have been making errors all the time, in the same way he could be constantly dreaming.

So the demon could destroy Descartes’s ability to perform deduction as well. Now Descartes is in trouble. He couldn’t rely on his ability to reason, because the demon could mess with that. He couldn’t rely on his ability to observe and assume, because the demon could mess with that. We could even say (although Descartes didn’t) that he couldn’t assume that demons don’t actually exist, because that would rely on logic and our experience, both things that a demon could attack. Descartes, for his part, is saved only by his faith in God, and his belief that God is too good to allow something like this to happen. But still, this is a lonely and frightening path that Descartes has walked down.

Then Descartes has a revelation, likely the most famous one in all of philosophy. Cogito ergo sum. I think, therefore I am. Every time he thinks that he exists, he has to exist. After all, even if a demon has fooled him about everything else, the demon has to fool him. That is, there has to be someone for the demon to trick, and therefore, Descartes knows he exists.

Descartes goes on after this, but I want to stop with his thought process here. The question is what we can learn from Descartes. This is undoubtedly an impressive piece of philosophy, and there’s a reason why Descartes is so famous. But it seems a bit pointless to discuss if there’s nothing to learn from it, and we can only admire it without ever hoping to emulate it.

Well, the good news, for you and for my essay series, is that there is something to learn from it. I want to focus on Descartes’s radical skepticism. Descartes slowly and systematically digs out all of his knowledge, including the roots. While there were skeptics before Descartes, nobody before Descartes had dared to question their knowledge to such an extent.

It is rare in this world that it is necessary to question knowledge like this. After all, normally we can assume that what we’re certain of is true. It might not be. It’s possible we’re living in the Matrix, and the walls that seem so solid to us are really just cleverly constructed pieces of computer code. Still, no matter the true nature of the walls in whatever room you’re in, I would not recommend attempting to walk through them. You’re liable to give yourself quite a headache.

However, there are occasions in which it is necessary to question our fundamental assumptions. In the words of historian of science and philosopher Thomas Kuhn, these are paradigm shifts. Paradigms aren’t just opinions, facts, or theories, but ways of looking at the world and structuring our thoughts. A paradigm shift, therefore, is an entirely new way of looking at a field, with new concepts, terms, and models.

Kuhn’s most famous example of a paradigm shift was the shift from the Earth being the center of the Solar System to the Sun being the center. By the 16th century, an elaborate system of theories had been built up to support the Earth-centered view, a paradigm known as the Ptolemaic system. Named after the 2nd-century Greco-Egyptian astronomer Ptolemy, it proposed that all the stars and planets revolved around the Earth. This system seemed good, but astronomers noticed that the planets often seemed to stop in their paths, or even move in reverse. Their explanation was that planets would occasionally move in tiny orbits within their big orbits, which they called epicycles.

These epicycles were complicated, and had to be constantly modified to support new observations. To our mind, they seem ridiculous. However, no astronomer was willing to question the fundamental assumption that the Earth was the center of the Solar System, so no better system was developed. It fell to two men to question this fundamental assumption: Copernicus and Galileo.

Copernicus was the first. He proposed that the Earth revolved around the Sun, rather than vice versa. However, he didn’t go far enough. Copernicus still believed in epicycles and in the physics that underpinned the old model, Aristotelian physics (whose problems we’ve discussed before). These ideas supported geocentrism (an Earth-centered system), not heliocentrism (a Sun-centered one). His intuition was correct, but he didn’t have the tools to defend it. So his idea was set aside by his fellow astronomers.

Galileo, on the other hand, was willing to dig up his knowledge by the roots, much like Descartes did about 40 years after. Galileo asked the unthinkable: what if astronomers weren’t just wrong about the structure of the Solar System, but about physics itself? Specifically, Galileo proposed that Aristotle’s old idea that an object in motion will naturally come to a halt was false. Instead, Galileo said that an object in motion will stay in motion, unless acted upon by an outside force. Therefore, epicycles would never make sense. No planet would ever just stop, or move in reverse. It could appear to move in reverse, but only from the perspective of Earth, because Earth itself was constantly moving and rotating.

This wasn’t the end of the Copernican Revolution, but it was the start. The math of Kepler, the physics of Newton, and, finally, the reluctant acceptance of the Catholic Church were all necessary components before the Revolution could conclude. Eventually, though, the idea that Earth was the center of the Solar System seemed ridiculous, and a new paradigm was firmly established, with accompanying terms, equations, and models.

It’s important to note exactly how radical this revolution was. Each of these scientists had to go against hundreds of years of science and religion in order to make their claims. They had to question what they had supposedly seen with their own eyes, like planets going backwards in their orbits, as well as what had supposedly been proven by logic and mathematics. Much like Descartes, there must have been periods when they felt alone, unsure of what they could trust and what they could claim to actually know.

There will be very few times in your life when it is necessary for you to question the foundations of your knowledge. As I said before, most of the time your assumptions about what is true will serve you fine. But there will come a time when some system of knowledge seems questionable to you. Perhaps you will find yourself doubting one of your mom’s superstitions, a theory of your professor’s, or even your own basic assumptions about the reality of your world. When that time comes, I encourage you to be bold, and not to hesitate to question. Pursue your line of questioning to its conclusion, wherever it may lead you, and do not be afraid to clear out everything and start anew.

For my next essay, I’ll continue my discussions of skepticism, with the famous Scottish skeptic David Hume. We’ll explore causation, and what it means when we say something “causes” something else. I’d tell you to be excited, but I’ll skip that for now. After all, whether you choose to be excited or not excited, you should come to that conclusion by yourself. I’m just here to light the way.

Francis Bacon and observation

In 1963, Harvard psychologist Robert Rosenthal ran an interesting experiment. He sent some students a group of rats which he called “maze bright” rats. He told them that these were rats bred to run mazes more quickly than the average rat. He asked the students to confirm these results, and they did. Then Rosenthal sent other students “maze dull” rats, which, as you might imagine, were rats bred to run mazes more slowly than the average rat. The students once again confirmed the rats’ labels.

So Rosenthal ended up with data that showed that about half of his rats ran mazes incredibly quickly, and half of his rats ran mazes incredibly slowly, exactly as he told the students they would. Nevertheless, this was surprising data. It was surprising, because he never bred the rats at all. They were all from the same original population, and there was no difference between them.

Rosenthal ran this experiment to prove what is called the “observer-expectancy” effect. In short, this effect is about how every human being, including every scientist, wants certain things to be true. And, because we want certain things to be true, we will slightly “nudge” the evidence, either in real life or in our minds, to make it true. It’s basically the academic version of your love-struck friend. If Jessica says “hi” to him, he’s convinced that it means she’s flirting with him, even if she says hi to everyone at work. He willfully ignores the evidence to confirm what he wants to be true.

Rosenthal’s students wanted their professor to be right. Or, more likely, they didn’t want to look like incompetent experimenters in front of their august professor. Either way, they changed the data to fit what they were told was true. It’s possible they changed it in real life, by changing their measurements to fit the “maze bright” and “maze dull” hypothesis. Or they might have changed it in their analysis, by dismissing the real data as “outliers”. Regardless, these were trained experimenters who all, in the end, arrived at the exact same wrong result. Ironically, this made it a pretty successful experiment for Rosenthal, if not for his students.
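To see how little nudging it takes, here is a small, hypothetical simulation in Python. This is not Rosenthal’s data; the timings and the size of the “nudge” are invented for illustration. Every simulated rat is drawn from the same population, and only the timer’s expectation differs.

```python
# Hypothetical simulation of the observer-expectancy effect.
# All numbers here are invented for illustration.
import random

random.seed(0)

def timed_run(expectation):
    true_time = random.gauss(30.0, 3.0)  # same distribution for every rat
    # The experimenter unconsciously shaves or pads a moment in the
    # expected direction, e.g. by starting or stopping the watch late.
    nudge = -1.5 if expectation == "maze bright" else +1.5
    return true_time + nudge

bright = [timed_run("maze bright") for _ in range(50)]
dull = [timed_run("maze dull") for _ in range(50)]

print(f"'bright' mean: {sum(bright) / len(bright):.1f} s")  # ~28.5 s
print(f"'dull' mean:   {sum(dull) / len(dull):.1f} s")      # ~31.5 s
# Identical rats, yet the data "confirm" both labels.
```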

It’s interesting that it took until the 1960s for psychology to formally recognize this problem, because it had been known for a lot longer. As you might have already guessed, one philosopher in particular described this issue to a T, all the way back in the early 17th century. His name was Francis Bacon, or, if you prefer his full name, Francis Bacon, 1st Viscount St Alban.

Francis Bacon came to his theories in a very different way than Robert Rosenthal did. Bacon had been trained in the intellectual methods of Aristotle, whom we’ve talked about previously. As we know, this meant he had been schooled in a logical, robust system that had little relation to the real world. Even worse, it didn’t even try to relate to the real world.

Bacon decided there was a need for a “new instrument of science”, one which would gather observations from the real world and turn them into theories. Unlike Aristotle’s system, everything in it would be true and supported by observation. Bacon just needed to make sure that his instrument worked.

Bacon had a problem, though. Coming up with theories based on observations seemed like the right way to go, but other people had tried that before. After all, that’s exactly what people who didn’t know about Aristotle tried to do. They would see a natural phenomenon, like lightning, and then come up with a theory, like lightning is Zeus throwing thunderbolts. This, of course, wasn’t science, but they didn’t know that. So Bacon needed something better than just “observation leading to theories”.

Bacon’s specific method was a bit silly, so we won’t go into it. But his general idea was sound. He said we had to be very, very careful when we let our observations become theories. We had to build up our theories carefully, starting with specific instances, and only gradually working our way up to universal laws, making sure that we don’t let mental tricks lead us astray along the way.

What were these mental tricks? Well, Bacon called them “idols”, and they included the “observer-expectancy” effect that Rosenthal demonstrated. He had four of them: idols of the tribe, caused by human nature; idols of the cave, caused by personality; idols of the marketplace, caused by language; and idols of the theater, caused by philosophy.

Idols caused by human nature include the observer-expectancy effect, as we saw with Rosenthal. They also include what we today call “confirmation bias”, which is our tendency to form a conclusion first, then draw in evidence to support it. To paraphrase Andrew Lang, we use evidence like a drunk uses a lamppost: for support rather than illumination. Bacon also identifies an amazing number of other biases which were only named hundreds of years later, like the availability heuristic and the gambler’s fallacy.

Idols caused by personality are much simpler: different people prefer different ways of looking at the world, and tend to frame hypotheses in their preferred manner, whether rightly or wrongly. For instance, an engineer might explain why a bridge collapsed in terms of how it was built, while a historian would explain it in terms of how political conflicts led to shoddy workmanship on the bridge.

Idols caused by language are simpler still: they pertain to how we sometimes twist our observations to fit our language, which can result in oversimplifying, overcomplicating, or simply miscommunicating what was actually observed. Rosenthal, in coming up with the catchy names “maze bright” and “maze dull”, may have helped his students fall prey (pun somewhat intended) to this idol. Alternatively, the early sailors who saw manatees and insisted they saw mermaids may also have fallen prey to it, as they had a word for mermaid but not for manatee.

Finally, idols caused by philosophy are Bacon’s ways of attacking those who he thought were actively holding science back: those who followed Aristotle, those who insisted on putting Christianity into science, and the “experimentalists”. Those who followed Aristotle, in Bacon’s view, twisted their observations to fit their logic, when they should have taken pains to make their logic fit their observations. Those who put Christianity into science “ruined both”. And the experimentalists, those who tried to use experiments to learn science rather than observations (like Galileo), never really understood what they thought they did.

Needless to say, I’m much more sympathetic to Bacon’s attack on the first two than the last one. Bacon’s disdain for experiments is part of the reason why his specific instrument of science never really works. However, his insistence on carefully building up a theory from facts, and his identification of so many of the cognitive biases we must avoid when doing so, cement his reputation as a truly great thinker.

What should you take away from this? Well, it wouldn’t hurt for you to actually read Bacon’s list of idols. Even though they were written 400 years ago, they’re surprisingly understandable, and pretty interesting. However, even if you don’t, you should think about how you form opinions and theories of your own. Don’t go charging in with a preconceived notion, gathering your facts in response to your opinion. Be open to the possibility that your first reaction is mistaken, and be aware of any mental shortcuts you take along the way. Finally, when you communicate your results or thoughts, watch your language closely, and make sure your audience understands your ideas as they were when you came up with them.

In fact, one place you might do this today is with your political opinions. You probably fall on one side of a hot political issue, and you are likely aware of the other side’s view. What I want you to do is to carefully consider the other side. Pretend you were an alien, coming down from outer space, and you had no opinions on this whatsoever. If you were given the other side’s evidence and arguments, would you be convinced by them? Or, if you look at your own side, are you convinced by your side’s arguments?

The only way to actually carry out this exercise, by the way, is to actually go through with it. Belly-crawl your way through the muck and the grime, and peer carefully at the facts, the other side’s and your own. Then, carefully, like you’re building a castle of cards, build your theories from the facts.

That’s how Francis Bacon would have had it. But, if you find it hard to pretend you know nothing, and to build your card castle starting from the ground floor, you are not alone. Luckily, there’s another philosopher waiting in the wings to help you, me, and all of humanity question our knowledge. Not just our theories, like Bacon would have it, but all of it, even our very existence. Who is this radical skeptic? Well, it’s the one and only Rene Descartes, and we’ll talk about him next!

Aristotle and logical systems

The world we live in is a fundamentally illogical place. Things happen all the time, and no explanations are ever given. Dogs, cats, and all the other animals seem to be mostly okay with this state of affairs. However, we humans insist on imposing a logical structure on the world. We need to know why things happened and what caused what. Or, more specifically, we need an answer for why things happened which satisfies us, even if that answer might not be the correct answer.

In our relentless search for answers to life’s unanswerable questions, we are following in the tradition of Aristotle. Aristotle was a student of Plato, who was a student of Socrates, who I’ve talked about. Aristotle took Socrates’s system of questioning about the world, formalized it in the world’s first (mostly) complete logical system, and applied it to everything.

Just as Socrates had his tool in the Socratic method, Aristotle had his tool in the “syllogism”. The syllogism is a way of expressing a logical relation. You start off with a proposition, like “all dogs are animals”. Then you add another proposition, adding new information to what you introduced in the first proposition: “all animals have four legs”. Then you conclude “all dogs have four legs”. If the two propositions are true, the conclusion has to be true, and you’ve successfully formed a syllogism.

Notice, however, that these propositions aren’t necessarily true. The second proposition stated all animals have four legs, but kangaroos don’t. They don’t even have to make sense. Instead of “dogs”, I could have used the word “pancakes”, and I’d end up concluding “all pancakes have four legs”. It’d still be a valid syllogism, but complete nonsense in terms of the real world.
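If it helps to see validity and truth pulled apart explicitly, here is a small Python sketch of my own (the category members are invented) that models this form of syllogism with sets: the form guarantees the conclusion whenever both premises hold, but nothing stops us from feeding it false premises.

```python
# Modeling "all A are B; all B are C; therefore all A are C" with
# Python sets, where "all A are B" means A is a subset of B.
# The category members below are invented for illustration.

def syllogism(a, b, c):
    premise1 = a <= b    # all A are B
    premise2 = b <= c    # all B are C
    conclusion = a <= c  # all A are C
    return premise1, premise2, conclusion

dogs = {"rex", "fido"}
animals = {"rex", "fido", "skippy"}
four_legged = {"rex", "fido", "skippy", "a table"}

# With true premises, the conclusion is forced (subset is transitive):
print(syllogism(dogs, animals, four_legged))  # (True, True, True)

# Swap "dogs" for "pancakes" and the *form* is just as valid, but the
# premise "all pancakes are animals" is false, so nothing forces the
# conclusion anymore: the argument is unsound rather than invalid.
pancakes = {"blueberry", "plain"}
print(syllogism(pancakes, animals, four_legged))  # (False, True, False)
```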

Aristotle knew this, but he was too excited by his new tool to be overly concerned. As long as he could come up with propositions, he could categorize the world and make it make sense to him. He categorized the elements of the earth, the animals, the plants, the stars, and basically everything else he could see. Everything in a category was defined and bounded by its syllogisms, and it was all perfectly logical. Unfortunately, it didn’t always actually apply to what was in the real world, or even what was obvious, like when Aristotle claimed that men have more teeth than women.

Nevertheless, this was a powerful new tool. The human desire for things to make sense was satisfied. In fact, syllogisms were so powerful, and categories so useful, that Western science proceeded solely under the Aristotelian umbrella until about 2000 years after Aristotle’s death. Even in Shakespeare’s time, men were still confidently citing Aristotle as the utmost authority on all matters of science.

Now we have better methods and better ideas about what science should be, which I’ll cover in later essays. But for now I want to talk about this powerful idea from Aristotle, which is the creation of logical categories. Aristotle knew that with his method, he could divide up the world. For instance, he could divide the world into living and non-living, then living into animals and plants, then animals into large animals and insects, and large animals into birds, lizards, and mammals. Then, he could make sweeping inferences, and back them up logically. For instance, if he could conclude that all living things die, then he knew that any animal or insect he came across would also eventually die, even if he knew nothing else about the animal or insect.
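Programmers still make Aristotle-style sweeping inferences every day through type hierarchies. Here’s a toy Python sketch using the essay’s own divisions; the class names and the blanket “mortal” attribute are illustrative assumptions, not biology.

```python
# Aristotle's divisions of the world rendered as a class hierarchy.
# Anything classified under LivingThing inherits "mortal" for free.

class LivingThing:
    mortal = True  # "all living things die"

class Plant(LivingThing):
    pass

class Animal(LivingThing):
    pass

class Insect(Animal):
    pass

class LargeAnimal(Animal):
    pass

class Bird(LargeAnimal):
    pass

# The sweeping inference: we may know nothing else about this creature,
# but because we filed it under Animal, we already know it will die.
mystery = Bird()
print(mystery.mortal)                    # True
print(isinstance(mystery, LivingThing))  # True
```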

This also meant that he could assume causes about the world. For instance, he made the universal rule that everything has a natural place that it will always return to. Then, when it came to the specific question of why birds fly and turtles don’t, he could say that the natural place of a bird is in the sky, and the natural place of a turtle is on the ground or in the water. This isn’t a scientific or “true” explanation of flight, but it makes sense, which is more than his contemporaries could say.

Today, we use these logical categories for all sorts of things. For instance, if we carefully come up with the scientific observation that all bees sting, then, when we come across a new bee, we know it stings. If it doesn’t sting, then it’s not in the category of “bee”, and the other assumptions that come with the category (e.g. flight) do not apply. Or, in poetry, we might say that all of Shakespeare’s poetry falls into the meter category of “iambic”. Therefore, given that all “iambic” poetry is easy to remember, we can conclude that all of Shakespeare’s poetry is easy to remember.

There’s one particularly interesting example of how we are following in Aristotle’s footsteps that I want to explore in depth. It’s in what you’re looking at right now, and what I’m typing on. Your computer is an object completely within Aristotle’s paradigm, even though it was created about 2300 years after Aristotle’s death.

You may be wondering how this is. Well, have you ever thought how amazing it is that computers are so easy to use? You click on icons to open them, then click on buttons on your screen to get your computer to perform functions. Sometimes, if the computer’s not sure what you want to happen, it’ll give you a range of buttons to choose from after you click the first one. This is all logical, and it makes a well-designed program, and your computer as a whole, easy to use.

But it’s all completely artificial. Your computer is an incredibly complex mix of silicon, metal, and rare earths. In order for your computer to perform an action, it has to translate whatever you want into a language that it can understand, and then use that language to set minuscule and sometimes microscopic machinery whirring away at top speed. In order for this to be a pleasant experience for you, the end user, the computer has to make sure that it almost always interprets your desires in the right way, and that it almost always performs its tasks reliably and efficiently.

In short, the version of the computer that you and I see and interact with, and that we think of as the “computer”, is just a very logical structure built on top of a beautifully engineered mess. Even if you’ve never used a certain program or computer before, it still makes sense to you, because it was designed to make sense to you. Different buttons and functions fit into the categories of what you already know, so, for example, “copy and paste” works the same in Facebook, Google Docs, and Microsoft Word, even though these are different programs, made by different companies, which have to perform very different operations under the hood to translate your command into language the computer can understand.
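In programming terms, this shared category of behavior is an interface. Here’s a minimal Python sketch; the class names and the stand-in internals are my own illustration, not how any of these real programs are actually built.

```python
# A shared "supports copy and paste" category hiding different internals.
from abc import ABC, abstractmethod

class SupportsClipboard(ABC):
    """The category: anything here promises copy() and paste()."""
    @abstractmethod
    def copy(self, text: str) -> None: ...
    @abstractmethod
    def paste(self) -> str: ...

class WordProcessor(SupportsClipboard):
    """Stand-in for a desktop app with its own document model."""
    def __init__(self):
        self._buffer = ""
    def copy(self, text: str) -> None:
        self._buffer = text  # pretend: updates a document model
    def paste(self) -> str:
        return self._buffer

class WebTextBox(SupportsClipboard):
    """Stand-in for a browser app going through a clipboard API."""
    def __init__(self):
        self._clipboard = {}
    def copy(self, text: str) -> None:
        self._clipboard["contents"] = text  # pretend: a browser API call
    def paste(self) -> str:
        return self._clipboard.get("contents", "")

# From the user's side, the category is all that matters:
for app in (WordProcessor(), WebTextBox()):
    app.copy("hello")
    print(app.paste())  # "hello" both times, via different machinery
```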

Syllogisms, categories, and a world that makes sense to us, the human. This is the legacy which Aristotle left us. He didn’t leave us a way to actually understand the world, but he left us a way to make it make sense to us. We’d need others to show us how to gain true knowledge about the world, and I’ll talk about that subject in my next essay.

Socrates and categorizations

Socrates, in many ways, was the first true philosopher. While men before him had attempted to answer the big questions about life, the universe, and the world around them, their answers had tended to be built on a bed of assumptions. These old thinkers built their theories on the grand forces of their world: lightning, fire, rain, the sun. When we read their thoughts today, it’s easy to understand why these elemental forces would play such a prominent role in their philosophy, but it’s hard to empathize with it. We live in a world where lights come on at the flick of a switch, and fire starts with the turn of a knob. Natural forces have lost their mystery.

Reading Socrates, however, is a different experience. His thoughts still have the power to surprise us. This is because Socrates started from a standpoint of willful ignorance. His famous Socratic method was designed to scrape away assumptions, harden definitions, and clearly delineate categories. Our thinking today is no more free of assumptions than our ancestors’ was, so the Socratic method still has the capacity to improve us.

But this essay isn’t just about extolling the virtues of the Socratic method. It’s about explaining how it works. The Socratic method is a tool, first and foremost, with specific uses and limitations. And, like any tool, it requires practice. This is especially true for the Socratic method, because not only can it be hard to tell when you’re using it incorrectly, but it’s often misunderstood.

To explain how the Socratic method works, first I’m going to show you an example. Then I’m going to explain how the example works, and what the outcome of it is. Finally, I’m going to show you an example from Socrates himself.

You and your friend are having a dialogue about what the best university is. Your friend says it’s Harvard, and you say it’s Princeton. Then Socrates comes along, and the dialogue begins.

You: Hey Socrates, what’s the best university? I think it’s Princeton, but John thinks it’s Harvard.
John: It’s totally Harvard. Bill Gates and Mark Zuckerberg dropped out from there!
You: So the best university is the one with the best dropouts? It has to be Princeton. It’s on top of the rankings!
Socrates: Settle down, settle down. First, when you say the best university, are you not referring to the best one to graduate from as an undergraduate?
You: I suppose so.
John: Well, I’m not so sure about that.
Socrates: Well, if we were asking who was the best at training race horses, we would look at which trainer produced the best race horses. That is, if a horse came to him, he could make it become much better at racing after training it.
John: That seems fair.
Socrates: And if another, lesser trainer had trained the same horse, it would not have become quite as good at racing, right? This is regardless of the horse.
You: Yes, so it’s Princeton!
Socrates: I’m not so sure yet. So a university teaches students like a horse trainer trains horses. It would seem that the best university should produce a better student than a lesser university, just like the best trainer produces a better horse than a lesser trainer, regardless of the student.
John: But that’s unfair. Universities don’t all get the same students, so they might produce better students just because they get better students. It’s like a horse trainer receiving faster horses than his competitors.
Socrates: Aha, that’s one problem, and I agree that’s a big one. But I want to pursue a different route. What does it mean to produce a better student?
John: Well, that means they’ll be more successful.
Socrates: But success has many different categories, from being wealthy, to changing the world, to becoming renowned. If we discussed the best race horse, we’d discuss the one that wins the most races. But what if we were discussing the best flavor of ice cream?
You: That’s just a matter of opinion. It’s different.
Socrates: What if I wanted to say the best flavor of ice cream was dirt flavored?
John: Eww.
Socrates: Exactly! So when we discuss the best flavor of ice cream, there’s opinion, but there’s also some standard that we accept. If we discuss the best students, there’s also opinion, because there’s many ways to be successful, and which one is best is a matter of opinion. However, there’s also some standard. Surely the most successful students couldn’t be living in the gutter, so if a university produces gutter students, then it’s not the best university.
You: So should we discuss the worst universities, then? It seems easier.
Socrates: It depends. Would you rather eat dirt-flavored ice cream or sand-flavored ice cream?
You: Neither, both sound terrible.
Socrates: Exactly. The comparison is difficult to make for the worst universities, just as it is for the best.
You: So are all comparisons inherently meaningless?
Socrates: I’d answer, but I have to leave before this angry mob catches up with me.
Socrates leaves, quickly pursued by an angry mob of Athenians carrying pitchforks and torches. One has a plant, which you recognize as hemlock.
John: Well, this discussion is pretty much ruined. Want to get ice cream?
You: Not really.
Finis

So, in this conversation, you started off with the idea of comparing universities to find out which is the best. However, first Socrates persuaded you that you were really comparing which university produces the best students. Then he persuaded you that you were really saying which students were the most successful. Finally, he persuaded you that, although success has some objective qualities, it is primarily subjective.

Now, “persuade” might seem like a strange choice of words, but it’s accurate. After all, it’s not clear that Socrates’s definitions were the same ones you would have used. In fact, judging by John’s example at the beginning with Bill Gates and Mark Zuckerberg, it would seem that he had a different definition in mind, although he couldn’t articulate it.

Socrates persuaded you with the use of common-sense analogies. Horse training was an analogy that the real Socrates actually liked to use quite often, while ice cream was one of my own invention. At any point, you could have disagreed with his analogies. You could have disagreed with their appropriateness, and said that universities are nothing like horse training. You could have disagreed with the conclusion, and said that dirt-flavored ice cream could be considered the best ice cream flavor. But, because the analogies seemed reasonable, you didn’t.

Socrates took your definition, changed it and refined it, and then used analogies to persuade you. This might seem like just an exercise in persuasion, then, and you might be wondering why this is philosophy. Well, because there’s something else going on here.

When you and John were originally arguing, your argument could never end. This was because you were working from different definitions. You thought you were arguing about the same thing, but the evidence you used clearly showed otherwise. John was arguing about some version of “best” which incorporated who had dropped out of a university, and you were arguing about some version of “best” which relied on third-party rankings, which are based on metrics that aren’t immediately obvious. Your definitions weren’t properly formed, so even you didn’t know what you were really arguing about. After all, John likely didn’t really believe that dropouts should count toward how good a university is, and arguing about how good a university is based on rankings just kicks the can down the road, because then you have to argue about how good those rankings are.

This brings us back to why Socrates is the first true philosopher. In order for philosophy to begin, we have to know what we’re discussing, and that has to be clearly communicated to other people. In fact, that’s true of all formal systems of thinking. It’s only possible to come up with ideas if it’s clear what the ideas are about. Socrates gives us a tool for starting discussion, and a way of persuading others to our point of view.

However, this is only a beginning. Socrates doesn’t give us a clear way of progressing our thoughts, because common-sense analogies rely on what you already know and believe. In order to learn new things, and come up with new things to believe, we have to have some additional tools and concepts under our belt. I’ll cover those in subsequent essays.