Speculation, Inquiry, and a Quest for Purpose

On Knowing Everything


An Introduction to “Everything is Obvious”

I used to think that a childhood spent reading books, an adolescence spent watching movies, and an undergraduate education spent discussing great works of philosophy and literature meant I had a pretty robust “hypothetical simulator” kicking around inside my brain. For most of my life, I was certain I could predict the outcome of any given scenario with the confidence of Archimedes lifting something with his lever. Everything was just a complex series of equations that, once properly balanced, would play out according to knowable principles about how the world worked, and I believed I was innately talented at this sort of math.

In other words, I thought I was smarter than everyone else, and I wanted to make sure they thought so, too. Whenever someone disagreed with me, I assumed they either didn’t know as much as I did, or they weren’t smart enough to reach the same—obvious to me—conclusion. When I was younger—naive and idealistic—I would spin my wheels for hours trying to find the discrepancies in our datasets that would lead to disagreement, or to evaluate the perceived intelligence of my interlocutor to find out if the problem was that they were just too stupid to see the answer. As I got older and more cynical, I would “waste” less time on people who I thought were not my responsibility to educate and whom I deemed less intelligent than myself.

I’m sad to say it, but this worldview persisted until my livelihood depended on me making a paradigm shift in the way I thought about other people—and that was a change I didn’t make overnight. It took taking a job at Apple fixing computers for me to start to understand that everything I thought I knew was wrong.

When most people think of someone who is “good with computers,” they imagine an arrogant nerd who lords their knowledge over people they view as inferior, and that’s often who people expected to find when they brought me their issues. Hell, that was me before I started this job. I was hired because of my natural aptitude for problem solving and my innate understanding of computers. Until I went through the intensive training process, I thought that would be the summation of my job responsibilities: solve the computer problem as quickly and thoroughly as possible.

Instead, I was charged with keeping the conversation respectful, amicable, and (if at all possible) empowering for the person seeking my help with their device—while also actually fixing the thing they brought in to me. This is a delicate thing. How do you fix something for someone without making them feel stupid?

Some computer problems are hardware (a part is malfunctioning and needs to be replaced), some are software (a specific program or operating system isn’t behaving properly), and for those, the resolution is technical. People usually don’t feel threatened by a technical solution—in fact, it’s what they usually expect—and a large part of my job involved referring to Apple’s repair manuals and diagnostic tools to understand which parts needed replacing. These tools were wonderfully unambiguous and easy to understand. They used bright red or green lights to indicate component health, and when the red light appeared next to the hard drive or memory chip, people knew as well as I did what needed to be done.

But a lot of computer problems are operational, meaning the machine and its software are performing the tasks they’re being asked to do, it’s just that they aren’t being asked the right way, or what they’re being asked to do is different from what the user thinks they’re asking it to do. How do you tell someone they’re doing it wrong without making them feel bad? A younger me wouldn’t have cared—I could have just written it off as a PEBKAC[1] or ID-10T error[2]—but now my job depended on repairing the relationship as well as the machine.

My first response, after making sure I understood whatever it was they were trying to do, was to show them how I would go about accomplishing the task or resolving the operational issue. Invariably, people would say something along the lines of, “I struggled with this for hours—I was ready to throw the computer across the room!—and you fixed it just like that! How do you make it look so easy?” This was the moment where the easy part (solving the problem) was done but the hard part (encouraging the person to feel empowered and capable) reached a critical juncture: how could I answer the question without making the person in front of me feel inferior?

If I said, “I’m just really good at computers,” then I’d be declaring computer aptitude to be an inherent attribute—you’ve either got it or you don’t. That doesn’t leave the user feeling empowered, it makes them feel like success is infinitely unattainable and they’ll always be dependent on me (or someone like me) to solve their problems. Young me would’ve said it anyway, but I needed a more empowering solution.

If I said, “Apple trained me extensively in troubleshooting theory and I’ve spent years working through computer problems,” then they’d feel an insurmountable barrier of investment—no one who doesn’t solve computer problems for a living will be able to come close to the number of hours I’ve logged without making it their full-time job, too. It was just another way to reinforce my superiority.

Even if I said, “Lucky guess!” I’d be implying that there was no rhyme or reason to making computers work and they’re just rolling the dice every time. That’s hardly an empowering perspective—although slightly more humble on my part—and it’s not particularly true. I wasn’t guessing, I was approaching the problem systematically. My job wasn’t to lie to anyone, but what else was left?

Through trial and error (remember, systematic troubleshooting was my primary trade skill), I worked through all of the above answers before landing on something that seemed to universally help people feel better about why I could solve a problem they couldn’t:

Everything is easy when you already know how to do it.

Relief would spread over people’s faces upon hearing these words. There was nothing deficient about them, nothing special about me, no insurmountable barrier between their inability to solve the problem on their own and my relative facility with it—just a fact of life that applied to troubleshooting computers as readily as to any number of other skilled enterprises. I have to imagine surgery is fairly easy for surgeons and rocket science is fairly easy for rocket scientists, assuming a sufficient amount of practice.

And that makes sense, right? Of course something is difficult when you don’t know how to do it, and it gets easier the more you practice. It’s just common sense! It seems so obvious in retrospect that I was shocked it had taken me so long to come up with that explanation. How had I missed such an obvious aspect of human nature? For that matter, how come the people coming to me for help didn’t already have this understanding, such that they didn’t need me to make them feel better about having a problem they couldn’t solve?

That’s the subject of Everything is Obvious: Once You Know the Answer by Duncan J. Watts, a book about the many pitfalls of common sense, and the focus of my new weblog series. This book turned everything I thought I knew about human nature upside down—I was stunned, reading it, at the cognitive biases to which I was susceptible, and how they were affecting my relationship with the world around me.

Before reading this book, I thought I had a pretty good understanding of why people do the things they do—I’m a writer of fiction, after all, I have to be able to plausibly predict human behavior—but even I had to admit that a lot of things that seemed obvious to me were hopelessly opaque to the other people in my life. No matter how smart or well-educated, no matter how much we had in common in terms of shared experience or cultural references, I repeatedly found myself blindsided by the wildly different assumptions other people have about how things are, how they should be, and the means to get from the one to the other. This book sets out to explore that disconnect through a scientific lens, and it has been responsible for most of the other books I’ve been reading throughout 2023.

We’ll start with an excerpt from the Preface, then I hope to see you back here real soon as I dig deeper into the arguments and examples presented by this book:

Since becoming a sociologist, I have frequently been asked by curious outsiders what sociology has to say about the world that an intelligent person couldn’t have figured out on their own. It’s a reasonable question, but as the sociologist Paul Lazarsfeld pointed out nearly sixty years ago, it also reveals a common misconception about the nature of social science. Lazarsfeld was writing about The American Soldier, a then-recently published study of more than 600,000 servicemen that had been conducted by the research branch of the war department during and immediately after the Second World War. To make his point, Lazarsfeld listed six findings from the study that he claimed were representative of the report. For example, number two was that, “Men from rural backgrounds were usually in better spirits during their Army life than soldiers from city backgrounds.” “Aha,” says Lazarsfeld’s imagined reader, “that makes perfect sense. Rural men in the 1940s were accustomed to harsher living standards and more physical labor than city men, so naturally they had an easier time adjusting. Why did we need such a vast and expensive study to tell me what I could have figured out on my own?”

 

Why indeed… But Lazarsfeld then reveals that all six of the “findings” were in fact the exact opposite of what the study actually found. It was city men, not rural men, who were happier during their Army life. Of course, had the reader been told the real answers in the first place, she could just as easily have reconciled them with other things that she already thought she knew: “City men are more used to working in crowded conditions and in corporations, with chains of command, strict standards of clothing and social etiquette, and so on. That’s obvious!” But that’s exactly the point that Lazarsfeld was making. When every answer and its opposite appears equally obvious, then, as Lazarsfeld put it, “something is wrong with the entire argument of ‘obviousness.’”[3]

 

Lazarsfeld was talking about social science, but what I will argue in this book is that his point is equally relevant to any activity—whether politics, business, marketing, philanthropy—that involves understanding, predicting, changing, or responding to the behavior of people. Politicians trying to decide how to deal with urban poverty already feel that they have a pretty good idea why people are poor. Marketers planning an advertising campaign already feel that they have a decent sense of what consumers want and how to make them want more of it. And policy makers designing new schemes to drive down healthcare costs or to improve teaching quality in public schools or to reduce smoking or improve energy conservation already feel that they can do a reasonable job of getting the incentives right. Typically people in these positions do not expect to get everything right all the time. But they also feel that the problems they are contemplating are mostly within their ability to solve—that “it’s not rocket science,” as it were.[4] Well, I’m no rocket scientist, and I have immense respect for the people who can land a machine the size of a small car on another planet. But the sad fact is that we’re actually much better at planning the flight path of an interplanetary rocket than we are at managing the economy, merging two corporations, or even predicting how many copies of a book will sell. So why is it that rocket science seems hard, whereas problems having to do with people—which arguably are much harder—seem like they ought to be just a matter of common sense? In this book, I argue that the key to the paradox is common sense itself.

If you’re anything like me—inquisitive, contemplative, and ready to challenge your pre-existing biases—you’re probably already on the edge of your seat. That’s great! This series is for you, and if you want to spare yourself my commentary and analysis, you should head right to the library and pick up a copy for yourself. That being said, there are some really uncomfortable aspects of this discussion on the horizon that Watts wants you to know about up front:

Before I start, though, I would like to make one related point: that in talking with friends and colleagues about this book, I’ve noticed an interesting pattern. When I describe the argument in the abstract—that the way we make sense of the world can actually prevent us from understanding it—they nod their heads in vigorous agreement. “Yes,” they say, “I’ve always thought that people believe all sorts of silly things in order to make themselves feel like they understand things that in fact they don’t understand at all.” Yet when the very same argument calls into question some particular belief of their own, they invariably change their tune. “Everything you are saying about the pitfalls of common sense and intuition may be right,” they are in effect saying, “but it doesn’t undermine my own confidence in the particular beliefs I happen to hold.” It’s as if the failure of commonsense reasoning is only the failure of other people’s reasoning, not their own.

 

People, of course, make this sort of error all the time. Around 90 percent of Americans believe they are better-than-average drivers, and a similarly impossible number of people claim that they are happier, more popular, or more likely to succeed than the average person. In one study, an incredible 25 percent of respondents rated themselves in the top 1 percent in terms of leadership ability.[5] This “illusory superiority” effect is so common and so well known that it even has a colloquial catchphrase—the Lake Wobegon effect, named for Prairie Home Companion host Garrison Keillor’s fictitious town where “all the children are above average.” It’s probably not surprising, therefore, that people are much more willing to believe that others have misguided beliefs about the world than that their own beliefs are misguided. Nevertheless, the uncomfortable reality is that what applies to “everyone” necessarily applies to us, too. That is, the fallacies embedded in our everyday thinking and explanations, which I will be discussing in more detail later, must apply to many of our own, possibly deeply held, beliefs.

I’ll be honest, it is not a comfortable experience to learn that something you think you know about the world is false or unfounded, and this book challenged a lot of my preconceived notions about…well, everything. But, if that sounds like the kind of challenge that emboldens you rather than something you’d rather shy away from, then here’s a rough outline and index for this series as it progresses:

  1. On Knowing Everything — An Introduction to “Everything is Obvious”
  2. The Unequivocal Benefits of Common Sense — Honey on the Rim of “Everything is Obvious”

  1. Problem Exists Between Keyboard And Chair ↩︎

  2. An “Idiot” error ↩︎

  3. Lazarsfeld, Paul F. 1949. “The American Soldier—An Expository Review.” Public Opinion Quarterly 13 (3):377-404. ↩︎

  4. For an example of the “it’s not rocket science” mentality, see Bill Frist, Mark McClellan, James P. Pinkerton, et al. 2010. “How the G.O.P. Can Fix Health Care.” New York Times, Feb. 21. ↩︎

  5. See Svenson, Ola, “Are We All Less Risky and More Skillful Than Our Fellow Drivers?” (1981) for the result about drivers. See Hoorens, Vera, “Self-Enhancement and Superiority Biases in Social Comparison” (1993); Klar, Yechiel, and Eilath E. Giladi, “Are Most People Happier Than Their Peers, or Are They Just Happy?” (1999); Dunning, David, et al., “Ambiguity and Self-Evaluation: The Role of Idiosyncratic Trait Definitions in Self-Serving Assessments of Ability” (1989); and Zuckerman, Ezra W., and John T. Jost, “What Makes You Think You’re So Popular? Self-Evaluation Maintenance and the Subjective Side of the Friendship Paradox” (2001) for other examples of illusory superiority bias. See Alicke, Mark D., and Olesya Govorun, “The Better-Than-Average Effect” (2005) for the leadership result. ↩︎

About the author

Ian Hayes

Former technical support and customer service professional, now freelance writer and entrepreneur writing Horror, Narrative Nonfiction, and Literary/Speculative Fiction.

Also backpacker, rock climber, casual biker, woodworker and armchair philosopher.

Currently living in Portland, Oregon, but also from New York, Alabama, New Mexico, Virginia, Georgia, Connecticut and Tennessee.
