It’s common wisdom to be skeptical about conspiracy theories and fringe views. At the same time, mainstream sources can be disturbingly inaccurate or dishonest too. Although I’m not going to make the blanket claim that everything you read in reputable publications is a lie (it isn’t), what you read warrants healthy skepticism.
I recently read Dreamland by Sam Quinones, a book on how the opioid crisis got underway in the U.S. There’s a lot that’s eye-opening and depressing in that book, including how medical professionals, academics, and mainstream publications repeated a Pharma-friendly claim that only a tiny fraction of opioid users develop an addiction (less than 1 percent!). The statistic is based on gross misrepresentations, including this one: a brief letter to the editor published in the New England Journal of Medicine that got referred to as a “landmark study.”
The letter to the editor wasn’t a full-fledged study. It communicated the observations a doctor and a grad student made about a population of hospitalized patients who received painkillers in a controlled and supervised way that also accounted for a prior history of drug abuse – a far cry from how painkillers later got prescribed to the general population.
Why did no one bother to look into this “landmark study”? Academic journal archives have always existed. You wouldn’t have needed the Internet to fact-check, although yes, you would have had to look up a physical copy of the journal in an academic library.
This is just one example of a lie – a devastating one – that was repeated until it seemed like established fact and uncritically became mainstream. Many people, including journalists and highly educated experts, can be shaky investigators. Publications often don’t prioritize investigative work (or don’t have the budget for it). Also, it’s easier to be a mouthpiece than it is to ask uncomfortable questions and uncover awful answers. Fear, money, laziness, conformity, and an abundance of misplaced trust are all influential forces. So are ideological biases.
I don’t want to argue that there’s no truth at all in mainstream publications. That would be a ridiculous claim. But healthy skepticism is always warranted, even when you’re reading from a respected source. Even if you largely agree with something, leave some room mentally for a correction and updated knowledge.
Ignorance just means you don’t know something. For example, I’m ignorant about the names and accomplishments of many famous athletes and the rules of the sports they play.
At any point, if I want to learn more about these athletes and sports, I can. Ignorance doesn’t have to be permanent. It can change if I want it to, and if I have access to the relevant information.
Willful ignorance is different and worse than regular ignorance. With willful ignorance, I don’t know something, but I act as if I’m knowledgeable. I act as if I know what there is to know. I resist learning anything more, even if that’s what I need to do to share my opinion, teach a topic, or make a decision.
Let’s return to the sports example. If I were willfully ignorant, I would launch into a confident-sounding commentary about a game. I would share some strong opinions about the athletes’ techniques and strategies. If anyone were to tell me, “You don’t know what you’re talking about,” I would argue that what I’m saying is reasonable, valid, relevant, and sufficiently well-informed. After all, just by watching a sport for five minutes, I can learn what there is to know about it.
Willful ignorance isn’t just the state of not knowing something. It’s an attitude that blocks learning. It undermines intellectual humility and careful thought. If you’re just ignorant, you can become less ignorant. But if you’re willfully ignorant, how will you learn more?
In a developmental psychology course, the professor told us to find two things:
1) A newspaper article about a research study in child development.
2) The actual study itself (written up in an academic research journal).
We then had to do the following:
– Read both the newspaper article and the research paper.
– Evaluate the strengths and flaws of the study. (Some examples: How did the researchers select the sample, and was the sample size too small? How did the researchers define the concepts or phenomena they were studying? What were the weaknesses in the statistical analyses?)
– Note discrepancies between what the study actually found and the way the newspaper article reported the findings.
This was an eye-opening assignment. It showed me how study design and statistical analyses shape a study’s findings – and how newspaper articles misrepresent those findings, usually in the headline and opening paragraph: the parts designed to grab attention with bold claims, and the parts people usually don’t read past.
I recommend this as an exercise in critical thinking. Research papers are often behind paywalls, but not always (sometimes, a professor will have a copy on their site). And if you’re already a college or graduate student, you may be able to access journal papers for free using school library privileges.
One of the most annoying types of arguments to come across (for me, anyway) is the one involving forced binaries. A complex issue gets reduced to two possibilities – like nature or nurture, or the question of whether rape is about sex or power – and these two possibilities get treated as if they’re mutually exclusive. Pick one, and make your stand.
Whether you’re having a classroom discussion or arguing with someone online, here are three steps to take when you’re confronted by a forced binary:
1) Ask yourself what each choice really means. In the context of the discussion, how are people defining ‘power,’ ‘nature,’ or any other key word? Sometimes, you get a disagreement because people are thinking about the same concept in fairly different ways. If you clarify definitions, you may discover a greater degree of agreement than you expected.
2) Ask yourself if the choices are really mutually exclusive. Just start with, “Why not both?” and think about it from there. The two possibilities you’re forced to choose between may be interacting with each other in interesting ways.
3) Ask yourself if there are other factors at play. Forced binaries are simpler and tidier than reality. They’re also a great way to create two clear sides and pit people against each other. But the issues you’re discussing often have more complexity.
The quality of your thinking depends so much on your character. The company you keep is also important.
It’s not that intelligence doesn’t play a role. It’s just insufficient. Intelligent people don’t necessarily think with depth, either generally or in response to specific topics. There’s no guarantee that they’ll ever investigate their own opinions or question their own conclusions with any seriousness.
They may use their mental agility to deflect substantive pieces of evidence, anything that contradicts their view of “how things are.” These deflections can be harmful, shutting down important questions and preventing a much-needed discussion.
Intelligent people may be clever at crafting rationalizations or arguments that seem well-structured. Many times, they don’t question whether they’re behaving with integrity; it’s enough that other “right-minded” people are expressing the same thoughts. They may prioritize “owning” someone in an argument over learning anything. Or they use their intelligence mostly for snark and viciousness.
An intelligent mind may be a lazy mind. It may be narrow or given to exceptional dishonesty. (Context matters too. An individual can display in-depth thinking in one area of life while remaining superficial or dishonest in other areas – either not recognizing the superficiality or not being troubled by it, because it doesn’t cost them social approval.)
In a class I took a few years ago, the professor assigned readings every week and instructed the students to come up with some comments or discussion questions in response. The readings were primarily research articles in psychology and neuroscience.
At one point the professor brought to our attention that most of the time our comments were negative and critical. “The researchers could’ve done XYZ but they didn’t” or “You can’t use an ANOVA for these data, can you?” or “They didn’t perfectly control for XYZ, so their results are less conclusive.” These comments usually weren’t followed up with alternative suggestions, so the professor would try to coax them out of people. “How would you have improved on the study?” she’d ask. But she also wanted a substantive discussion of bigger-picture questions, so she started asking for questions larger in scope, accompanied by people’s own ideas for experiments. The discussion took on a different intensity then – more energetic and thought-provoking than when students just sat around picking at other people’s work.
Don’t get me wrong. It’s important to pick apart ideas and recognize a study’s limitations and flaws, whatever they happen to be: abuse of statistics, a poorly chosen subject population, a set of conclusions that’s too bold given the relatively weak results. That’s all a necessary part of critical thinking. Regardless of whether you’re a scientist or not you need to be able to evaluate people’s claims and see what merit they have.
But it’s also important to think in a more positive sense – generating ideas, asking questions, relating one topic to another, and considering the implications of different findings. Few student comments referred to the strengths of any given study – only the weaknesses.
I remember wondering at the time why negative remarks naturally dominated our discussions until the professor stepped in:
- We were afraid to look stupid. If we offered our own ideas, they could get shot down, maybe revealing the workings of an immature mind. What did we know? We didn’t want to take risks. Picking at other people’s mistakes protected us, for the most part, from criticism, and this was important because we worried too much about what others thought of us.
- Some of us wanted to look like hotshots in a game of one-upmanship. It was less about the research, more about scoring points off of other people.
- We were emulating certain professors. The one who ran the class wasn’t like this, but over the years I’ve known other professors who liked to devote their seminars to shredding the work of academic rivals in a mix of scholarly rigor and personal enmity (recently I watched a movie that explores this toxic mix).
- We were on the receiving end of frequent critical evaluation, sometimes of a very negative kind, so we liked being able to dish it out. It gave us a feeling of power.
- Making small, focused negative remarks took less effort than also trying to think of solutions or come up with new ideas or questions to investigate. Granted, our critical thinking, even if it was mostly negative criticism, took more mental effort than blindly accepting or rejecting something without justification; we did our homework. But for lack of time, training, knowledge, or willingness to put in the effort, we stuck to picking things apart.
One reason I respected the professor who taught that class was her balanced approach to criticizing other people’s work. She looked for flaws, but also for possibilities. She encouraged debate and discussion but didn’t permit nasty remarks. The idea was that we were supposed to take risks and think more broadly than a purely negative approach would allow, while also being perceptive enough to delve into the nitty-gritty details of a research study and understand its limitations.
Staying purely negative would have been a safer option. In playing the part of ‘superior critic’ we wouldn’t have had to confront our fears, insecurities and weaknesses as much, or take as many risks. And the discussions wouldn’t have been nearly as productive and inspiring.