About a third of the way into Ezra Klein’s new essay “How Politics Makes Us Stupid,” I met a stumbling block. Klein begins his essay by describing a 2013 study that tested whether political affiliation could compromise people’s ability to solve a simple statistical problem. In an experiment, researchers gave some subjects a stats problem about the efficacy of a skin-rash lotion, and others a structurally identical problem about the efficacy of a gun-control law. Here’s Klein’s summary of the results:

Being better at math didn’t just fail to help partisans converge on the right answer. It actually drove them further apart. Partisans with weak math skills were 25 percentage points likelier to get the answer right when it fit their ideology. Partisans with strong math skills were 45 percentage points likelier to get the answer right when it fit their ideology. The smarter the person is, the dumber politics can make them.

Consider how utterly insane that is: being better at math made partisans less likely to solve the problem correctly when solving the problem correctly meant betraying their political instincts. People weren’t reasoning to get the right answer; they were reasoning to get the answer that they wanted to be right.

Something’s not quite right with Klein’s inferences here, I’m pretty sure. Here’s a link to the research paper that Klein is describing: “Motivated Numeracy and Enlightened Self-Government” by Dan Kahan, Ellen Peters, Erica Dawson, and Paul Slovic. And here’s how the original authors phrase the results that have caught Klein’s eye:

On average, the high Numeracy partisan whose political outlooks were affirmed by the data, properly interpreted, was 45 percentage points more likely (± 14, LC = 0.95) to identify the conclusion actually supported by the gun-ban experiment than was the high Numeracy partisan whose political outlooks were affirmed by selecting the incorrect response. The average difference in the case of low Numeracy partisans was 25 percentage points (± 10)—a difference of 20 percentage points (± 16).

Klein has reported the numbers accurately, but his interpretation of them is fallacious. As you can see by comparing Kahan et al.’s words with Klein’s, Klein is correct when he writes that “Partisans with weak math skills were 25 percentage points likelier to get the answer right when it fit their ideology. Partisans with strong math skills were 45 percentage points likelier to get the answer right when it fit their ideology.” But Klein is in error when he adds, “The smarter the person is, the dumber politics can make them.” If higher-numeracy subjects are 45 percentage points more likely to identify the correct answer when they find it congenial, and lower-numeracy subjects are only 25 percentage points more likely to do the same under the same conditions, then the two gaps differ by 20 percentage points, as Kahan, Peters, Dawson, and Slovic note: math skill widens the congeniality gap, but it doesn’t lower anyone’s accuracy. Smarter people are in fact smarter (the trouble is that they only bother to use their smarts to confirm their political bias—more on that in a moment).
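To see why the reported gaps don’t support Klein’s reading, it helps to plug in some accuracy rates. The figures below are hypothetical—mine, not the study’s—chosen only to reproduce the 25- and 45-point gaps:

```python
# Hypothetical accuracy rates (illustrative, NOT the study's data),
# chosen to reproduce the reported congenial-vs-uncongenial gaps.
low_numeracy = {"congenial": 0.60, "uncongenial": 0.35}   # gap: 25 points
high_numeracy = {"congenial": 0.80, "uncongenial": 0.35}  # gap: 45 points

gap_low = low_numeracy["congenial"] - low_numeracy["uncongenial"]
gap_high = high_numeracy["congenial"] - high_numeracy["uncongenial"]

# The two gaps differ by 20 percentage points...
print(round((gap_high - gap_low) * 100))  # 20

# ...yet the high-numeracy group solves the problem at least as often
# in every condition. Math skill widens the bias; it never subtracts
# from anyone's ability to get the right answer.
assert high_numeracy["congenial"] >= low_numeracy["congenial"]
assert high_numeracy["uncongenial"] >= low_numeracy["uncongenial"]
```

Any assignment of rates consistent with the paper’s numbers has this shape: the difference-in-differences measures a widening of the bias, not a decline in skill.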

Klein also writes, “Being better at math made partisans less likely to solve the problem correctly when solving the problem correctly meant betraying their political instincts.” That’s *not* an accurate report of Kahan et al.’s results. In their study, being better at math did make partisans a tiny bit more likely to solve the stats problem correctly even when the correct answer contradicted their partisan druthers. (For the evidence, see the dotted blue and solid red curves in the lower graph of figure 6 in Kahan et al.’s paper; the drift is upward in both cases, though it’s an exceptionally modest one; that is, when solving a puzzle that declares that gun control increases crime, a liberal’s odds go up very slightly as his math skills improve, and so do a conservative’s odds when solving a puzzle that declares that gun control lowers crime.) Kahan et al. didn’t discover that math hurt problem solving. They discovered that math skills helped disproportionately when the correct answer confirmed the subject’s political biases.

Klein writes that “People weren’t reasoning to get the right answer; they were reasoning to get the answer that they wanted to be right.” In fact, the original researchers’ explanation was a bit more subtle. They noted that an easy wrong answer tempts anyone who first glances at the type of statistics puzzle they chose, and they suggested that when the easy wrong answer confirmed a partisan’s bias, he was more likely to fall for it. Partisans resorted to brain-taxing math skills only when the easy wrong answer contradicted what they hoped to hear.
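The pull of that easy wrong answer can be demonstrated directly. The puzzle is a two-by-two table of outcomes, and the tempting shortcut is to compare raw counts where the correct move is to compare rates. A sketch with illustrative numbers (mine, not the study’s exact figures):

```python
# Illustrative 2x2 outcome table (my numbers, not the study's):
# patients who used the skin cream vs. patients who didn't.
cream = {"improved": 223, "worsened": 75}
no_cream = {"improved": 107, "worsened": 21}

# Easy wrong answer: compare raw improvement counts.
# 223 > 107, so the cream looks like it "works".
heuristic_says_cream_works = cream["improved"] > no_cream["improved"]

# Correct answer: compare improvement *rates* within each group.
rate_cream = cream["improved"] / (cream["improved"] + cream["worsened"])
rate_no_cream = no_cream["improved"] / (no_cream["improved"] + no_cream["worsened"])
correct_says_cream_works = rate_cream > rate_no_cream

# The shortcut and the arithmetic disagree: the no-cream group
# actually improved at the higher rate (~84% vs. ~75%).
print(heuristic_says_cream_works, correct_says_cream_works)  # True False
```

Only the rate comparison requires any real numeracy, which is why, on the researchers’ account, partisans who liked the shortcut’s verdict had no occasion to deploy it.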

Kahan et al. did discover that math skills increased polarization. Not political polarization, though: within the experiment’s sample of subjects, political polarization was a given. What widened was the gap between a subject’s likelihood of solving the problem correctly when the answer confirmed his biases and his likelihood of solving it correctly when the answer contradicted them. Intriguingly, that gap was not only wider when math skills were higher. It was also wider among conservatives than among liberals. (The evidence is in the lower two graphs in figure 7 of Kahan et al.’s paper. In both graphs, the red bumps sit much farther from each other than the blue bumps do, which suggests that conservatives’ ability to solve the problem diverges more according to bias than liberals’ does.)