
A Cambridge professor on how to stop being so easily manipulated by misleading statistics

http://qz.com/643234/cambridge-professor-on-how-to-stop-being-so-easily-manipulated-by-misleading-statistics/

“There are three kinds of lies: Lies, damned lies, and statistics.” Few people know the struggle of correcting such lies better than David Spiegelhalter. Since 2007, he has been the Winton professor for the public understanding of risk (though he prefers “statistics” to “risk”) at the University of Cambridge.
In a sunlit hotel room in Washington DC, Quartz caught up with Spiegelhalter recently to talk about his unique job. The conversation sprawled from the wisdom of eating bacon (would you swallow any other known carcinogen?), to the serious crime of manipulating charts, to the right way to talk about rare but scary diseases.
When he isn’t fixing people’s misunderstandings of numbers, he works to communicate numbers better so that misunderstandings can be avoided from the beginning. The interview is edited and condensed for clarity.
Quartz: You have one of the most unique jobs in the world. What does your job involve?
Spiegelhalter: Most of the time I’m working on quantitative and qualitative evidence. I give a lot of talks, write books, and advise people who want to communicate numbers. I also get called by the media to talk about numbers and whether we can believe them. So although my post is called “professor for the public understanding of risk,” I interpret it as professor for the public understanding of statistics.
In terms of research, my work is mostly collaborative, working with psychologists, mathematicians, and others who are trying to find ways to communicate risk. My current project, for example, is working on a website for families with babies that have congenital heart disease.
What we are communicating are simple statistical issues, such as underlying risk, standard errors, and variability. But they are extremely difficult to communicate clearly, even to people with some training in statistics. So we spend a lot of time with patient groups, revising the wording again and again, until we end up with something that is understandable without being technical or misleading.
What’s a recent example of misrepresentation of statistics that drove you bonkers?
I got very grumpy at an official graph of British teenage pregnancy rates that apparently showed they had declined to nearly zero, until I realized that the bottom part of the axis had been cut off, which made it impossible to visualize the (very impressive) 50% reduction since 2000.
You once said graphical representation of data does not always communicate what we think it communicates. What do you mean by that?
Graphs can be as manipulative as words, using tricks such as cutting axes, rescaling things, or flipping data from positive to negative. Sometimes starting the y-axis at zero is the wrong choice. So to be sure that you are communicating the right things, you need to evaluate the message that people are taking away. There are no absolute rules. It all depends on what you want to communicate.
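The truncated-axis effect he describes is easy to reproduce. Below is a minimal matplotlib sketch in Python with invented numbers (not the actual UK teenage pregnancy figures): the same roughly 50% decline looks like a plunge to almost nothing once the bottom of the y-axis is cut off.

```python
# A minimal sketch of the axis-truncation trick described above.
# The numbers are invented for illustration; they are not the real UK data.
import matplotlib.pyplot as plt

years = list(range(2000, 2015))
rate = [45 - 1.6 * i for i in range(len(years))]  # roughly a 50% decline

fig, (ax_full, ax_cut) = plt.subplots(1, 2, figsize=(8, 3))

ax_full.plot(years, rate)
ax_full.set_ylim(0, 50)                # axis starts at zero: an honest ~50% drop
ax_full.set_title("Axis from zero")

ax_cut.plot(years, rate)
ax_cut.set_ylim(min(rate), max(rate))  # bottom cut off: looks like a fall to nothing
ax_cut.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```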
Surely though, in your years of work, there must be some lessons that those involved in communicating risk—journalists, politicians, doctors and such—can take away. What are they?
There are. We know, for example, that “relative risks” can be used to look impressive: twice a small number is still a small number. We know that talking in whole numbers—so many people out of 100—is clearer than talking in percentages or decimals. We know that, done right, visual representations can often do a better job of explaining numbers, especially to those with low numeracy.
We’ve used this knowledge, working with psychologists around the world, to build guidelines for how people can best communicate risk. But there are still things that we haven’t got a good answer to. For instance, we know that people think 30 out of 1,000 is bigger than 3 out of 100: we can make numbers look bigger by manipulating the denominator. As a statistician, the perception of numbers is new to me. I thought people would know that 3 out of 100 is equal to 3%, which is equal to 0.03. But to many people they feel very different!
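Both points lend themselves to a quick worked example. The Python sketch below uses hypothetical risk figures (not numbers from the interview) to show how a doubled relative risk can still be a tiny absolute change, and that 3 out of 100, 3%, 0.03, and 30 out of 1,000 are the same quantity dressed up differently.

```python
# Illustrative arithmetic only; the risk figures are hypothetical.

# "Twice a small number is still a small number": relative vs. absolute risk.
baseline_risk = 0.001        # 1 person in 1,000
exposed_risk = 0.002         # 2 people in 1,000
relative_risk = exposed_risk / baseline_risk      # 2.0 -> "risk doubled!"
absolute_change = exposed_risk - baseline_risk    # 0.001 -> 1 extra case per 1,000

print(f"Relative risk: {relative_risk:.1f}x")
print(f"Absolute increase: {absolute_change * 1000:.0f} extra case per 1,000 people")

# The same quantity in formats that people perceive very differently.
as_frequency = 3 / 100       # "3 out of 100"
as_percent = 3.0             # "3%"
as_decimal = 0.03            # "0.03"
as_rescaled = 30 / 1000      # "30 out of 1,000": identical risk, bigger-looking numbers
assert as_frequency == as_decimal == as_rescaled == as_percent / 100
print(f"3 out of 100 = {as_percent:.0f}% = {as_decimal}")
```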
The bottom line is that humans are very bad at understanding probability. Everyone finds it difficult, even I do. We just have to get better at it. We need to learn to spot when we are being manipulated. Changing axes on a chart is one way, but there are many other subtle ways to do it.
What if humans were perfect at understanding probability? How would things change?
Oh, we would be strange people I think. *laughs*
But maybe not. Take the example of lotteries. People know that the chance of winning a lottery is low. The probability of winning the UK jackpot is about 1 in 45 million. The way to illustrate that is: Think about a big bath, fill it to the brim with rice. That’s about 45 million grains of rice. Then take one grain of rice, paint it gold, and bury it somewhere in there. Then you ask people to pay £2 to put their hand in and pull out that golden grain of rice.
That is a good image and it seems ridiculous. But people do win. Last year, there were two people who drew the winning number. So people care about the small but real chance of a huge change.
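There is no contradiction between near-impossible per-ticket odds and regular winners: with tens of millions of tickets in play, a winner most weeks is what the arithmetic predicts. Here is a small Python sketch, using a hypothetical ticket-sales figure that is not from the interview.

```python
# Why "people do win" despite 1-in-45-million odds: many tickets are in play.
# The ticket-sales figure is an assumed, illustrative number.
import math

p_win = 1 / 45_000_000       # per-ticket jackpot probability quoted above
tickets_sold = 30_000_000    # assumed tickets in a single draw (hypothetical)

expected_winners = tickets_sold * p_win
p_no_winner = (1 - p_win) ** tickets_sold    # chance that nobody hits the jackpot

print(f"Expected jackpot winners this draw: {expected_winners:.2f}")
print(f"Chance of no winner at all: {p_no_winner:.1%}")
print(f"Poisson approximation: {math.exp(-expected_winners):.1%}")
```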
My hope would be, if we understood probability perfectly, then we would be less open to manipulation: people trying to sell things, scare others, or even falsely reassure someone. But it may not change behavior. All the studies show that, even with good risk communication, people carry on doing what they did before.
Is this why you say that, through your work, you only want to inform people, not change their behavior?
I don’t particularly want to change behavior. I feel that it would be better if people lived healthier lives, so that they can see their grandchildren grow up. That would be a good thing.
But that’s not my primary aim. My hope is that people are aware of the risks. That if they are doing something then they know the consequences.
This morning I was eating a carcinogen—bacon. It is classified in the same category as smoking, but I happily ate my carcinogen this morning. But I’m aware that, if I eat bacon every day in substantial quantities, it does increase my risk of getting bowel cancer and dying earlier.
If rational decisions are not the outcome you are looking for, why bother?
Depends on what you mean by rational. I don’t like that word. You could use other words, like “value-congruent”: decisions that fit in with what people feel are the appropriate values. Those are the decisions they will make and not regret in the future. People will take the consequences if they feel they are autonomous human beings and have made a judgement on their own.
So not “rational” in the narrow sense of a logically perfect outcome. But if “rational” is taken to mean something broader, something in which your actions, emotions, and values fit together in a coherent whole, then my hope is that people will make rational decisions.
Poorly communicated risk can have a severe effect. For instance, a news story about the risk that pregnant women expose their unborn child to when they drink alcohol caused stress to one of our news editors, who had drunk wine in moderation throughout her pregnancy.
I think it’s irresponsible to say there is a risk when they actually don’t know if there is one. There is scientific uncertainty about that.
In such situations of unknown risk, there is a phrase that is often used: “Absence of evidence is not evidence of absence.” I hate that phrase. I get so angry when people use that phrase. It’s always used in a manipulative way. I say to them that it’s not evidence of absence, but if you’ve looked hard enough, you’ll see that most of the time the evidence shows a very small effect, if any at all.
So on the risks of drinking alcohol while pregnant, the UK’s health authority said that as a precautionary step it’s better not to drink. That’s fair enough. This honesty is important: to say that we don’t definitely know whether drinking is harmful, but to be safe we say you shouldn’t. That’s treating people as adults and allowing them to use their own judgement.
Science is a bigger and bigger part of our lives. What is the limitation in science journalism right now and how can we improve it?
The dedicated science journalists I know are very impressive people and they make a huge effort to put out a balanced, accurate story. The problem is when science stories leave science journalists and get into the hands of general journalists. Then you do see ridiculous manipulation of evidence and stories. So journalism about science has problems, especially once a story leaves the hands of those who understand what’s going on.
It is, of course, the ultimate challenge to be true to the facts but also to be vivid, to arouse enough emotion to make people read the story. It’s terribly difficult. That’s what I’m working on now. My job is to turn rather unexciting things into a vivid enough story, like the effects of drinking alcohol or eating a bacon sandwich. Finding the drama in the mundane is the real challenge.
Currently the world is playing a waiting game on the evidence of whether or not the Zika virus causes birth defects. What do you think about risk communication under these conditions?
It’s a classic case where precautionary measures are appropriate. I would say that there is sufficient evidence to take precautions, such as not getting pregnant if you or your partner have been to an affected area.
It’s a temporary holding measure, and it’s an appropriate form of risk communication. In the future, we will be able to give a much stronger opinion. What it allows you to do is acknowledge scientific uncertainty. You don’t need to claim a causal link and overstate the case; you can simply say that there is enough evidence to be cautious.
Though we don’t know the exact incidence of microcephaly, we have an idea that the number will be low; otherwise we would have seen a much larger number of cases. On the radio, people talk about the “high risk” of getting microcephaly, but that’s not the case. The risk is probably “higher,” but the absolute risk is not likely to be high. This kind of risk communication increases people’s anxiety unnecessarily.
Instead, the message should be: this is what we know; this is what we don’t know; we don’t yet know what the risks are, and we are doing this to find out. In the meantime, to be on the safe side, you might want to do X, Y, and Z. That’s self-empowerment if you are anxious about it. Then we will come back to you, and the recommendation will be updated in the future. It’s an adaptive and flexible strategy.
This has been the case for most pandemics. For instance, estimates of swine flu cases were wildly exaggerated when the outbreak began. That’s not without reason; it’s precaution. But it’s important to communicate that the figures are provisional and will be updated as we gather evidence.
