Check out this related story on PERA's policy blog, PERA on the Issues:
How to Be a Smart Consumer of Information
On any given day, we’re bombarded by data. Numbers, formulas, analyses, and comparisons help inform our decisions and occasionally even change our minds on some issues.
Much of the data we see and hear comes in the form of studies. Between traditional and social media, we constantly see the results of studies on social science topics ranging from health and behavior to whether our cats actually need us. But how do you know if the information or conclusions are valid, biased, or designed to push a particular viewpoint?
Recent research has called into question the credibility of published studies from the social sciences, such as psychology and political science. However, it’s not just the social sciences getting a bad rap for questionable practices. Beverage giant Coca-Cola came under fire for not disclosing its role in a study that claimed lack of exercise—rather than eating (and drinking) habits—was to blame for America’s growing obesity problem.
A paper published earlier this year by the Centers for Disease Control and Prevention (CDC) found that a lack of trustworthiness in medical research usually stems from bad study design. The authors laid out principles for determining the worth of a study, noting “…publishing innovative but severely biased studies can do more harm than good.” They also pointed out that flawed studies are often disseminated by the media and influence public policy.
Another body of research, this time from the field of education, offers some tips for policymakers considering whether to use the conclusions of a study as the basis for policy changes. Given the depth to which these papers go in guiding us on how to interpret conclusions, it’s easy to understand why such guidance doesn’t fit into a 30-second soundbite.
So how does anyone know who or what “data” to trust? Here are some tips:
1. Consider the source.
Was the study performed by an independent third party, or by a person or organization with an agenda? Be wary of studies that seem to confirm the biases of the entity conducting them. Conversely, see who might be challenging the results, or even trying to discredit them. What is their motive for disagreeing?
2. Follow the money.
Does the person or organization paying for the study have an ulterior motive or agenda? Is it trying to influence the outcome or validate its own perspective or worldview? Just because the funders of a study have an agenda doesn’t necessarily mean the conclusions are flawed, but it can cast a shadow of doubt on the results.
3. Think critically.
Often, we see the results or conclusions of studies without ever seeing the original questions or premise, much less the full set of underlying data. You don’t have to be a scientist or a policy wonk to get a sense of whether a study was well conducted.
4. Ask hard questions.
Often, good research will raise as many questions as it answers. This doesn’t mean the research is bad—it may be quite the opposite. Good data can help inform or even shift an opinion, but no single study can unlock all the secrets of the universe. In fact, we should be skeptical of research that seems to be trying to do just that. Instead, using data to inform our views should be a continual process. Use data to prompt hard questions, and then look for more data to get the answers—and more questions.
5. Be open to other points of view.
Confirmation bias can cloud even a skeptic’s best thinking about controversial topics. This doesn’t mean all facts are created equal (see points one through three). Preferring the conclusion of one study over another doesn’t make that conclusion more, or even equally, valid. Be willing to question data, especially if it confirms what you already believe. And don’t be afraid to get “no” for an answer. That’s just an opportunity to find more, or better, data.