Here’s How To Improve (Social) Psychology’s Academic Reputation

The social sciences, such as psychology and sociology, have been getting hammered as questions arise over the validity of findings coming out of various studies. The most heinous causes are incidents of fraud, a terrible blow to the field whenever one surfaces, but the problem goes beyond that. Research methods are being questioned, and classic findings long assumed to be reliable are thrown into doubt when replication attempts fail to reproduce and support them.

Rumblings in the social sciences are not new; an article on the subject by Jonah Lehrer back in 2010 attracted a lot of attention. More recently, Daniel Kahneman has tried to get the attention of his colleagues in the social priming field, sounding the alarm as replication attempts for classic priming studies have proven disappointing. His call to action involves greater collaboration between laboratories so that results can be checked in more transparent and trustworthy ways.

A snippet from a conversation between Kahneman and Ed Yong, author of an article on this subject in Discover magazine, caught my attention. To quote from the article:

Kahneman said that priming effects are very subtle, and could be undermined by small changes to experimental protocols at the hands of unskilled experimenters.

And this immediately reminded me of another controversy in the social sciences, when Daryl Bem published a paper detailing a series of classic experiments that had been slightly modified to test for paranormal phenomena (with significantly positive results). Many skeptics in the field responded critically, and the episode prompted some to call into question the most basic research practices in psychology. You see, Daryl Bem mostly followed textbook practices, so if results that many skeptics find incredible can emerge from them, there must also be something wrong with the basic research protocols we have been teaching students for decades.

The scientific method swung into motion in response to Bem’s studies, as many labs set out to replicate them. When some (but not all) research groups were unable to reproduce his results, many concluded that Bem’s findings could be discarded. On paper it looked like a win for the scientific method, especially if you are a critic. You have to understand that, for many scientists, this fringe area is seen as detracting from the integrity and credibility of the entire field; it is an eyesore to hordes of traditional scientists.

But those who are familiar with the parapsychology field know that ESP-type studies are typically conducted with greater scrutiny and attention to detail than the average study. One obvious reason is that if you want to arrive at convincing results in such a controversial arena, you have to be extremely diligent about ruling out alternative explanations for your results. Another reason has something in common with the priming field: when studied experimentally, ESP effects can be notoriously subtle. The debate continues as to whether the small but positive effects obtained in studies point to something real, or just noise.
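The statistical side of this point is easy to demonstrate. When a true effect is very small, a typical-sized study has low power, so most honest replication attempts will come up non-significant even though the effect is real. Here is a minimal sketch (not drawn from any of the studies discussed; the effect size, sample size, and test are illustrative assumptions) that simulates many two-group experiments with a tiny true effect and counts how often a standard significance test detects it:

```python
import math
import random

def simulate_replications(true_effect=0.1, n=50, trials=2000, seed=1):
    """Simulate two-group experiments with a small true effect and
    count how often one reaches p < .05 on a two-sided z-test.
    Effect size and sample size here are illustrative assumptions."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n)]
        treated = [rng.gauss(true_effect, 1.0) for _ in range(n)]
        diff = sum(treated) / n - sum(control) / n
        se = math.sqrt(2.0 / n)  # standard error, unit-variance groups
        if abs(diff / se) > 1.96:  # two-sided test at alpha = .05
            hits += 1
    return hits / trials

rate = simulate_replications()
print(f"significant replications: {rate:.0%}")
```

With an effect of 0.1 standard deviations and 50 subjects per group, only a small minority of simulated experiments reach significance, so a string of failed replications is exactly what an underpowered literature would produce whether the effect is real or not.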

In the wake of Bem’s paper, failed replications added weight to the notion that the original study must have been flawed in some way, or at the very least that its results are meaningless in the face of so much contrary evidence. You see, if a study can’t be replicated, its usefulness disintegrates. Kahneman implies that some replications of classic priming effects were carried out poorly; the same argument applied to replications of Bem’s studies would be dismissed by many as wishful thinking on the part of people who want to believe in ESP. Since priming effects rest on less exotic grounds, it’s a different story.

Whichever way you cut it, there is a serious problem when a field has generated a lot of data and you can’t tell how reliable that data is. My solution would improve both research methods and the reliability of important and influential findings: make replicating classic studies part of the curriculum at universities around the world. By repeating replications of classic studies on a periodic basis, and prioritizing the findings from the most-cited papers, the entire field would get a boost. Data would become more reliable, and students would learn stricter research methods than those of previous generations.
