Social psychology is having a rough time right now, with replication studies questioning the validity of several well-known, widely cited studies by central figures in the field. Is the field alone in having these problems? Probably not. Andrew Gelman presents some relevant factors that have made social psychology an easy target: the sophistication of the methods used, the overconfidence these methods instill in authors about their results, the openness of the field, the lack of strong financial interests, and the general public interest in the field. (The full post is well worth reading! http://andrewgelman.com/2016/09/22/why-is-the-scientific-replication-crisis-centered-on-psychology/ )

In short, the same issues are very likely present in other fields as well!

The debate can also get rough when titans clash. Susan Fiske has recently written a column in the APS Observer, lamenting how social media allows for unmoderated attacks on published work, which can damage careers. Instead, she wants academics to stick to the system where criticism is given in private or through a letter to the editor; basically, through a curated system with strong gatekeepers.

Andrew Gelman has written a lengthy reply to her column (where he also includes her letter), which I can only recommend reading in its entirety. It is well written, describes the historical development of the field (especially the last ten years), is easy to read, and makes excellent points. http://andrewgelman.com/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/

(While a very rough measure of prominence: on Google Scholar, Susan Fiske has about 70 000 citations and Andrew Gelman about 62 000. In my book, this qualifies as “titans” :))

Some notes from it:

  • Available methods became increasingly sophisticated, while the rigor with which they were applied did not keep up. Many researchers did not fully understand the methods and the assumptions underpinning them, leading to a raft of mistakes. These have come to light over the past few years.
  • Rather than fight the evidence of methodological errors, leading lights like Susan Fiske should accept them, learn from them, and lead the way towards better research.
  • One cannot have it both ways: one cannot both use statistical methods to underpin a claim and also refuse criticism from statisticians when the results are wrong.
    • “And let me emphasize here that, yes, statisticians can play a useful role in this discussion. If Fiske etc. really hate statistics and research methods, that’s fine; they could try to design transparent experiments that work every time. But, no, they’re the ones justifying their claims using p-values extracted from noisy data, they’re the ones rejecting submissions from PPNAS because they’re not exciting enough, they’re the ones who seem to believe just about anything (e.g., the claim that women were changing their vote preferences by 20 percentage points based on the time of the month) if it has a “p less than .05” attached to it. If that’s the game you want to play, then methods criticism is relevant, for sure.” (A short simulation after this list makes concrete just how misleading a “p less than .05” on noisy data can be.)

  • The complaint that such criticism and scrutiny on social media can damage promising careers? Well, it goes both ways:
    • “In her article that was my excuse to write this long post, Fiske expresses concerns for the careers of her friends, careers that may have been damaged by public airing of their research mistakes. Just remember that, for each of these people, there may well be three other young researchers who were doing careful, serious work but then didn’t get picked for a plum job or promotion because it was too hard to compete with other candidates who did sloppy but flashy work that got published in Psych Science or PPNAS. It goes both ways.”

  • Critics and those who scrutinize academic work are not adversaries (as Fiske called them in her letter) but people who take the trouble to check the work that was done. That is more the role of a collaborator.
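
To make concrete the point quoted above about “p-values extracted from noisy data”, here is a minimal simulation sketch (my own illustration, not from Gelman’s post; it assumes numpy and scipy are available). It runs many small, noisy experiments on a near-zero true effect and keeps only those reaching p < .05, the way selective publication would:

```python
# Minimal sketch: why "p < .05" on noisy data proves little.
# All numbers here are illustrative choices, not from any real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.05    # tiny real difference between groups
noise_sd = 1.0        # large measurement noise
n_per_group = 20      # small samples
n_studies = 10_000    # many labs running the same noisy experiment

significant_estimates = []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, noise_sd, n_per_group)
    control = rng.normal(0.0, noise_sd, n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < 0.05:  # the selective-publication filter
        significant_estimates.append(treatment.mean() - control.mean())

sig = np.array(significant_estimates)
print(f"true effect:                      {true_effect}")
print(f"studies reaching p < .05:         {len(sig)} of {n_studies}")
print(f"mean |estimate| when significant: {np.abs(sig).mean():.2f}")
```

With n = 20 per group and this much noise, an estimate has to be roughly 0.6 in magnitude before the t-test clears p < .05, while the true effect is 0.05; so the “significant” subset overstates the effect by an order of magnitude (what Gelman calls a Type M error). That is exactly why methods criticism of such claims is fair game.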
