See no evil, hear no evil, speak no evil: A tale of a moral dilemma.

Today marks a special day: I visited the website Biblehub.com, whose existence had eluded me up until now, mostly because I don’t believe in (any) god, but also because I’m marginally interested at best in religious history. However, today I googled the origin of the famous proverb “Let him who is without sin cast the first stone”. Apparently it’s an excerpt from John 8:7, referring to Jesus’s plea not to pass judgment on others unless you can consider yourself an entirely flaw-, sin- and errorless person (which in my eyes, no matter how inflated some egos are, is pretty much impossible). The original quote, paraphrased and translated in different ways, refers to adultery, about which I have nothing to say at the moment, but obviously every religious saying can be interpreted in a number of ways, so I figured it would be a good opener to what I want to unroll in this post: I have reason to believe that I have discovered a rather serious case of scientific misconduct, and I am not sure what to do about it.

The reason why I thought of this quote was that even, or especially, in the case of science, nobody is innocent. As much as we preach open science and transparent research methods and whatnot, I’m fairly sure everyone trying to survive in this jungle has done things that s/he is not proud of and would rather keep under tight wraps. I know I have, and I’m trying to change that with every new experiment I run. Anecdotally, I recently submitted a paper packed with null findings, so I suppose I’ll soon find out how fruitful non-replications under the veil of openness will be. What I want to say is: we’ve all done some not so kosher things, and of course we’ve all made mistakes, and continue to do so (coincidentally, “to err is human” can also be found on Biblehub). That’s fine, no problem with that. However, there are cases which clearly go beyond a simple instance of “Let’s look at the data and see if we need to continue testing to make the effect more robust” (beware, critics: I know that that’s bad too, but as I said, we’ve all been there). Cases in which data are tortured until they confess, void of any statistical morals, where results are made up entirely, where the hypothesis follows the data collection. There have been a number of large-scale data fraud cases (being situated in the Netherlands of course rings the Diederik Stapel bell right away, and recently everybody’s been trashing Brian Wansink, but the list is painfully longer), and there are retractions on a regular basis, so superficially one might think that these situations pretty much regulate themselves. Then again, if you’re a cynic like me, and agree that there is at least some truth to John 8:7, isn’t there reason to believe that, given the vast scientific output, a lot of wrongdoing actually remains undiscovered forever?

To me, the simple answer is yes. First of all, if you want to fabricate your data, I ascribe some degree of method to the deed, so technically, if you spend some time on it, chances are you can do it pretty well without anyone ever actually noticing. In other words, we might never find out what percentage of research is simply the result of someone getting their geek on with an Excel sheet. (Preregistrations, you say? I hear you, but I guess even then there are enough degrees of [individual] academic freedom to fuck up on purpose.) Unfortunately, this might be an unavoidable aspect of science, and there are only ways to limit it, but probably not to stop it entirely. But what about situations in which someone discovers fraud, but doesn’t do anything about it? Now here is where my personal dilemma starts.

Some background info on the case that’s kept me up at night and probably doesn’t work in favour of my blood pressure: During the last months, I have spent an unreasonable amount of time inspecting two papers by the same authors, both published in highly ranked journals, which report the same data without referring back to each other. More precisely, the first paper reports a very interesting study, and the second paper, published three years later, recycles the exact same data set (from the exact same participants, according to the demographic description of the sample), omitting one condition, but adding two new ones. Large parts of the methods and results sections are absolutely identical, and to crown it all, the other day I found a paper co-authored by some of the initial authors referring to the second study as a replication of the first one. Now pardon my scepticism, but replicating a dataset down to the last standard error does sound a bit unrealistic to me.

Boom. That’s it. I discovered this, and I had no clue what to do next. First I complained to a few people at work about it (still do, sorry about that, guys). Everyone agreed that it sucked, but nobody had clear advice on how to proceed. I then asked for opinions in the PsychMAP group on Facebook, which is usually rather vocal about everything methodologically unsound, but in this case was suspiciously silent, apart from two responders. I finally plucked up my courage and e-mailed the first author, explicitly asking how these obviously identical data points reported with a time lag of three years came about. In her defence, she responded. She even admitted that the data are the same. However, my follow-up questions – purposefully naïve in tone – have remained unanswered so far.

Now, I am not going to disclose who the authors and publications in question are, and here is why: I worry about what effect such an action may have on my future career. Of course I could blurt it out in a number of outlets, or even contact the editors of the respective journals, but careful consideration of potential consequences has held me back. The reason: I am a postdoc who would like to stay in academia. I don’t know if I’ve ever written anything that has required a big, fat “(sic!)” as much as this. But let me explain.

I know that in an ideal world, we should try our best at what we’re doing, which in this case would also include reporting a case of fraud, if only to clear up a misunderstanding (part of me still wants to believe that that’s what this is). However, my current fellowship expires next year, I will have to apply for new grants, and chances are that one or more of the authors will be part of the evaluation committee. Now I don’t know about you, but I would be pretty pissed if somebody rained on my parade like this (even if rightfully so), so I’m afraid that somebody whose work I have criticised will not exactly be willing to grant me money for research. Long story short: I think I’ll have to choose between sincerity and at least a chance of a new grant. Sucks to be me, doesn’t it?

Then again, I’m finding it hard to believe that I’m the first one who has spotted the “mistakes” in these publications, or any other scientific output, for that matter. I asked earlier how much fraud remains untouched just because people don’t dare to speak up about it. Well, I’m afraid it’s a frighteningly large amount. Of course we might agree that this particular instance is just a tiny speck in the universe of false science (which I don’t think it is, but I’m by now far too invested to be objective), because we’re not talking about a pharmacological study or a new cancer treatment. However, the longer I think about it, the more I question the values that initially drew me towards science (hypothesis testing, quantitative reasoning), and that unsettles me. So we can all flatter ourselves about how hard we try to improve science, but as long as people like me have to be afraid to actually take steps, I think we have a long way to go.

The number of publications shouldn’t be equated with academic rigour (actually, I’m getting so tired of saying this, but despite people claiming that this criterion has lost importance, I haven’t encountered proof of it), and decoupling the two might result in a decrease in scientific misconduct. I’m quite sure the current case came about, at least partly, because of publication pressure. But on the other hand, negative consequences for reporting such cases should be minimised as well, if only to encourage people to look at results more critically. You might have guessed that I have no solution to this problem at hand at the moment. I’m sure if you look hard enough you’ll find something appropriate on Biblehub though.
