Guest blogpost from Anna Fedor (postdoc at Eötvös University, Budapest)
I honestly don’t remember how I joined the Reproducibility Project. I looked up the first e-mail in my Google account containing the words “Reproducibility Project”, and it was a thank-you note from the Open Science Collaboration for responding to a survey. It was January 2013, and my daughter was 3 months old, which might explain the blur. At the end of the e-mail, it said, “If you expressed an interest in helping with any of our projects, we’ll get in touch with you within the next week.” So I must have ticked a box saying that I was interested!
In fact, I had been interested in reproducibility issues for years. My PhD work included the replication of a study by Fitch and Hauser. It did not replicate, and I felt uneasy when I wrote this down in my thesis. Later, an internal investigation at Harvard University found Hauser guilty of scientific misconduct, and one of his papers was retracted too. This made me more comfortable with my results, as if I needed an excuse for a failed replication. I know how stupid that sounds.
Soon afterwards, during the first months of my first postdoc job abroad, I gave a talk about my past work. I summarized the unsuccessful replication and mentioned the case against Hauser (I quoted from The New York Times). Half an hour later, a senior scientist who had been present at the talk publicly shamed me on a departmental mailing list for accusing a well-known and respected scientist. She treated the subject as taboo, even though the investigation into Hauser had already been closed, so his guilt was no longer speculation.
This just made me more curious about reproducibility and replications. I went to talks on the subject by David Shanks and Ap Dijksterhuis, and watched Geoff Cumming’s online lectures on the new statistics. This is when I first heard about PsychFileDrawer and the OSF. So it’s safe to say I was primed on the subject when I ticked that box between a diaper change and a breastfeeding!
Unfortunately, at that time I did not have access to a participant pool, so I could not run an experiment myself. Instead, I helped the project with various small jobs. We had a list of to-dos, and anyone could pick whatever they wanted from it. These included reviewing the protocols, the statistical analyses, and the replication reports; rating original and replication studies according to various criteria; and writing and editing the manuscript. The organizers kept track of who did what, and you had to complete a certain number of jobs to be considered a co-author.
It was an amazing experience to be part of such a huge project. We had a very busy mailing list with interesting discussions. All documents and results were shared, so anyone could peek into any of the subprojects (not just the authors — really, anyone). The most interesting part for me was seeing how different people approached certain sensitive subjects. There was some hostility towards the project, especially from authors whose experiments did not replicate. We had long discussions about how to handle these situations so as not to offend people, how exactly to word what an unsuccessful replication means, and what its possible reasons could be. There were also lengthy debates about statistical problems and interpretations of data, from which I learnt a lot.
In this project, I learnt not only about scientific methods, but also about project management, online collaboration, organizing teamwork, and communication. It definitely changed how I do science now. I try to use the new statistics and to convince my colleagues to pre-register our studies. It also changed how I read scientific papers: I pay more attention to sample sizes and effect sizes, and I try to imagine how the results would look without a “significant” stamp on them.
I hope the reproducibility crisis will eventually change policies. Already, 538 journals and 57 organizations have joined as signatories of the TOP guidelines. Pre-registration of studies is already required at some journals. I hope that publishing replications and non-significant results, and explicitly differentiating exploratory data analysis from confirmatory tests, will become more and more common. The rise of open access journals can definitely help this process. Maybe projects like the Reproducibility Project can contribute to a more honest and open science. Am I naïve? I have to be!
Anna Fedor is currently a postdoc at Eötvös University, Budapest, where she also obtained her PhD in evolutionary biology. Before her current job, she completed postdocs at Birkbeck College, London and the Parmenides Foundation, Munich. Anna has worked on a variety of exciting topics, from computational models of word-finding difficulties in children and problem solving, through object permanence in primates, to language evolution and grammar learning in humans.