What happens when you stand up to the big wigs? A follow-up interview with Anne Scheel

Two years ago, Team CogTales (Sho and Christina) interviewed Anne Scheel. We were impressed by how she stood up to ask a tough question at Germany’s largest psychology conference (the DGPs Kongress) after a keynote presentation. Two years later, Christina and Anne actually met up at the next installment of the very same conference, and a lot has changed in that short time span. So it seemed like the perfect moment to catch up and take stock.

CogTales: Hi Anne, thank you for talking to us again. Of course we are curious to hear whether you think your actions had any consequences for you and/or your career.

Anne: Unfortunately we didn’t set this up as an RCT so I have no idea how control-group Anne is doing, but I’ll happily overinterpret the correlational data and say that it’s had quite an impact on my career — in a snowball-/Matthew-effect kind of way: First I gained a lot of Twitter followers and the attention of some more influential people, who then gave me a platform (like you!), which led to more people knowing me, and so on. Now I regularly get invited to give workshops and talks, I’ve been a guest on two podcasts… if you had told me that two years ago, I would have recommended you to go easy on the mushrooms. I still think it’s crazy.

In fairness, some of this has (very likely) been the result of starting a blog with three friends, as we’ll talk about below — the causal relations are hard to disentangle!

One of the few things that can be traced back to my DGPs moment with certainty is that several people have bought me a drink for it (you guys are amazing).

CogTales: You presented and co-chaired a symposium at this year’s DGPs conference as well, congratulations. How did your session on “Ramping up Rigor: Current Practices, Problems, Potential Solutions” go? Do you think the field has changed in the past years?

Anne: We had a bit of a thankless slot on the first morning of the conference but still got a nicely filled room and some really interesting discussions. I found this year’s DGPs fascinating — so many open science/reproducibility sessions! There were several different ones every day, some by people I hadn’t known before, including a large symposium and keynote on theory and a policy-focussed panel with representatives from several psychological societies.

I think it’s safe to say that the movement has grown in width and depth. My experience is probably not representative and I’m likely to overestimate adoption rates, but it feels like e.g. preregistration has become so common that in most psychology departments you might meet someone who has done it. At the same time, the discussions about improved research practices have become more specialised and cover more ground; see for example the increasing whispers about the “theory crisis”. To me that’s progress. We started discovering our problems at the far and most concrete end — the numbers that are churned out by our research process — and since we lifted the rug, we’ve been tracing the problem back to its roots. It’s a long way, and the questions that arise become more and more philosophical, which unfortunately doesn’t make them easier to answer. But in any case, it feels like the need for reform is turning into an accepted narrative. I find that very exciting.

 

CogTales: We also noted that you are now part of a blog team, of course we hope the positive experience with CogTales helped in that decision. How do you experience blogging?

Anne: Funnily enough, The 100% CI is also a consequence of DGPs 2016. After Susan Fiske’s keynote, Malte Elson, Julia Rohrer, and Ruben Arslan were among the people who stuck around to talk to her, and after the conference the four of us stayed in touch and eventually decided to start a blog together.

It’s been a great experience! I can highly recommend blogging in a team — it’s much easier to provide regular(ish) content and reach a wider audience, and putting your thoughts out into the world feels less scary because you can review each other’s posts before publishing. And it’s just way more fun. Not that I’d need to tell you guys that!

We’ve gotten lots of positive responses and I learned a lot from the others and from discussions with our readers. That said, I have to admit that I’m a terrible blogger; I take ages to write anything. At the moment it’s so bad that I can’t really take any credit for the blog at all, it’s the other three who keep it going. Luckily they’re not just funny and smart but also some of the kindest people I know and bear with me…

 

CogTales: Do you think more early career researchers should blog?

Anne: It depends! For the individual (blogger) it’s a great opportunity to structure your thoughts, practice writing, and of course make a name for yourself and build your network. But I personally have a relatively high bar for what I think should be published (cf. taking ages to write a post…). People have limited time and there are so many blogs out there, so if you want impact, I think you need to find a niche and/or aim for (very) high quality. And that can then add a lot of self-inflicted pressure you otherwise wouldn’t have.

But if your main goal is to practice writing and your blog is intended as more of a public diary or writing tool, these (potential) drawbacks don’t apply of course. And I can see a lot of value in using it that way! Blog posts are usually a lot less formal than scientific papers, and that can be a great help when you struggle to put your thoughts on a blank page.

 

CogTales: Two years ago we talked about the skepticism social media assessments of published papers face. Has it become more accepted to blog about a paper? Are blogging and tweeting now a more typical aspect of publishing and communicating science, or do we still have a long way to go?

Anne: That is a really interesting question. On the one hand I think that publicly criticising published papers has become a lot more common in the last two years. Take the “pizza papers” affair, the avalanche of faulty papers by Brian Wansink that was uncovered by Tim van der Zee, Nick Brown, Jordan Anaya, and James Heathers: This investigation has been a constant source of news in my corner of the Twittersphere for the last couple of years, and helped popularise error detection and “forensic statistics” e.g. using tools like statcheck, GRIM/GRIMMER, and SPRITE. “We found inconsistencies in a paper” is a sentence I find a lot less surprising (or alarming) now than I did a few years ago.
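(For readers unfamiliar with the tools Anne mentions: the GRIM test checks whether a reported mean is arithmetically possible given the sample size, since a mean of N integer-scale responses times N must round to a whole number. A minimal sketch in Python — the function name and example numbers are ours, not from any of the tools cited:)

```python
def grim_consistent(mean, n, decimals=2):
    """GRIM-style check: can `mean` arise from n integer values?"""
    # The sum of n integers must itself be an integer, so the nearest
    # integer to mean * n is the only candidate sum; its mean must
    # round back to the reported value at the reported precision.
    possible_sum = round(mean * n)
    return round(possible_sum / n, decimals) == round(mean, decimals)

# A mean of 5.18 from 28 integer responses is possible (sum = 145),
# but 5.19 from the same 28 responses is not.
grim_consistent(5.18, 28)  # True
grim_consistent(5.19, 28)  # False
```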

On the other hand, criticising individual authors still seems to make people very uncomfortable — so uncomfortable that it reliably triggers pushback. I still get the strong sense that some people are only fine with criticism if it is directed against a behaviour or a group of people that is ill-defined enough to prevent individual group members from being identified. I have no numbers on this, but this “effect” feels exactly as strong to me as it did two years ago, and I find that really interesting.

Regarding the second part of your question, blogging and tweeting about research feels like a very “established” way of communicating to me, and I regularly see blog posts cited in published articles. But I’m never quite sure how much that reflects an actual change in the field and how much is just a result of the bubble I’m in — it’s absolutely possible that less Twitter-focussed colleagues would disagree with me on this.

 

CogTales: So it seems like we’ve already come a long way, but there’s still a lot to do. Tell us – what are the most pressing issues we need to resolve in the way we do science in the years to come?

Anne: At the moment I’m very concerned about standards for the new research practices that have been promoted in recent years, for example preregistration and data sharing. The concept “preregistration” is currently very poorly defined — it can be anything from a few lines of text stating a vague hypothesis to an extremely detailed, watertight research protocol including the analysis code. If the label “preregistered” applies to anything between these extremes, it doesn’t tell us anything and will inevitably lose its value in the long run (a recent study by Veldkamp et al. shows that even relatively strictly reviewed preregistrations leave a lot of room for researcher degrees of freedom). And it leads to hard feelings: you now see cases popping up where a “preregistered” study reports a new finding, some sceptics don’t find it believable, and they criticise the preregistration as too vague. That then leads to surprised and frustrated responses from the authors, who feel unfairly treated because they already “went the extra mile” and still get flak.

If we want to get as many researchers as possible into this boat and also make sure that the new steps they are required to take really make their research more reproducible, the instructions have to be clear and fool-proof — without suffocating research lines that don’t fit neatly into the bog-standard hypothetico-deductive mould of experimental psychology research.

Similar issues apply to data sharing: making your data publicly accessible has become a lot more common, but the file formats, repositories, and data documentation currently in use are a huge mess, which is a problem for automated meta-analyses and long-term usability. Another aspect of data sharing that worries me is what happens when existing datasets do get reused: how can we make sure that secondary analyses don’t turn into the same overfitting spree that we just started to get a grip on for primary analyses? A group of people led by Sara Weston have done great work to develop a framework for preregistering analyses of pre-existing data, and I agree with them that it is possible. But what if individual researchers are less concerned with making use of such safeguards?

I see a risk of “collective overfishing” of existing datasets in the long run and wonder if we should replace open data with lightly-gated data: Imagine a central data-sharing platform where you have to request access to any dataset you want to retrieve. Authors could decide how strongly they want to restrict access to their data, e.g. fully private or access only for selected individuals/institutions. In the most “open” setting, anyone’s request for access would be approved immediately, but this instance of granted access would always be tracked and time-stamped. This kind of regulation could take a lot of strain off the current system, which relies (too) heavily on interpersonal trust for my liking (nullius in verba!). As added benefits, a central platform would make it easier to establish standardised data structures and meta-data, and could even include built-in tools to facilitate and regulate cross-validation (e.g. split requested datasets into training data and holdout and restrict access to the holdout for a certain time period).
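(The built-in cross-validation idea at the end could work roughly like this sketch, assuming the platform splits by hashing record IDs so every requester sees the same training set while the holdout stays embargoed. All names here are illustrative, not an existing platform’s API:)

```python
import hashlib

def split_holdout(record_ids, holdout_frac=0.2, salt="platform-secret"):
    """Deterministically split record IDs into training and holdout sets.

    Hashing (salt + id) gives a stable pseudo-random assignment: the
    same split is reproduced on every request, so the platform can
    release the training portion now and the holdout only after an
    embargo period.
    """
    train, holdout = [], []
    for rid in record_ids:
        h = hashlib.sha256((salt + str(rid)).encode()).hexdigest()
        # Map the 256-bit hash to [0, 1) and compare with the fraction.
        if int(h, 16) / 16**64 < holdout_frac:
            holdout.append(rid)
        else:
            train.append(rid)
    return train, holdout

train_ids, holdout_ids = split_holdout(range(1000))
```

Because the split depends only on the salt and the IDs, no state needs to be stored, and changing `holdout_frac` or the salt defines a fresh, independent split.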

Long story short: I think we’ve seen many great new ideas and solutions to our problems sprouting in the last years, and now we’re entering a phase of pruning and channelling those ideas into effective, actionable, sustainable tracks — in other words: new standards. Developing standards that achieve their goals with minimal negative side effects is an extremely hard, but extremely important task.
