Let’s talk Methods

The topic of “New Methods” or “New Statistics” is a vocation for some, a pet subject for others, an unavoidable obstacle for yet others. And finally, it is an expression unheard of, or at least unfamiliar, to many*.

Whichever way we stand on this subject, however, what seems clear is that sooner or later we will have to face it in one way or another. And beyond being unavoidable, I am convinced that it is a topic that moves our field forward and carries long-term benefits.

There are many excellent and insightful resources on this topic out there (you could start with this article on the cognitive biases that can lead us to adopt sub-optimal methods), so no need to reinvent the wheel. However, there are two aspects of the “New Methods” discussion that strike me as noteworthy:

First, the way this topic is discussed and opinions are formed seems to me very encapsulated. A single lab’s culture often determines whether people are exposed to the discussion at all, and whether they end up in favor of it or against it. This is probably because the pursuit of these new methods is still an option, not an obligation, and it is not standard teaching material in university curricula. My own close research community is interested in and knowledgeable about these topics, and I would have to bury my head in the sand to avoid being exposed to them. However, I have also talked to many researchers who have not been part of this lively, somewhat geeky discussion, and for whom abbreviations like OSF are an unknown (don’t worry if you’re one of them – as I said, there are many!). And I have talked to other people who are very well in the loop about all these things, but who(se supervisors) think that occupying ourselves with changing the way we approach our research projects decreases our creativity and intellectual freedom, or, even worse, is something that unsuccessful scientists do to boost their publication records.

Second, while we see a lot of excellent resources on how we SHOULD do things and why, I see less about researchers’ personal experiences with actually putting some of these new methods into practice. Do they work for them? How time-consuming are they? What kinds of benefits and disadvantages have they experienced? How do they select which methods to adopt?

Like with many things, for me personally it’s always easier to try out something new when a friend or colleague has already done so and tells me about it than when all I have are rather abstract instructions online. That’s why we want to use this platform to share individual experiences among researchers working in different environments. We’ll start off soon with our own views on some of these topics.


* For those who are not even sure what I am talking about:

But what ARE New Methods?

The term “New Methods” has been coined over the last few years to describe reactions to the replication crisis and the severe criticisms of scientific practices in psychology and neuroscience that came with it. Think p-value inflation when we calculate uncorrected t-tests over multiple voxels in the brain, or trying out four different ways of excluding outliers (exclude trials with reaction times more than 2 SD or 3 SD from the mean, calculated at the individual or the group level) and reporting only the one that leads to a significant difference between the conditions of interest.

The former is wrong because, when conducting multiple comparisons, the likelihood of finding an effect by chance needs to be accounted for (otherwise you end up finding brain activation in a dead fish); the latter is misleading because, if every researcher tweaks results this way without reporting that she did so, the literature will overestimate the actual effects. And these two examples are just a glimpse into a vast and dark landscape of conscious and unconscious scientific misconduct – for an overview you could, again, look at the article I cited above.
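To make the first problem more concrete, here is a minimal simulation in Python – my own sketch, not taken from any of the resources mentioned above. We run a separate one-sample t-test on each of 1000 “voxels” that contain nothing but noise, and count how many come out “significant” with and without a simple Bonferroni correction. The numbers of voxels and subjects are arbitrary illustration values.

```python
# Minimal sketch: false positives from uncorrected multiple comparisons.
# All numbers (1000 "voxels", 20 subjects) are arbitrary illustration values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests = 1000      # e.g. voxels, all containing pure noise (the null is true everywhere)
n_subjects = 20
alpha = 0.05

data = rng.normal(size=(n_tests, n_subjects))
p_values = np.array([stats.ttest_1samp(voxel, 0.0).pvalue for voxel in data])

# Uncorrected, roughly 5% of tests come out "significant" by chance alone;
# after Bonferroni correction (alpha / number of tests), hardly any survive.
print("Uncorrected 'significant' voxels:", int(np.sum(p_values < alpha)))
print("Bonferroni-corrected:            ", int(np.sum(p_values < alpha / n_tests)))
```

Bonferroni is only the bluntest possible correction; the point here is simply that, without any correction at all, “hits” are guaranteed even in pure noise.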

Luckily, there are proposed solutions to most of these problems. Some seem easy to implement, some less so. These are often labelled “New Stats/Methods”, although they are frequently not new at all (for instance, the most obvious solution for the first problem is p-value correction for multiple comparisons – hardly rocket science). Cases like the second are a bit trickier: if you haven’t decided on a way to exclude outliers beforehand, what tells you that the way you end up doing it is wrong? You might just be reducing noise. These arguments are not invalid, but by trying several ways of determining your final data set and keeping the one that “works”, you end up doing something very similar to uncorrected multiple comparisons. So, very briefly, what has been proposed is a system of preregistration: declaring beforehand the steps you will follow. This forces you to think through your decisions in advance, reduces your (unconsciously exploited) researcher degrees of freedom, and leads to a clearer distinction between a-priori hypotheses and post-hoc decisions. More on that topic will follow, but this is one example of adopting new methods in science.
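To illustrate that last point, here is a second rough sketch, again just my own toy simulation rather than anyone’s actual analysis. The two conditions are drawn from the same distribution, so any “significant” difference is a false positive. I approximate the four exclusion rules from the example above as 2 SD or 3 SD cutoffs computed either per condition or over the pooled data, and count an experiment as successful whenever any of the four rules yields p < .05. All numbers (600 ms mean, 100 ms SD, 40 trials) are made up.

```python
# Rough sketch: flexible outlier exclusion as hidden multiple comparisons.
# Both conditions are identical (the null is true), so every "hit" is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments = 2000
n_trials = 40       # trials per condition; means and SDs below are made-up values
alpha = 0.05


def trim(x, mean, sd, k):
    # keep values within k standard deviations of the given mean
    return x[np.abs(x - mean) <= k * sd]


def p_after_exclusion(a, b, k, pooled):
    # t-test after excluding outliers beyond k SD, with the cutoff
    # computed per condition or over the pooled data
    if pooled:
        both = np.concatenate([a, b])
        m, s = both.mean(), both.std()
        a, b = trim(a, m, s, k), trim(b, m, s, k)
    else:
        a = trim(a, a.mean(), a.std(), k)
        b = trim(b, b.mean(), b.std(), k)
    return stats.ttest_ind(a, b).pvalue


flexible_hits = 0
fixed_hits = 0
for _ in range(n_experiments):
    a = rng.normal(600, 100, n_trials)  # condition A: noise around 600 "ms"
    b = rng.normal(600, 100, n_trials)  # condition B: same distribution
    ps = [p_after_exclusion(a, b, k, pooled) for k in (2, 3) for pooled in (False, True)]
    flexible_hits += min(ps) < alpha    # report whichever rule "worked"
    fixed_hits += ps[0] < alpha         # stick to one pre-declared rule

print("False-positive rate, best of four rules:", flexible_hits / n_experiments)
print("False-positive rate, one fixed rule:    ", fixed_hits / n_experiments)
```

The flexible strategy should come out somewhat above the nominal 5% (the four rules are highly correlated, so the inflation is modest but real), while sticking to a single pre-declared rule stays at roughly 5% – which is exactly what preregistration buys you.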
