*Stephen Politzer-Ahles is Assistant Professor at the Department of Chinese and Bilingual Studies of The Hong Kong Polytechnic University. He is committed to finding solutions to current challenges in the cognitive sciences. For instance, he is developing efficient and transparent strategies to empty out his own file drawer.*

*p>.05*. We’ve all been there. Who among us hasn’t had a student crying in our office over an experiment that failed to show a significant effect? Who among us hasn’t been that student?

Statistical nonsignificance is one of the most serious challenges facing science. When experiments aren’t *p<.05*, they can’t be published (because the results aren’t real), people can’t graduate, no one can get university funding to party it up at that conference in that scenic location, and in general the whole enterprise falls apart. The amount of taxpayer dollars that has been wasted on *p>.05* experiments is frankly astounding.

Fortunately, there is a solution. In this post I would like to introduce to you a new R function, `phackR()`, which helps you find the significant result in your dataset. The logic underlying this function is simple: anyone who can’t get a significant result is lazy. A truly dedicated researcher, especially one who is aware how much money has been spent running participants and who knows that that money must not go to waste, will always be able to find a real result in their data. (This is precisely the point that Simmons, Nelson, and Simonsohn (2011) make in their famous paper, I’m sure of it.) This function simply assists in that process.

To use the function, simply feed it two vectors of data, like you would the `t.test()` function. (Currently it’s only set up to handle paired data, but since it’s so useful, I’m sure someone will soon update it to handle other designs.) You can also specify whether the alternative hypothesis you are testing is that the first vector is “greater” or “less” than the second (again, same as with the `t.test()` function). `phackR()` will then show you which sub-group of participants shows the expected effect, and it will offer some helpful suggestions for which moderating variable might be useful to explain the presence of different sub-groups.
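For the curious, the sub-group search described above can be sketched along the following lines. (This is a hypothetical illustration of the general idea, not the actual `phackR()` source; the function name `find_significant_subgroup` and the retry limits are invented here for the sketch.)

```r
# A hypothetical sketch of the sub-group-search logic: repeatedly drop
# participants and re-run a paired t-test until something comes out p < .05.
find_significant_subgroup <- function(cond1, cond2, alternative = "two.sided") {
  n <- length(cond1)
  # Try progressively smaller subsets of participants
  for (k in n:3) {
    for (i in 1:200) {  # a few random subsets per subset size
      idx <- sample(n, k)
      p <- t.test(cond1[idx], cond2[idx], paired = TRUE,
                  alternative = alternative)$p.value
      if (p < .05) {
        # Return the "true" sub-group and its hard-won p-value
        return(list(participants = sort(idx), p.value = p))
      }
    }
  }
  NULL  # even dedication has its limits
}
```

With enough subsets to try, pure noise will eventually clear the .05 bar, which is of course the entire point.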

Below is an example with some simulated data. This example shows how powerful the function is: even for simulated data with an effect size of zero, `phackR()` can successfully find the true effect that **you** wanted! Feel free to also try it out with your real data, and kiss your *p>.05* woes goodbye.

```r
# Simulate paired data with N=48 participants, a raw effect size of 0 (SD 2)
N <- 48
effectsize <- 0
effectsd <- 2
cond1 <- jitter( rep(0,N), amount=5 )
cond2 <- cond1 + rnorm( N, effectsize, effectsd )

# source the phackR function
source( url("https://raw.githubusercontent.com/politzerahles/phackR/master/phackR.txt") )

# Find the significant result!
phackR( cond1, cond2 )
```

It’s that easy! On this first day of April 2017, I am thrilled to share this powerful function with you all.

This post is inspired by the wonderful monetizr package, and by years of interacting with experimental psychologists.

I tried your function on real data, the iris data for example, but it failed. The common error message I got:

```
Error in xy.coords(x, y, xlabel, ylabel, log) :
  'x' and 'y' lengths differ
```

Thank you.


Hi Jiangsan, I think the Iris dataset does not contain paired data such as the stuff you would enter into a paired t-test. In any case, I do hope that you take this script with a big, big wink, and don’t use it for your actual research 🙂


Oh, it looks like there was a bug in the function (it’s taking the global value of `N` rather than figuring it out within the function). Thanks for pointing this out. I have now fixed it and it should work:

```r
source( url("https://raw.githubusercontent.com/politzerahles/phackR/master/phackR.txt") )
phackR( iris[,1], iris[,2] )
```

However, like Christina pointed out above, this function is only a joke so hopefully you aren’t planning on using it for real research!


Reblogged this on Psychology things and commented:

Too much yall :,D too much
