Thursday, May 13, 2010

"negative" data


We all have it. "Negative" data. It's the data that says "Hey, nothing changed!" or "Well, that's not what caused X response (at least under Y conditions, in Z setting)." But the question is always what to do with said data. Because after all, it does MEAN something. It means that whatever you were testing isn't the answer to that question, and ruling that out is just as important as determining what the answer actually is. Asking the question, testing it, and refining the hypothesis is an important and necessary part of the scientific process. I have a hard time remembering sometimes that negative data (that's such a bad phrase for it -- it's not bad data, there's nothing inherently negative about it at all. It just doesn't support the hypothesis. Does that really make it negative? Isn't there a better word for it? Sorry, tangent. Back to the subject at hand) is important too. It's just so much more exciting to look at a graph with nice changes, short error bars, and little asterisks denoting significance than to look at a graph of essentially the same bar repeated in different colors.


But the question is still there -- what do we do with the negative data? After all, we've now spent money and precious time collecting it, and it is worth something. But publishing negative data can be hard. And we don't want to be known as that lab, the lab that publishes only negative data (i.e., the lab that can't make anything work!). But in reality, if we published ALL of our data, wouldn't the magazines look something like this:



Figure 1. Realistic view of how much AWESOME science vs. how much negative science a lab generates.

So really then, what's a scientist to do? Most of my (negative) data thus far has been included in publications, but not showcased (i.e., tucked in as "data not shown"). But does that really move science forward?
