Abolish statistical significance
More than 800 experts say: scientific results should no longer be sorted into two categories on the basis of the so-called p-value.

When was the last time you heard someone claim there was "no difference" between two groups because the difference was not statistically significant? If you are anything like us, it was probably at the last talk you attended. Hopefully at least one person in the audience grew suspicious, because the chart or table did in fact show a difference.
How do statistics so often lead scientists to deny differences that people less versed in statistics can plainly see? For generations, researchers have been cautioned that a statistically non-significant result in no way "proves" the null hypothesis, that is, the assumption that there is no difference between two groups or no effect of a treatment on the measured outcome. Nor do statistically significant results "prove" any other hypothesis. It is precisely these misconceptions that lead to exaggerated claims in the scientific literature and, what is far less widely appreciated, to supposedly contradictory findings where no contradiction actually exists.
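Why a non-significant result cannot establish the absence of an effect is easy to see in a small simulation. The sketch below is our own illustrative example, not data from any study discussed here: it assumes a modest true difference between two groups and small samples (both values chosen purely for illustration) and counts how often an ordinary t-test nonetheless comes out "non-significant".

```python
# Illustrative simulation (hypothetical numbers): a real group difference
# routinely yields p > 0.05 in small samples, so a "non-significant" result
# cannot be read as evidence that there is no difference.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_effect = 0.4      # assumed standardized mean difference between groups
n_per_group = 30       # assumed (small) sample size per group
n_studies = 10_000     # number of simulated studies

nonsignificant = 0
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    if p > 0.05:
        nonsignificant += 1

# Despite a genuine effect, a large share of studies is "non-significant".
print(f"Share of studies with p > 0.05: {nonsignificant / n_studies:.0%}")
```

With these assumed numbers, well over half of the simulated studies fail to reach p < 0.05 even though the effect is real in every single one of them.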
We have a few suggestions for how scientists can avoid falling victim to these misconceptions. Specifically: from a p-value larger than a threshold (such as 0.05), or from a confidence interval that includes zero, one must never conclude that there is "no difference" or "no association". Likewise, one should never conclude that two studies contradict each other merely because one produced a statistically significant result and the other did not. Such errors waste research effort and misinform decision makers…
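The second point, that a significant and a non-significant result need not contradict each other, can be made concrete with a small hypothetical calculation. The two effect estimates and standard errors below are invented for illustration; the point is that identical estimates can fall on opposite sides of the 0.05 threshold simply because one study is less precise than the other.

```python
# Illustrative sketch (hypothetical numbers): two studies report the very same
# effect estimate, yet one is "significant" and the other is not, because
# their precision differs. That is not a contradiction.
import scipy.stats as stats

studies = {
    "study A (large)": {"estimate": 0.30, "std_error": 0.10},
    "study B (small)": {"estimate": 0.30, "std_error": 0.20},
}

for name, s in studies.items():
    est, se = s["estimate"], s["std_error"]
    z = est / se                                   # Wald test statistic
    p = 2 * stats.norm.sf(abs(z))                  # two-sided p-value
    lo, hi = est - 1.96 * se, est + 1.96 * se      # 95% confidence interval
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{name}: estimate={est:.2f}, 95% CI=({lo:.2f}, {hi:.2f}), "
          f"p={p:.3f} -> {verdict}")
```

Both studies point to the same effect; only the width of the interval around it differs, so describing them as "conflicting" would be misleading.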