Take care whom you trust
When new test results confirm old textbook knowledge, it is both frustrating and reassuring for researchers, and the data will probably be met with little doubt. If they contradict the prevailing view, however, experts and colleagues alike are likely to be taken aback. So to what extent does what has already been published shape current work at the laboratory bench?
Anyone who is not researching something utterly exotic faces a huge, almost unmanageable flood of information from scientific articles, regardless of the discipline. In biomedical research alone, more than five million publications, reporting new results as well as comprehensive reviews, have appeared in the past ten years. Anyone who wants to keep track of whether protein A might regulate gene B therefore has plenty of searching to do in the relevant literature databases. It is not for nothing that automated search robots for this textual jungle are becoming increasingly popular.
Suppose one of these now spits out ten results: three say the protein has an effect, five deny it, and the last two are somewhat contradictory. How does that affect the doctoral student whose recent experiments on mice argue against the regulation, while his colleague clearly observed an effect in cats? Will he skeptically shelve his results and recheck the experiments? Or present them at the next conference under the motto "now more than ever"?
Andrey Rzhetsky from Columbia University in New York and his colleagues are probably familiar with this dilemma, and as bioinformaticians they reached for their typical tool, the computer, to obtain an assessment. They used the GeneWays text-mining system to sift through millions of statements about protein-gene interactions and analyzed how an earlier publication affected the content of subsequent ones.
In doing so, they took a number of complications into account. The scientists knew that a positive result, a demonstrated interaction for example, is more likely to be published than a negative one. Their model also allowed for the possibility that experiments with false-positive results find their way into the specialist literature, and of course every rule, even a correct one, can have its exceptions. Nor did they forget that automatic text analysis cannot distinguish between new experimental data and a mere repetition of others' results; this factor was taken into account, as was the psychological effect that multiply confirmed results and outliers from the general opinion are presumably perceived differently.
All in all, the picture that crystallized out of the analyzed data was that of the mild skeptic: a researcher who takes ample note of others' results and compares them with his own work, but still places the greatest trust in his own findings. Rzhetsky and colleagues found a clear influence of others, with mainstream opinion outweighing more exotic views; data researchers generated themselves, however, received at least ten times the weight.
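The weighting described here can be made concrete with a small numerical sketch. This is my own illustration, not the authors' actual model: published claims and the researcher's own result each cast a vote, and the own result is counted ten times as heavily, the factor the study reports as a lower bound.

```python
# Toy model of the "mild skeptic" (illustrative only, not the paper's algorithm):
# each published claim votes +1 (supports the interaction) or -1 (contradicts it);
# the researcher's own experiment votes too, but with a hypothetical weight of 10.

def mild_skeptic_belief(published_votes, own_vote, own_weight=10.0):
    """Return a score in [-1, 1]; positive means 'interaction exists'."""
    total = sum(published_votes) + own_weight * own_vote
    weight = len(published_votes) + own_weight
    return total / weight

# The search-engine scenario from above: three papers in favor, five against
# (the two ambiguous ones are simply omitted); own mouse data says "no effect".
score = mild_skeptic_belief([+1, +1, +1, -1, -1, -1, -1, -1], own_vote=-1)
print(round(score, 2))  # -0.67: own data reinforces the literature's slight lean
```

With a weight of ten, the doctoral student's single negative experiment dominates the eight published votes; only a fairly lopsided literature could pull the score back across zero.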
But the palette offered even more. Rzhetsky and his collaborators were also able to identify the scientist who cares only about his own work: what colleagues publish does not interest him and accordingly does not influence him; in his world of ideas he is completely independent of others. The opposite extreme manifested itself as super-conformism: any prior finding consistent with one's own results affects the researcher so strongly that subsequent interpretations invariably tend in the same direction. If everyone were wired like this, a fairly uniform, anything but independent opinion would soon emerge, in which "inappropriate" results would hardly stand a chance.
The next type is completely different: he is particularly encouraged and inspired by precisely those papers that contradict the general picture, and his conclusions likewise depend heavily on earlier work. A curiosity of a special kind is a mixture of the last two forms: as long as no conflicts arise, this scientist follows the predominantly published opinion, but as soon as the first discrepancies appear, the advocate suddenly becomes an opponent who sides with the dissenters.
In any case, the opinion of colleagues apparently plays a decisive role in the further development of scientific ideas: earlier findings become veritable mini-paradigms. To its surprise, however, Rzhetsky's team found that for the most common type, the mild skeptic, the influence of previous publications was not nearly sufficient to explain the close connection between successive interpretations. Even simple, frequent parroting of earlier findings could not account for the observed pattern. So what made new work build so heavily on the old?
Rzhetsky and his colleagues speculate that it may simply be that experiments rarely produce false results and that correct positive findings far outnumber the erroneous ones; they call this possible explanation their "optimist world". However, the statistical analysis supported the exact opposite with the same probability: a "pessimist world" in which no result can really be trusted and every randomly selected positive statement is more likely to be wrong than right. Ultimately, both extremes lead to research that is strongly shaped by previous studies.
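The gap between the two worlds comes down to a base-rate argument that a short Bayes calculation can illustrate. The numbers below are purely hypothetical, chosen only to show the two regimes; they do not come from the study.

```python
# How likely is a reported positive claim to be true? Bayes' rule, with
# made-up rates: when real interactions are common, a positive report is
# trustworthy ("optimist world"); when they are rare, false positives
# dominate and most positive claims are wrong ("pessimist world").

def prob_true_given_positive(base_rate, sensitivity, false_positive_rate):
    """P(claim true | reported positive) via Bayes' rule."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Optimist world: half of tested interactions are real, errors are rare.
optimist = prob_true_given_positive(0.5, 0.8, 0.05)    # ~0.94
# Pessimist world: only 1 in 20 tested interactions is real.
pessimist = prob_true_given_positive(0.05, 0.8, 0.05)  # ~0.46
```

The point is that identical experimental error rates yield opposite verdicts on a random positive claim depending solely on how common true effects are, which is why the same statistical fingerprint in the literature is compatible with both worlds.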
Which of the two worlds we actually live in, however, becomes a decisive question of quality. In the pessimistic case, incorrect scientific knowledge would simply and continuously entrench itself; the researchers point out that only a redesign of the publication process or a new evaluation procedure could remedy this. In the optimistic case, by contrast, scientific progress would run like clockwork, and the doctoral student who does not bury his results in a drawer would be on the right track.