C. Dale Poulter, a chemistry professor at the University of Utah and editor of the Journal of Organic Chemistry, knows these problems well. They are part of the reason he enlists the help of a data analyst.
When it comes to checking reported data, a reviewer or data analyst must make sure the spectra, elemental analyses, and other data required by the journal are there, Poulter explains. The presented data are then reviewed to be certain there aren’t any blatant misinterpretations. Any anomalies reviewers find could be the result of a simple mistake—such as a typo, math error, or loading the wrong data set. Or they might point to data manipulation.
Data manipulation remains rare, Poulter says: only about a dozen cases out of the 3,000 manuscripts submitted to his journal each year. These cases aren't reflected in paper corrections or retractions, he notes, because the papers are never accepted for publication in the first place. This works out to about a 0.4% hit rate, which is pretty low.
One assumes that reviewers are reasonably vigilant about such matters, but should there be a financial incentive to detect data manipulation? What would a $250 bounty for provable data manipulation (paid to the volunteer peer reviewer who detects it) do for data integrity?
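For what it's worth, here's a quick back-of-the-envelope sketch of what such a program might cost, assuming Poulter's figures (about a dozen cases per 3,000 submissions) held steady and every provable case were caught and paid out. The numbers are illustrative assumptions, not actual journal data:

```python
# Back-of-the-envelope math for the hypothetical bounty, using the
# figures quoted above. All inputs are assumptions for illustration.

cases_per_year = 12          # "about a dozen" detected manipulation cases
submissions_per_year = 3000  # manuscripts submitted to the journal annually
bounty = 250                 # hypothetical payout per provable case

rate = cases_per_year / submissions_per_year
annual_cost = cases_per_year * bounty

print(f"Detection rate: {rate:.1%}")              # -> 0.4%
print(f"Annual bounty outlay: ${annual_cost:,}")  # -> $3,000
```

At those numbers, the whole thing would run about $3,000 a year, which is cheap as journal expenses go. The harder question is whether the incentive would warp behavior, which brings us to the obvious loophole: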
(Of course, the unintended consequence would be people setting up fake professors and sending in bogus article submissions to get some extra cash for reviewing...)