MESSAGE
DATE: 2017-02-28
FROM: Ruben Safir
SUBJECT: [Hangout-NYLXS] Peer Review
http://blog.scienceopen.com/2017/02/a-post-publication-peer-review-success-story/
A post-publication peer review success story
February 28, 2017
Author: Jon Tennant
In 2016, Dr. Joel Pitt and Prof. Helene Hill published an important
paper in ScienceOpen Research. In their paper, they propose new
statistical methods to detect fraudulent scientific data. Pitt and Hill
demonstrate the use of their method on a single case of suspected fraud.
Crucially, in their excellent effort to combat fraud, Pitt and Hill make
the raw data on which they tested their method publicly available on the
Open Science Framework (OSF). Considering that a single case of
scientific fraud can cost institutions and private citizens a huge
amount of money, their result is provocative, and it emphasizes how
important it is to make the raw data of research papers publicly available.
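(The blog post does not spell out Pitt and Hill's method, but to give a
flavour of this style of analysis, here is a minimal illustrative sketch
in Python of one classic screen for fabricated numbers: a chi-square
test for uniformity of terminal digits. The function name and the sample
data are hypothetical and are not drawn from the actual paper or its OSF
data.)

    # Illustrative sketch only -- not necessarily Pitt and Hill's procedure.
    # Idea: genuinely noisy count data tends to have roughly uniform last
    # digits, while invented numbers often over-use certain digits.
    from collections import Counter
    from scipy.stats import chisquare

    def terminal_digit_test(values):
        """Chi-square test of whether last digits look uniform on 0-9.

        A small p-value flags the sample as worth a closer look; it does
        not by itself prove fraud.
        """
        digits = [abs(int(v)) % 10 for v in values]
        observed = [Counter(digits).get(d, 0) for d in range(10)]
        expected = [len(digits) / 10] * 10
        return chisquare(observed, f_exp=expected)

    # Hypothetical example data, not from the actual study:
    suspect = [117, 117, 113, 117, 119, 113, 117, 111, 113, 117] * 10
    print(terminal_digit_test(suspect))  # tiny p-value: far from uniform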
The Pitt and Hill (2016) article has been read and downloaded almost 100
times a day since its publication on ScienceOpen. More importantly, it
now has 7 independent post-publication peer reviews and 5 comments.
Although this is a single paper in ScienceOpen’s vast index of 28
million research articles (all open to post-publication peer review!),
the story of how this article got so much attention is worth re-telling.
[Image: enhanced article-level statistics and context – just one of 28
million articles on our platform!]
Peer review history
The manuscript was submitted and published in January 2016, and the
final typeset version of the article was available for download on March
1st. Shortly after this, in May 2016, PhD student Chris Hartgerink
publicly peer reviewed the article, summarising it as “Interesting
research, but in need of substantial rewriting.”
It was after this that the article came to the attention of Prof. Philip
B. Stark, an Associate Dean at the University of California, Berkeley,
and author of the most highly read article on our platform with over
39,000 views to date!
Prof. Stark runs a course on the theory and application of statistical
models. In his course, groups of students replicate and critique the
statistical analyses of published research articles using the article’s
publicly available raw data. Obviously, for this course to work, Prof.
Stark needs rigorous research articles and the raw data used in the
article. In this sense, Pitt and Hill’s article on ScienceOpen was an
ideal candidate.
The groups of students started their critical replication of the Hill
and Pitt article in the Fall semester of 2016 and finished right before
the new year. By actively engaging with research in this way, students
gain the confidence and expertise to critically analyse published
research.
The Post-Publication Peer Review function on ScienceOpen is usually only
open to researchers with more than 5 published articles. This would
normally have barred Prof. Stark's groups from publishing their critical
replications. However, upon hearing about his amazing initiative,
ScienceOpen opened their review function to each of Prof. Stark’s vetted
early career researchers. And importantly, since each peer review on
ScienceOpen is assigned a CrossRef DOI along with a CC-BY license, after
posting their reviews, each member of the group has officially shared
their very own scientific publication.
This also means that each peer review can be easily imported into any
user’s ORCID, Publons, and even ImpactStory profiles – the choice is yours!
Public, post-publication peer review works
All of the complete peer reviews from the groups of students can be
found below. They all come with highly detailed statistical analyses of
the research, and are thorough, constructive, and critical, as we expect
an open peer review process to be.
Furthermore, unlike almost every other post-publication peer review
function out there, peer reviews on ScienceOpen can include embedded
graphics and plots. This awesome feature was added specifically for
Prof. Stark’s course, but note that it is now available for any peer
review on ScienceOpen.
- Maurer and Mohanty, who stated that the work was an important
demonstration of the use of statistical methods for detecting fraud;
- Hejazi, Schiffman and Zhou, who evaluated the work as comprehensible
but largely incomplete;
- Dwivedi, Hejazi, Schiffman and Zhou, who note that the research
strongly advocates detecting scientific fraud and the use of
reproducible statistical methods;
- Stern, Gong and Zhou, who call the research clever in its application
of the techniques it uses to address a pressing problem in science;
- Bertelli, DeGraaf and Hicks, who think the analysis is convincing and
valuable, but with a methodology that could be refined;
- Hung, Sheehan, Chen and Liu, who evaluated the paper, finding a few
minor discrepancies between their own results and those of the published
research.
[Image: a fine example from one of the students.]
So overall, a wide variety of findings emerged from the critical
replication project, each of which individually enhances the published
research.
Aftermath
Many of the peer reviews focused on a specific assumption that the Pitt
and Hill article made about how sets of numbers are distributed. We
talked to Dr. Helene Hill for comment. She noted that Dr. Pitt was
working on the distribution issue noted by many reviewers, and that she
was happy to see her research received such critical attention.
She added that the critical reception was:
just what I was hoping for — for our paper to be a model for posting
and analyzing data.
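(The post does not say exactly which distributional assumption the
reviewers questioned, so the following is purely a hypothetical
illustration, not the reviewers' analysis, of why assumptions about how
numbers are distributed matter in digit-based tests: terminal digits of
counts are only approximately uniform when the counts are large.)

    # Hypothetical illustration: last digits of small Poisson counts are
    # far from uniform, so a uniformity-based test applied naively to
    # honest small-count data could raise false alarms.
    import numpy as np

    rng = np.random.default_rng(0)
    for mean in (3, 300):
        counts = rng.poisson(mean, size=100_000)
        freq = np.bincount(counts % 10, minlength=10) / counts.size
        print(f"Poisson(mean={mean}) last-digit frequencies:",
              np.round(freq, 3))
    # mean=3: frequencies far from 0.1 each; mean=300: close to uniform.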
Prof. Stark said there are several things that this project accomplished
for him in terms of getting students actively involved in peer review:
- Get students thinking about alternative models for scholarly
publication;
- Get students thinking about reproducibility and open science;
- Get students to work collaboratively on a data analysis project that
involves thinking hard about the underlying science;
- Get students to register with ORCID;
- Get students to post their analyses on GitHub so that their own work
is reproducible/extensible;
- Get students their first scientific publication.
In another touch of Open Science brilliance, the reviews themselves are
designed to be completely reproducible, with the code for all the
students' calculations available on GitHub (e.g. here and here)!
Prof. Stark said:
I think it’s remarkable that ScienceOpen extended the platform and
your process to make this possible, including allowing people with fewer
than 5 publications associated with their ORCID to submit reviews,
figuring out how to allow figures in reviews, and that you are working
on allowing reviews with multiple authors. ScienceOpen was really a
partner in making this exercise possible.
We also asked some of the students how they found the peer review
exercise, many of whom praised Pitt and Hill's efforts to make the
research as reproducible as possible.
Stephanie DeGraaf: “The Hill and Pitt paper made the data publicly
available and explained their analyses thoroughly enough so that we
could reproduce all of their results. The paper focused on a really
fascinating topic of testing for fraudulent data, and I really enjoyed
thinking about how to tackle the problem in a statistically valid way. I
found it really interesting to see that even though all of us in the
class were reviewing this same paper, we all had different perspectives,
criticisms, and ideas for other ways to investigate the researchers’
claims.”
Aaron Stern: “The Hill and Pitt paper was a great choice for the purpose
of the course; not only did the authors employ interesting and novel
statistical methods for us to critique, but they also were tackling a
very important issue in science — namely, fraud. While we agreed with
the paper’s conclusions, we found a number of scenarios where their
approach applied to new datasets could result in false positives; i.e.,
their methods could impugn an innocent researcher. Thus, it’s important
to validate these methods thoroughly in order to avoid hurting innocent
scientists.”
Kenneth Hung: "It is not widely agreed among statisticians what
constitutes replications and reproductions. Writing this review gave me
new and broader perspectives, in comparison to the post-selection
inference background I came from, as well as exposure to common tools in
practice for detecting scientific fraud."
Nima Hejazi: “Constructing a review of the paper required extensive
collaboration, the use of open source software tools, and the leveraging
of statistical and domain knowledge for the purpose of detecting
fraudulent science – all in all an experience that demonstrated quite
well the challenges of working with real-world data and making use of
open-access publishing platforms.”
[Image: Prof. Stark's profile on our platform. Those stats are looking
great!]
Great success!
So we definitely count this as a major success story on several levels.
- Students gained experience in performing analyses for the sake of
reproducible research.
- Students also gained the skills and confidence to perform rigorous and
constructive peer reviews in public.
- Post-publication peer review works just as well as, if not better
than, traditional peer review.
- Openness facilitates recognition and reward for peer review, which is
crucial for those just starting their research careers.
- This whole exercise shows that just because research has been
published, that does not mean critical evaluation of it should stop.
So, what is the next step? Well, anyone who has an ORCID can peer review
any of the 28 million research articles on our platform. The reviews
don't have to be detailed statistical analyses: they can be critical
commentaries, additional notes and context, or an account of what your
own related research says. The point is, the choice is yours. And the
payoff is that you help to improve the context and progress of your
research field in the open, while sharpening your research skills and
receiving recognition and credit for doing so.
--
So many immigrant groups have swept through our town
that Brooklyn, like Atlantis, reaches mythological
proportions in the mind of the world - RI Safir 1998
http://www.mrbrklyn.com
DRM is THEFT - We are the STAKEHOLDERS - RI Safir 2002
http://www.nylxs.com - Leadership Development in Free Software
http://www2.mrbrklyn.com/resources - Unpublished Archive
http://www.coinhangout.com - coins!
http://www.brooklyn-living.com
Being so tracked is for FARM ANIMALS and extermination camps,
but incompatible with living as a free human being. -RI Safir 2013
_______________________________________________
hangout mailing list
hangout-at-nylxs.com
http://www.nylxs.com/