Wednesday, April 15, 2015

Megajournals: answers to six troubling questions

Jillian Buriak, editor-in-chief of the ACS journal Chemistry of Materials, recently asked six troubling questions related to the "explosive growth" of manuscripts published in megajournals that consider all technically sound papers irrespective of originality or novelty of the proposed research. Even more troubling, she did not provide any answers. No wonder. They are tough and often initially confusing questions. Either that or rhetorical. Anyway, I have given them some thought and have come up with some answers.

"(i) How will the community of peer reviewers (all practicing scientists) handle the onslaught of review requests?"

I was initially confused by this question. In my experience, papers rejected at one journal are simply resubmitted to another journal until they are published. My initial impression was that the high rejection rate of some journals actually leads to more reviewing overall. And then it hit me! I should only review for select journals like Chemistry of Materials. That should keep my review pile nice and low. I know, I know, someone will eventually point out that in this way I only get to see papers deemed important by a single person at one of these journals. But no, at Chemistry of Materials this important decision is made by two people. And because "importance" is a completely subjective decision, this can be done in as little as 5-7 days!

"(ii) How do we as scientists manage, and sort through, the vast increase in number of published journal pages, let alone read them?"

Yes, this is a tricky one. Computers and crowd-sourcing are no good at dealing with vast amounts of information. I think the only solution is to agree on a list of journals we, as chemists, will review for and submit to. Also, there should be a "one-strike-and-you're-out" rule: if one of these journals on the list rejects your paper, that's it. As long as the list is relatively short, this can be strictly enforced. I mean, if two people say "this is not important," what are the odds that a third person will say any different? This should keep the number of published papers down to a manageable number. Well, or at least stop the increase.

"(iii) Some authors may be tempted by the apparent ease of publishing their work in a reports journal, but will these published reports make any lasting impact, or even be noticed?"

Of course it's tempting. As an author I spend loads of time writing and re-writing the lasting-impact parts of my manuscript. It takes ages to craft sentences that make these speculations as spectacular as possible without being demonstrably untrue! Pro tip: a slap-dash literature search makes the novelty section so much easier to write, and if "caught" you can honestly say that these key previous studies had escaped your attention. But remember, at this stage you have already gotten past the editor - sorry, editors - so it's well worth the "risk". Everyone knows that anything of lasting impact is only published in the most selective journals. And what self-respecting scientist discovers papers by search algorithms these days? Especially now that one can easily peruse the table of contents - now with delightfully quirky graphics - online!

"(iv) Is the goal of serving the public good, of doing high quality science with taxpayer funds, being diluted by the ambition of increasing one’s own publication metrics (i.e., sheer number of publications)?"

Yes, a paper is not primarily a way to share results; it's primarily a way to keep score. Why would the taxpayer care about papers that suggest a promising idea doesn't work? Or that an important published study can't be replicated? The taxpayer cares first and foremost about the individual careers of scientists, and this is best judged by the number of papers an individual has published - but only if a handful of people have said it's important. This is why where a paper is published matters so much. So when we make the list I propose in (ii), we should be sure to rank the journals in order of importance! I suggest a combination of collective gut feeling + some kind of average based on select citations.
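For concreteness (and this is my own assumption about what such an "average based on select citations" might look like, not anything Buriak proposes), think of the familiar two-year journal impact factor:

$$\mathrm{JIF}_Y = \frac{\text{citations received in year } Y \text{ to items published in years } Y{-}1 \text{ and } Y{-}2}{\text{number of citable items published in years } Y{-}1 \text{ and } Y{-}2}$$

Which items count as "citable" is, conveniently, yet another subjective decision.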

"(v) Can the scientific community, which is already over-burdened, maintain high scientific standards under these conditions?"

No! While some people include "reproducibility" under "scientific standards", it is hardly a "high" scientific standard like "importance" or "novelty". So a scientific community that tolerates the publication of numerous replication studies - positive as well as negative - cannot be said to maintain a "high" scientific standard. Importance and novelty are key. This can only be done by publishing in a few select journals with that as their focus. Remember, these are subjective standards best left to a few "pros".

"(vi) How will the explosion of citations, including self-citations, skew existing metrics?"

Again, I was initially confused by this question. But some Googling revealed that the primary function of citations is not really to refer to previous studies, but rather to provide another way to score the impact of a particular paper or researcher. Citation count is incredibly field-dependent, and chemists have worked long and hard to establish a gut feeling for citation count vs. impact; it would be a shame to lose this as more people start to share unimportant information. This is why the rules I have outlined under (i) and (ii) must be adhered to by all and strictly enforced by a select few. Also, with some clever statistics and moderate data massaging, citations are a wonderful tool for ranking journals!



This work is licensed under a Creative Commons Attribution 4.0 License
