Wednesday, February 1, 2012

scientific journals in the e-publishing age

There's been a bit of discussion on Google+ (John Baez - Jan 30, 2012) on the future of costly scientific journals. As print fades into history, there is no reason why scientists cannot have a system where their so-called pre-publications (e.g. on arXiv, a current source for many of these) can be reviewed and, with revisions acceptable to a peer community, be designated as published.

Proposal: arXiv continues as it is, and some group creates a review site (completely independent of arXiv) that accomplishes the intended goal of reviewing the articles on arXiv. When an article on arXiv gets a pass from the scientific peer community, it's designated as published.

(The problem, as has been pointed out, is for some group to actually go and create it.)

What would be the result? A free and open article submission and access system as it exists now (arXiv) and an independent review system (arXiv-Review).

Update (2012-02-02): Looks like the site has been created.
Created On: 02-Feb-2012 03:35:26 UTC

(There's also a Google+ Page: +ArXiv Review)

The first step is to provide a Primer and a Goals and Mission Statement for the site.

Here is a start:

arXiv-Review is an openly accessible, moderated forum for commenting on and reviewing arXiv articles. (For information about arXiv, see arxiv.org.) To provide for this, each article submitted to arXiv can potentially have a review thread on arXiv-Review for comments and reviews.

Each review thread in arXiv-Review is identified by following the same reference scheme that arXiv uses. For example, corresponding to arXiv:math/9910001v1 there is a potential review thread arXiv-Review:math/9910001v1.
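The mapping between the two reference schemes is purely mechanical. A minimal sketch (the function name is illustrative, not part of any actual arXiv-Review software):

```python
def review_thread_id(arxiv_id: str) -> str:
    """Map an arXiv reference to its arXiv-Review thread reference.

    The review scheme mirrors arXiv's own reference scheme exactly;
    only the "arXiv" prefix changes to "arXiv-Review".
    """
    prefix = "arXiv:"
    if not arxiv_id.startswith(prefix):
        raise ValueError(f"not an arXiv identifier: {arxiv_id!r}")
    return "arXiv-Review:" + arxiv_id[len(prefix):]

print(review_thread_id("arXiv:math/9910001v1"))  # arXiv-Review:math/9910001v1
```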

Browsing arXiv-Review is open to everyone (the "readers"). Those who comment on and review articles (the "reviewers") must register on arXiv-Review. In addition, a reviewer may be a member of a select "board" (TBD) on arXiv-Review (and this membership will be indicated).

arXiv-Review system for evaluating arXiv articles: TBD.

arXiv-Review sponsors and operators: TBD!

1. See also: Proposal for A New Publishing Model in Computer Science, Yann LeCun.
2. Elsevier's Publishing Model Might be About to Go Up in Smoke - Forbes. Remarks on Google+: John Baez - Feb 1, 2012


  1. A trivial comment, which you should delete after acting on it. I think the second "cannot" in your second sentence should be "can".

  2. Great idea!

    I think we should start some independent 'review boards' for arXiv papers. These can referee papers and help hiring, tenure and promotion committees assess the worth of papers, thereby eliminating the biggest remaining reasons for the existence of traditional for-profit journals. They can also serve many other functions.

    Note that any number of boards can exist; anyone can start one! There's no need for a paper to be refereed or rated or discussed by just one board. This will allow for competition, experimentation, and improvement.

    The best boards will become important; the worst will be ignored. Some boards could act like existing journals and pick papers in a specific subject, waiting for authors to submit papers. Others could act like the Faculty of 1000 - a group that picks the 'most important papers' in biology. Others could be crowd-sourced, perhaps with a reputation system a bit like Mathoverflow. Others could proceed using principles I can't even imagine.

    Only time will decide which boards work best: there’s no point in arguing about it much now - and the great thing is, there’s no real need to! If some people here agree on some principles, they can go ahead and do something. If others disagree, they can go ahead and do something else.

    The most important thing, it seems to me, is that people start some referee boards now and see what happens! Only a few will take them seriously at first; but in a decade lots of people will - because the traditional journals have made themselves too expensive, and they rely on unpaid labor.

  3. Andrew Stacey has some good ideas on how these boards could work, so let me quote him:

    "My proposal would be to have “boards” that produce a list of “important papers” each time period (monthly, quarterly, annually – there’d be a place for each). The characteristics that I would consider important would be:

    1. The papers themselves reside on the arXiv. A board certifies a particular version, so the author can update their paper if they wish.

    2. A paper can be “certified” by any number of boards. This would mean that boards can have different but overlapping scopes. For example, the Edinburgh mathematical society might wish to produce a list of significant papers with Scottish authors. Some of these will be in topology, whereupon a topological journal might also wish to include them on their list.

    3. A paper can be recommended to a board in one of several ways: an author can submit their paper, the board can simply decide to list a particular paper (without the author’s permission), an “interested party” can recommend a particular paper by someone else.

    4. Refereeing can be more finely grained. The “added value” from the listing can be the amount of refereeing that happened, and (as with our nJournal) the type of refereeing can be shown. In the case of a paper that the board has decided themselves to list, the letter to the author might say, “We’d like to list your paper in our yearly summary of advances in Topology. However, our referee has said that it needs the following polishing before we do that. Would you be willing to do this so that we can list it?”"

    1. Partly inspired by your comments here, I have decided to give it a try. Introducing a new kind of chemistry journal set up entirely using Google products such as Blogger. You can read more about it here.

  4. An intriguing suggestion! My guess is that it would take about a generation for such a transformation to take place, given the difficulty of changing habits.

    Here are a few first thoughts:

    I'm not going to defend the role of elitism in organizing mathematics, but I don't think such an open-ended system can work without the active participation of recognized elites. But it looks like elites are busy starting new elite traditional journals (at non-commercial prices) and may not want to dissolve after having gone to so much trouble.

    Traditional journals reflect the taste of the editors and the most highly respected journals bring together articles in a variety of disciplines. arXiv makes no provisions for taste and segments mathematics according to disciplines. John Baez (reporting on ideas of Andrew Stacey) suggests that the hypothetical "boards" will certify papers, and his priority seems to be verifying correctness as well as rating quality. I would also stress the importance of taste.

    The board proposal is so far missing the element of moral coercion. What is to prevent an article posted on arXiv from languishing for years unread? Or even if a referee has agreed to read and certify the paper, who will send increasingly anxious and personalized reminders? Not a robot, I hope!

    1. Hi, Michael!

      "My guess is that it would take about a generation for such a transformation to take place, given the difficulty of changing habits."

      True. But I've been waiting about a generation for this, so now it's time.

      Seriously: it will take a long time for hiring and promotion committees and university administrators to take new-fangled 'review boards' seriously. But there's another aspect of review boards that'll catch on really fast. They can let us find good papers, and read what other people think about them. Imagine a crowd-sourced system like Mathoverflow or Netflix (see Kevin's remarks below).

      In quantum information theory there already was a system like this! People would wake up each morning, go to a website that looked like the arXiv, and see which new papers their colleagues liked. It was very popular among people who use the arXiv group quant-ph. It died when the guy who ran it took a job at Google. They're trying to restart it.

      "But it looks like elites are busy starting new elite traditional journals (at non-commercial prices) [...] and may not want to dissolve after having gone to so much trouble."

      We can't expect the dinosaurs to volunteer for extinction. At best we can pressure them to evolve in good directions or die off. I agree that elite participation in the review board system is essential for its ultimate adoption by university administrators. The sooner we can get that going, the better - but we shouldn't wait for it to happen before doing the fun part.

      "John Baez (reporting on ideas of Andrew Stacey) suggests that the hypothetical "boards" will certify papers, and his priority seems to be verifying correctness as well as rating quality. I would also stress the importance of taste."

      Yes, that's important too. We can easily have a system that allows me to find out

      1) what papers you like,

      2) what papers most people who have registered as algebraic geometers like,

      3) what papers the faculty of Harvard like,

      4) what papers the editors of a board resembling a traditional Journal of Algebraic Geometry like,


      5) lots of other things.

      It's also possible to build a system that says "if you liked this paper, maybe you'll also like..."

      All these are possible and we should ultimately have all of them, not just one.

      "The board proposal is so far missing the element of moral coercion. What is to prevent an article posted on arXiv from languishing for years unread?"

      Of course we all know that some articles deserve to languish for years unread. But I expect that some boards will resemble existing journals, in that you can submit papers to them.

  5. So, here's a slightly more specific idea, which I'll throw out for discussion.

    The first website we create should not be a particular 'review board' as described above. Instead it should be a 'portal' for review boards.

    I'm imagining the user would see something that looks a lot like the arXiv, but next to each paper a list of buttons: one for each review board that had discussed, or commented on, or reviewed, or 'accepted', that paper.

    And I'm imagining that review boards would need to apply to the portal to be added to this list.

    And I'm imagining that the portal could adopt some policies, like: *we only accept review boards whose information is freely available to everyone*.

    That would prevent traditional non-open-access journals from counting as 'review boards' - so if the portal became popular, the traditional journals would lose influence.
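    To make the shape of this concrete, here is a minimal sketch of the portal idea in Python (all class and field names are hypothetical; the only policy encoded is the open-access one proposed above):

```python
from dataclasses import dataclass, field

@dataclass
class ReviewBoard:
    name: str
    openly_accessible: bool      # is the board's information free to everyone?
    reviewed: set = field(default_factory=set)  # arXiv IDs it has discussed

class Portal:
    """Hypothetical portal: next to each paper, a row of 'buttons',
    one per admitted board that has discussed or reviewed it."""

    def __init__(self):
        self.boards = []

    def apply(self, board: ReviewBoard) -> bool:
        # Boards must apply; only openly accessible boards are admitted.
        if board.openly_accessible:
            self.boards.append(board)
            return True
        return False

    def buttons_for(self, arxiv_id: str) -> list:
        # The list of board buttons shown next to one paper.
        return [b.name for b in self.boards if arxiv_id in b.reviewed]
```

    Under this policy a traditional paywalled journal simply never gets a button, which is exactly the pressure described above.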

    Of course, this idea may be too 'meta' for right now. Maybe it's too much like starting Google before the web exists. Maybe we need people to start review boards before anyone will give a damn about a portal for review boards. If so, okay - I don't mind.

    But I imagine that ultimately people will want a good portal that shows papers on the arXiv together with what different groups of people think about those papers. If so, the policies that portal adopts could greatly influence the course of events.

    For example, suppose it's set up by Elsevier.

  6. I very much like the idea of review boards but I think a seemingly trivial matter could turn out to be quite important: what do you actually write on your publication list? Without a standardized system, people will feel safer writing something like "Bulletin of the LMS" after their paper.

    If the system is basically the same as the current system, so you submit your paper for consideration to a board and keep going until you get it accepted, then it's not too difficult -- you can make the references look more or less the same as they do at the moment. (E.g., if the Bulletin of the LMS reformed as a stamp-of-approval board, you could give the arXiv reference and then say, "approved by Bull. LMS" after it.) I suppose there's nothing to stop having a list of bodies after each paper. Whatever system is chosen needs to be able to coexist with the current system, so that a smooth transition is possible.

    1. Yes, it'll be important to achieve a smooth transition from the current system to the new one.

      On their CVs, we will feel most comfortable saying "approved by [some prestigious body]" - and there's no harm in using the word "published" if that makes people feel better.

      But we could also add that our paper is "rated 4.7 by the Mathematicians Review Board". And I expect that hiring committees may eventually go to the website and look at a variety of statistics.

  7. My fantasy is to have something like the Netflix rating system. On Netflix, I can give a movie 1-5 stars, and when I look at the page for a movie I haven't yet rated (perhaps haven't seen), Netflix guesses what my rating will be based on how other people similar to me have rated it. A few years ago Netflix held an open competition for a better rating system, so some reasonable algorithms are open source.

    With a system like this, if everyone's ratings are public (but anonymized), the concept of a journal can be replaced with (possibly private) rating algorithms. Authors would no longer need to guesstimate the right journal: their work would rise to its appropriate place, and would come to the attention of all of the right people. A Netflix-ish algorithm also dodges the effect seen on MathOverflow of certain narrow areas being "highly respected" merely because the early adopters are in those areas.

    Best of all, this can be done incrementally. First, just start collecting the ratings. The algorithms to process them can come later.
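    For concreteness, here is a rough sketch of the kind of prediction such a system could make from raw ratings alone. This is plain user-based collaborative filtering with cosine similarity, a deliberately simple stand-in; the actual Netflix Prize algorithms are far more sophisticated:

```python
import math

def cosine_sim(a: dict, b: dict) -> float:
    """Similarity of two readers, over the papers both have rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[p] * b[p] for p in common)
    na = math.sqrt(sum(a[p] ** 2 for p in common))
    nb = math.sqrt(sum(b[p] ** 2 for p in common))
    return dot / (na * nb) if na and nb else 0.0

def predict(ratings: dict, user: str, paper: str):
    """Guess `user`'s rating of `paper` as a similarity-weighted
    average of other readers' ratings of that paper."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or paper not in theirs:
            continue
        w = cosine_sim(ratings[user], theirs)
        num += w * theirs[paper]
        den += abs(w)
    return num / den if den else None  # None: no basis for a guess yet
```

    The incremental path is visible here too: the ratings dictionary can be collected from day one, and `predict` can be swapped for something better later without touching the data.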

    First draft of a rubric:

    0 bulbs: I find this version of this paper to have substantive errors or gaps.

    1 bulb: The results are likely correct, but are not interesting to me.

    2 bulbs: The results are substantively correct (modulo typos), and the result is interesting.

    3 bulbs: The results are correct and surprising, fit well into my area, and are well written.

    4 bulbs: The results are breaking new ground, and either resolve an important problem or open a new area.

    5 bulbs: One of the very best papers in all of math in this year.

    1. If you have the possibility of an "I think this is wrong" or other negative ratings, there will be some bitter fights. I don't think that this is the way to go. Another possibility would be to have review boards or individuals give positive certification, i.e. tags "verified by ..." and/or a Facebook-ish "like" system. This is more or less how it works now. Wrong work can die by being ignored.

    2. I like the Netflix idea a lot, because it's easy and fun, and the algorithms for processing the data can be developed incrementally. I met a physicist yesterday who has an efficient algorithm he really likes. I'm trying to get him to blog about this.

      I think it'll be important to have a board where people can point out and discuss specific errors in other people's work. This is, however, different from a simple Netflix-like system where we collect simple rating information, which can be used to predict what papers a given reader might like. It's trickier to manage actual discussions. A reputation system as in Mathoverflow seems to do well in reducing flame wars. But:

      It's good to do easy fun things first.

  8. arXiv-Review would have, of course, reviews or comments on the articles in arXiv. So for each article in arXiv there would potentially be a comment thread in arXiv-Review. The comments (and "bulbs", etc.) would be supplied by registered "reviewers". Now if these reviewers are members of a "panel", they would be so designated.

    This is the basic idea, I think.

    1. I was using "panel" (on arXiv-Review) in the sense of a "board". How someone gets on a board is another issue.

    2. The word "board" or even "panel" may be suboptimal, since a "board" could be a crowd-sourced system, the opinions of a single expert on a given subject, or other things. I'm not sure what the best word is.

  9. I added a basic description of arXiv-Review at the end of the above post.

  10. This comment has been removed by the author.

  11. [Sorry, I am having problems in posting my comment: I am splitting it in several pieces: sorry for the inconvenience]

    I have only thought a little about these issues, but I will share my thoughts.

    I have in mind the following model.

    A scientist should be associated with two numbers, similar to Google's PageRank:

    - an AuthorRank

    - a ReviewerRank.

    These two numbers would reflect the reputation (value?) of the researcher in the two major activities/roles of a scientist: producing new and interesting results, and judging/checking/validating the results of others. These numbers would also be calculated by adopting an algorithm similar to PageRank (see below).

  12. This comment has been removed by the author.

  13. (continues)

    Each scientist should have an account with two corresponding modes: Author and Reviewer. The first would be associated with the real name of the scientist, while the second would allow the scientist to act anonymously. Anyone could open an account, but the Reviewer mode would be activated only upon referral from an official institution (university) or after having built enough AuthorRank. This would reduce the risk of people polluting the system with bad behavior in Reviewer mode, and of accounts opened just to rig the system.

  14. (continues)

    Each "published" ("arXived"?) paper should be open for discussion (commenting) and for voting. Votes would be given by scientists in their Reviewer (anonymous) mode, with only the ReviewerRank displayed and having an effect (although the Author mode would have an indirect effect; see later). The vote cast by a Reviewer with a high ReviewerRank should count more than the vote cast by a Reviewer with a low ReviewerRank. In principle one could even keep track separately (besides the total count) of the upvotes coming from people with high ReviewerRank (much as on Rotten Tomatoes one can check the ratings of the "top critics").

    The AuthorRank would (should?) influence the ReviewerRank by adding to it. The rationale is that if one is a good author, he/she is probably able to judge properly the works of others, even without dedicating much time to reviewing and to building up the ReviewerRank through intense review activity.
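    As a sketch of how these two ideas combine - rank-weighted votes, with AuthorRank bleeding into ReviewerRank - here is a minimal Python version (the weight ALPHA and the data layout are my assumptions, not part of the proposal):

```python
ALPHA = 0.25  # assumed: how strongly AuthorRank adds to ReviewerRank

def effective_reviewer_rank(reviewer: dict) -> float:
    # A good author carries weight as a judge even if he/she
    # reviews rarely: AuthorRank contributes to reviewing weight.
    return reviewer["reviewer_rank"] + ALPHA * reviewer["author_rank"]

def paper_score(votes: dict, reviewers: dict) -> float:
    """Weighted vote tally: a vote from a high-ranked Reviewer
    counts for more than one from a low-ranked Reviewer."""
    total = weight = 0.0
    for name, vote in votes.items():
        w = effective_reviewer_rank(reviewers[name])
        total += w * vote
        weight += w
    return total / weight if weight else 0.0
```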

  15. (continues)

    The researcher would take part in the discussion on his/her article in his/her Author mode. His/her AuthorRank would increase thanks to the votes given to the article and potentially to the votes given to the author's activity in the discussion of the paper (e.g., replying effectively to the comments/questions of the Reviewers). The AuthorRank would also increase with citations of his/her paper by other papers. As in the calculation of PageRank, this increase would depend on the AuthorRank of the authors of the citing paper. The point is to make the quality of the citations at least as important as their number. The ReviewerRank of a Reviewer would increase thanks to the votes of both the Authors and the other Reviewers for constructive feedback, good comments, and helpful suggestions.

    There could be tags associated with papers to indicate the fields and subfields of research: one could then even end up with Author and Reviewer ranks in each subfield, depending on the votes associated with both the upload and the discussion in a particular field. This would make it more objective to say "this person is a leader in this field but also an expert in this other field".
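    The citation-based part of AuthorRank could work essentially like PageRank on a citation graph among authors. A minimal sketch (the damping factor and iteration count are conventional PageRank defaults, not part of the proposal):

```python
def author_rank(citations: dict, iters: int = 50, d: float = 0.85) -> dict:
    """PageRank-style AuthorRank over a citation graph.

    `citations[a]` lists the authors whom `a` cites. An author cited
    by high-ranked authors gains more than one cited often by
    low-ranked authors: quality of citations matters, not just count.
    """
    authors = set(citations) | {c for cs in citations.values() for c in cs}
    n = len(authors)
    rank = {a: 1.0 / n for a in authors}
    for _ in range(iters):
        new = {a: (1 - d) / n for a in authors}
        for a in authors:
            cited = citations.get(a, [])
            if cited:
                share = d * rank[a] / len(cited)
                for c in cited:
                    new[c] += share
            else:
                # author who cites no one: spread rank uniformly
                for c in authors:
                    new[c] += d * rank[a] / n
        rank = new
    return rank
```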

  16. (continues)

    As a result of this system, a researcher would be associated with his/her own Author and Reviewer ranks, possibly split by field/subfield. Also, each paper in his/her list of papers would have an associated score. Committees evaluating a candidate for a job should then be able to get a good sense of the ability of the person in a given field/subfield, as well as of his/her contribution to the community through his/her refereeing activity.

    As I said, these are just ideas that came to my mind, and I have not thought much about pros and cons.

    Of course, a bad con (independent of other problems/issues with the system described) is that the system would be very different from the present system (journals with editors, referees, etc.). As many have already asked: is the transition to a very new system possible? Is there a way to make this transition incremental?

  17. The author/reviewer ranking system is a lot better than the Netflix/IMDB model, which for as long as I've been checking has (objectively and democratically) declared The Shawshank Redemption the best film ever, and which could easily be manipulated to declare Newt Gingrich (for example) an eminent mathematician.

    But I still think 1-dimensional (or 2-dimensional) rating systems are a severely impoverished replacement for what we have now, for the same reason that I believe taste is not something measured by an algorithm along the lines of those devised by marketing firms but is rather affirmed consciously by a group of like-minded human individuals. At the Jussieu journal we actually had discussions of this kind, though we didn't take the time to put it that way, and they were very interesting.

  18. For example, here's something I found on Wikipedia that explains why I'm not keen on algorithmic ratings:

    In February 2008, shareholders of Choicepoint voted in favor of acquisition by Reed Elsevier for $4.1 billion. Choicepoint [check them out!] is an American data aggregation company with personal files on more than 220 million people in the US and Latin America. The acquisition was completed in September 2008.

    By the way, there's also an (not particularly active).

  19. There are a lot of good points here, but as others have said, it seems that the point system is really essential for the system to be eventually taken up by some of the stakeholders, including the ones producing the papers. I summarized some of these issues here

  20. *Marginalia*

    That was an important feature of medieval libraries. Books with lots of comments at the margins. Sometimes it was the marginalia that made a book most valuable.

    Today we have too many libraries, "diluting" important marginalia into insignificance. (Still, in my Sturm und Drang days, I used to add pencil notes or Post-it stickers to math books when I thought I could improve on details.)

    * With the new system, marginalia could have their place again!

  21. there are lots of publishing houses running up in the state with different ranges of scientific journals...


  22. Andrew Stacey and I just launched Math 2.0, a forum for facilitating discussion of the future of mathematical publishing.

    So many good ideas have been suggested in blog comments and social networking threads. We hope providing a forum will allow interested participants to have focused discussions on concrete plans! Let's go!

  23. The internet seems to be bursting these days with ideas about how to improve/replace peer review and classical journals. This is a very exciting time...

    Here's my take on it: it reads in the entire set of arXiv submissions on a daily basis, and allows you to submit ratings and comments regarding each paper's quality. Ratings are always anonymous; comments can be. Comments allow LaTeX and can be evaluated by other users.

    The biggest user group currently comes from astrophysics, but any discipline represented at arXiv can readily join.

  24. I actually enjoyed reading through this posting. Many thanks.


  25. Interesting post. Scientific journals in e-publishing play a major role: authors can publish as many articles as they want. Moreover, they can do author proof corrections as many times as they want.

  26. Hey, nice site you have here! Keep up the excellent work!