Since 2006, science blogging has begun to transform scientific publishing, emerging alongside the Open Access model as a phenomenon that even the “glamour journals” must now take seriously. Indeed, Nature Publishing Group established Nature Network, a social software platform (blogging and forums) for scientists, in response to the surge in science blogging activity. network.nature.com/
Also since 2006, Nature Publishing Group, along with O’Reilly Media and Google, has hosted the annual SciFoo camp. I attended last year, and it was clear that while Nature sees the “writing on the wall,” nobody is quite sure how to rate the quality or significance of science blogs, nor how such blogging will count toward building a scientist’s résumé. For an interesting look at the future of the scientific paper, see Bora Zivkovic’s piece of the same title: jcom.sissa.it/archive/07/02/Jcom0702(2008)C01/Jcom0702(2008)C02/Jcom0702(2008)C02.pdf
The endgame is probably some type of social software for science (think eBay ratings and Amazon recommendations, but much more sophisticated and optimized for scientific discourse). Another significant advance in 2008 was the implementation of post-publication commentary at PLOS ONE, even though participation has not been overwhelming. See the discussion here: scienceblogs.com/clock/2008/08/postpublication_peerreview_in.php
For a comprehensive review of previous efforts along these lines (and why they have failed so far), see Michael Nielsen: michaelnielsen.org/blog/?p=448. There seems to be a serious incentive problem: scientists just aren’t motivated to engage in this kind of mini-critiquing.
Building a “killer app” for online science ratings poses profound epistemological difficulties and will require, at the very least, a multidisciplinary team of game theorists, economists, scientists, and psychologists to get the ball rolling. To my knowledge, the India Open Source Drug Discovery Foundation is trying to implement a micro-attribution credit tracking system that lets participants accumulate “points” or other reputation metrics, which can then be exchanged for cash prizes or other forms of career advancement. An infrastructure for online science reputation is clearly and urgently needed; however, leadership is lacking, and various initiatives will probably keep reinventing the wheel separately, and failing, for a while longer. My view is that some kind of consortium analogous to the W3C (perhaps a standing resource on design principles for reputation systems) could ease the pain.
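To make the micro-attribution idea concrete, here is a minimal, entirely hypothetical sketch of such a credit-tracking system: contributions of different kinds earn weighted points that accumulate into a per-contributor reputation score. The contribution types, weights, and class names are invented for illustration; they are not taken from any actual initiative.

```python
from dataclasses import dataclass, field

# Hypothetical weights: how many points each contribution type earns.
# Real systems would need far more nuance (decay, peer endorsement, etc.).
CONTRIBUTION_WEIGHTS = {
    "peer_review": 10,  # a full post-publication review
    "comment": 2,       # a substantive inline comment
    "dataset": 5,       # shared data or code
}

@dataclass
class ReputationLedger:
    """Tracks accumulated micro-attribution points per contributor."""
    scores: dict = field(default_factory=dict)

    def record(self, contributor: str, kind: str) -> int:
        """Credit one contribution; return the contributor's new total."""
        points = CONTRIBUTION_WEIGHTS[kind]
        self.scores[contributor] = self.scores.get(contributor, 0) + points
        return self.scores[contributor]

    def top(self, n: int = 3) -> list:
        """Leaderboard: contributors ranked by accumulated points."""
        return sorted(self.scores.items(), key=lambda kv: -kv[1])[:n]

# Usage: two contributors earn points for different activity.
ledger = ReputationLedger()
ledger.record("alice", "peer_review")
ledger.record("bob", "comment")
ledger.record("alice", "comment")
```

The hard part, of course, is not the bookkeeping shown here but choosing weights that resist gaming, which is exactly where the game theorists and economists come in.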
For now, we must be content with annual “best of” lists such as the one in the Open Laboratory compendium. The full entries are available here: scienceblogs.com/clock/2008/12/the_open_laboaratory_2008_all.php
That is the unedited version of all 860 submissions, available for free. The value-added version, with professional editing of the top 50, is available for download for a small fee, with proceeds going to support the annual Science Online conference. www.lulu.com/content/6110823
Oddly, one of the most informative posts I have come across was not among the submissions (this may reflect a perception among the Science Blogging/Open Access participants that it was not “newsworthy,” but as an observer I found it most instructive). I refer to a post detailing a “feud” between Nature and PLOS over an unfavorable editorial on the “failure” of the Open Access model. scienceblogs.com/clock/2008/07/on_the_nature_of_plos.php
Finally, a great post from 2007 was featured in the previous edition of Open Laboratory. These reflections on the “marketplace of ideas” and on science and democracy raise important considerations for the design of a truly robust online science infrastructure. backreaction.blogspot.com/2007/03/science-and-democracy-iii.html
We’ll explore these ideas and more in future posts.