Tell us something about the Science Foo camp.
Science Foo Camp (or Sci Foo) is a unique scientific conference run by Nature, O'Reilly and Google. The format is based on O'Reilly's Foo ("Friends of O'Reilly") Camp, which is an event for technologists. In 2003, Tim O'Reilly and his colleague, Sara Winge, had the sublime idea of inviting a couple of hundred leading-edge technologists to their campus for a weekend of fun-filled idea sharing. They took an unusual approach -- for example, the schedule wasn't decided until the participants met on the first evening -- and it worked brilliantly. As a result, it's been much imitated and these days there are many different Foo Camps, Bar Camps and other so-called 'unconferences' happening all over the world.
In early 2005, Tim and a mutual friend, Linda Stone, suggested to me that we should hold a Science Foo Camp. I loved the idea, and so did Google, who kindly agreed to host it at the Googleplex, their HQ in California. And so a new type of scientific conference - or rather unconference - was born. Among other things, I see it as a unique opportunity for scientists at all levels of experience and from every discipline to share ideas. We also invite writers, filmmakers, business people, technologists and others, as long as they have an interest in science, so it's a very eclectic mix.
Because it doesn't have a well-defined aim or agenda, Sci Foo might seem rather unfocussed and self-indulgent. But open exploration of ideas for their own sake is really what science is all about, and feedback from attendees has been tremendous -- some people go so far as to say that it's changed their life. So now we hold it every summer. There are more details at http://www.nature.com/nature/meetings/scifoo/, but it's invitation-only I'm afraid.
After training in neurophysiology, you are currently heading the Web Publishing division at Nature. Do you see yourself more as a scientific publisher or a scientist who happens to work in publishing?
Definitely the latter. In my mind, I'm a scientist rather than a publisher. I think that's simply because of the route I took into science publishing -- and also the fact that I've been a science geek since before I can remember. For me, it just turned out that enabling scientific communication was a better way for me to do something valuable than working at the lab bench. (I wasn't a particularly good experimentalist, so I have a lot of respect for people who are).
One of Nature's strengths is that it contains a high proportion of ex-scientists as well as experienced publishers. I like to think that this brings us closer to the researchers we serve, and helps us to do the right thing when faced with decisions about where to put our efforts.
Scientific publishing is generally dominated by journals and databases. But now we see a new trend of journal articles providing inputs to databases, and of articles themselves being enriched with semantic annotations such as chemical entities, proteins and genes. What is your opinion on this convergence of journal articles and databases? How will users effectively find the content they are seeking in this new environment of increasingly richer content?
I've thought for a long time that journals are becoming more like databases, and vice versa. While there's clearly a difference between the prose that scientists put into their papers and the kind of highly structured information that you typically find in a database, the former is gradually becoming more structured, as you describe. I think it's also true that databases have learned from journals in areas such as curation, peer-review, and other forms of filtering and quality control -- not to mention things like archiving, versioning and enabling citation.
The ideal publication would provide the best of both worlds: it would be highly structured, searchable and continually updated (like a database) but also quality-controlled, permanent and formally citable (like a journal). An example of a project we've done in this area is the Molecule Pages (http://www.signaling-gateway.org/molecule/), a journal-cum-database that we've been running for the past six or seven years in collaboration with UCSD.
I think these trends will gradually blur the lines between journals and databases to the point where it will be very hard to tell them apart. Of course, there will continue to be written research reports of the kind that we're used to, but they will become more structured, easier to search, and more integrated with the associated data sets. At the other end of the spectrum, highly structured databases will also continue to exist. But journals and databases will become better interlinked, as well as becoming more like each other in the ways I've described. So it will become ever harder to tell these two domains apart, because there will be more 'hybrid' publications and services, and because these will eventually (and rightly) come to form a single integrated whole.
Nature brings out a series of podcasts for different domains in Science. Can you briefly explain the rationale and purpose behind these podcasts? How effective are these in reaching the target audience?
We launched the weekly Nature Podcast in 2005, and from the beginning it was aimed at our core readership: professional scientists. But the fact that Nature is an interdisciplinary journal -- together with the special characteristics of the audio medium -- meant that it reached a much broader audience than the journal. Here's why: when you have written content, whether in print or on the web, it's easy for readers to dip in and out, skipping parts that don't interest them. With audio (and video) you need to hold people's interest from the first minute of the show until the last. For Nature, that means covering physics in a way that biologists can understand, and vice versa. The result is that the show is also accessible to more or less anyone with a close interest in science who wants something a bit deeper than they can get from the mainstream media. I think it's also significant that the core of the show is composed of our authors talking about their work, so while it's accessible it's not dumbed down either, and it comes straight from the experts' mouths. I think our listeners value that.
One of the most frequent requests we received after launching the Nature Podcast was for more in-depth coverage of certain fields, so we've also launched a range of other regular and one-off shows about various areas of science (see http://www.nature.com/podcast/ ). I particularly like NeuroPod, but that might only be because I used to be a neuroscientist.
In recent years, blogs have become a useful supplement to more traditional forms of scientific communication such as journals and conferences. Even so, scientific blogging is still a niche activity. Your comments, please. Will these blogs become part of the “scientific record” and be accessible when users search for information?
I think that blogs -- or something like them -- will eventually come to be seen as a perfectly legitimate, indeed indispensable, means of scientific communication. Researchers will track them and search them as they currently do journals. But it will take a long time, perhaps decades, for this to happen. That's because the scientific community generally changes quite slowly, and there are some quite strong counter-incentives created by the fact that important parts of the scientific establishment ignore, or even frown on, blogging. I'm optimistic in the long run because it seems to me self-evident that the kind of immediate, globally accessible communication that the web enables -- and that blogging epitomises -- can bring a lot of value to the process of scientific discovery, as it already does in other fields, notably technology and economics. In time, tools for filtering out the interesting stuff from the rest will also get better, which is another common complaint among scientists who might otherwise read more blogs.
NPG seems to have entered Web 2.0/social networking in a big way. What long term strategy is the company adopting for initiatives such as Connotea, Nature Network, the Nature Blog, Second Nature?
Our strategy is first to create things that scientists will find useful, then to try and make them economically self-sustaining. Trying lots of different things relatively cheaply and remaining adaptable in response to user feedback are also key. Compared to our database and audio-video projects, our Web 2.0 initiatives are mostly quite immature businesses, but that's mainly because they're newer. Sites like Nature Network already generate significant revenue through advertising and sponsorship, and we have a small but growing events business in Second Life.
Connotea is Nature’s own social bookmarking service for clinicians and scientists. Can you briefly tell us how it is different from other general social bookmarking tools?
Connotea was obviously inspired by the original (and wonderful) social bookmarking site, del.icio.us. We were among the earliest fans of del.icio.us and it seemed to us that it could be made more useful for scientists (and perhaps other academics) by adding certain features that were unlikely ever to be provided on a generic site like del.icio.us. So we built Connotea, though initially just as an internal proof-of-concept.
For example, Connotea recognises certain academic websites and automatically imports the bibliographic details (title, author, journal, publication date, etc.) if you bookmark a paper on one of them. It also recognises Digital Object Identifiers (DOIs) and can use them to look up the details of papers (via the CrossRef database). So if you happen to be reading a paper in print rather than in your browser then you can bookmark it in Connotea simply by finding the DOI on the page (it's usually somewhere near the head or foot of the article) and entering it into Connotea -- you don't need to go to the trouble of finding the article online first. We've also introduced specific features for certain fields, such as geo-tagging and output in KML (the data format used by Google Earth) after epidemiologists started using Connotea to track outbreaks of avian flu and other diseases.
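The DOI recognition described above can be sketched in a few lines of Python. This is a hedged illustration, not Connotea's actual implementation: the regular expression is an approximate DOI pattern, the lookup URL uses CrossRef's present-day public REST API (which postdates Connotea's launch), and the example identifier is purely illustrative.

```python
import re

# Approximate DOI pattern for this sketch; real-world DOI matching
# has more edge cases than this single regular expression covers.
DOI_PATTERN = re.compile(r'\b(10\.\d{4,9}/[-._;()/:A-Za-z0-9]+)')

def find_doi(text):
    """Return the first DOI-shaped string found in text, or None."""
    match = DOI_PATTERN.search(text)
    return match.group(1) if match else None

def crossref_lookup_url(doi):
    """Build a metadata lookup URL for a DOI.

    Uses the modern public CrossRef REST API as an illustration of
    the kind of lookup Connotea performed against CrossRef.
    """
    return 'https://api.crossref.org/works/' + doi

# Example: spotting a DOI printed near the foot of a printed article,
# as in the workflow described above (identifier is illustrative).
footer = 'Volume 453, pages 719-722 | doi:10.1038/nature06930'
doi = find_doi(footer)
print(doi)                       # the bare DOI string
print(crossref_lookup_url(doi))  # where a client could fetch metadata
```

Given only the DOI from a printed page, a client could then fetch the title, authors and journal details from the lookup URL, which is essentially the convenience Connotea offered to readers of print articles.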
The Protein Structure Initiative recently relaunched the PSI-Nature Structural Genomics Knowledgebase in collaboration with NPG. How will researchers benefit from this new collaboration?
In general, our database collaborations (see http://www.nature.com/databases/ for other examples) involve us working with academic groups to create resources for particular research communities that neither party could readily create on their own. Our partners usually bring the scientific and technical expertise, and we provide traditional publishing skills -- such as peer-review, curation, writing, design and promotion -- albeit in the non-traditional context of a database rather than a journal.
As for the PSI specifically, they are working in a specialized field, but one that has wide ramifications. They produce insights and technologies that address basic biological processes and drive the development of new medicines. One of our key aims in partnering with them is to reach out to the broad research community, helping different kinds of researchers to appreciate the usefulness and relevance of structural genomics, and encouraging feedback to the PSI for help in solving tricky biological problems. We’re connecting the producers with the consumers, if you like, through a professional, editorially rich online database.
Tell us a little about your Manuscript Deposition Service. How is this service expected to help authors meet funder and institutional mandates?
Our Manuscript Deposition Service (http://www.nature.com/authors/author_services/deposition.html) makes it quick and simple for authors to comply with funder and institutional mandates, by depositing manuscripts to open-access repositories on their behalf.
Authors opt in to the service via a simple form during our usual submission process. This has the advantage that the author will have a lot of the necessary information to hand, and can be assured that this requirement will be taken care of. On acceptance, NPG automatically deposits the accepted version of the author's manuscript to their specified repository, setting a public release date of six months post-publication. All the author should need to do is validate the submission with the repository when asked to do so.
We currently offer this service for depositions to PubMed Central and UK PubMed Central on about 40 of our journals. We're working to expand this to all NPG journals that publish research content.
Also, NPG's self-archiving policy allows the author's final version to be made freely accessible six months after publication, so authors can be confident that they can comply with the requirements of all major funders, even for repositories to which we can't currently deposit on their behalf.
Talking of open access journals and free content on the web, how much of a challenge is this to you?
Open access is just another aspect of online publishing that makes it so much more multi-faceted and interesting than print publishing. Whether it's a threat or an opportunity depends on how individual organisations respond. Some publishers have certainly tried to resist it, but the common claim that publishers in general have been resistant (at least since I came into the industry just over a decade ago) is a long way from the truth. Some publishers have embraced it while others haven't, though there's an increasing realisation that it's not going away. (Incidentally, I think much the same can be said about the attitudes of scientists themselves).
It's also important to recognise that there are multiple routes to open access. A lot of attention has been given to author-pays journal publishing, but this model isn't currently sustainable for journals with high rejection rates and heavy editorial input, so at best we're going to end up with a mixture of business models, not all of them open access. This is what we see in the industry today, and it's what we have at NPG too. Some of our journals publish papers that are free to readers, paid for by author fees, but most of them continue to charge subscription fees because that's the only model that's currently sustainable for high-end journals. Personally, I wish it were otherwise.
The most likely way in which content from across the full range of different journals will be made available for free is through funder-mandated self-archiving, most notably the NIH's PubMed Central project and its British counterpart, UKPMC. Eventually these kinds of initiatives are likely to result in the majority of research content becoming freely available in some form 6-12 months after publication in a journal. Nature has been a strong supporter of these initiatives -- see my earlier comments about our Manuscript Deposition Service.
What do you think is the future of print journal publishing in the coming years?
For scientific journals, print is already dying as a distribution medium. Most people access the content online, even if they then print it to read. Scientists, among others, are also gradually overcoming the kind of emotional attachment to print that results in them preferentially submitting to journals that have print editions even if no one actually reads them. As a result, I think the print editions of most scientific journals will disappear over the next decade or so. (This creates some interesting questions about how the content should be archived for posterity, and by whom, but they're solvable).
There will be exceptions to this rule, of course, not least Nature itself, which is as much a magazine as a journal, so a lot of people continue to value being able to browse it in print form. Even in those cases, however, I think we'll increasingly migrate from dead-tree paper to electronic paper. But as so often with technological and social change, I find the rate of progress much harder to forecast than the general direction, so I wouldn't want to try and predict exactly when the last print issue of Nature will roll off the presses. It's certainly a while away yet.