it generally seems that working scientists have to spend a sizeable proportion of their time keeping up with other people's research, since after all that's at least theoretically the point of publishing in the first place, and unless you're a genius in a tiny field you'll get further that way than by ignoring everyone and striking out on your own! But scientific publications are generally pretty information-dense and there are a lot of people publishing in most fields, so I'm guessing that just keeping up with your reading could use up all your time if you let it. What strategies do people use for selecting out the most important things and keeping the firehose of incoming information under control?

It's a good question, so let me give it a go, albeit belatedly. I'm not sure I can talk about what strategies people in general use, only what I do, but I don't think I'm that much of an outlier.
So yes, keeping up with other people's research is exactly the point. Lone geniuses basically don't exist any more; there are cranks, and there are actual scientists who pay attention to others working in the same field.
Point the first: I read fast. That's part of how I got access to academic science in the first place. I'm not as fast as a skilled speed-reader, but I can get information out of written text faster than most people I know, and my acquaintance is admittedly biased towards intellectuals and academics. This is also the secret of how I keep up with ~250 journals in my subscription lists across DW and LJ; yes, I do read every post, and I don't filter my reading page.
On top of that I did an undergraduate degree which had its problems, but one thing it was very good for was giving me a lot of practice at reading, absorbing, and summarizing scientific articles. Basically I had four years of writing three tutorial essays every two weeks, each covering a reasonable bibliography of the key articles relevant to the title. If I wanted to have a social life at all, I had to get pretty damn fast at doing that.
It's not just experience of reading fast, though. Although cjwatson described research articles as "generally pretty information-dense", and that's true, in practice there's a lot of overlap in the information. If I'm reading within my own field, I can skim most of the introduction section, because it's, quite intentionally, a summary or review of recent findings in the field and an explanation of why the research is interesting. Unless something's directly comparable to my specific research, I won't read the methods in detail, because I'm familiar with the sorts of techniques cell biologists and people who collaborate with us generally use, and I'm pretty unlikely to care whether something was incubated for half an hour or forty-five minutes. And in a good paper the first third of the results should be establishing that the technique is valid, which is important to have but not actually a source of information I need beyond verifying it's there. For example: I deleted this gene, and this experiment shows that when I think I'm deleting the gene, it's actually deleted. Good stuff, and I'd worry if it were absent, but I don't in fact need to slog through it.

So if someone with a general academic background, but knowing nothing about tumour suppressors, were to read a recent paper relevant to my research, it might well take them several hours to understand it properly. For me, a typical paper contains about half an hour's worth of material that's actually new to me.
But even with that, yes, it's a firehose. One of my pet tumour suppressors, the p53 pictured in my icon, is notoriously the cancer factor with the biggest share of the literature, and I often like to quote the fun fact that some 80,000 papers mentioning it have been published in my lifetime, the protein having been discovered the year of my birth. And really, I don't only need to read p53 papers: I need to read about cell proliferation and death, and mechanisms of chemotherapy, and protein synthesis and destruction, and other things that may behave similarly to p53 in whatever respect, and methods that may be relevant to me but haven't been used for p53 yet, and and and.
The answer to this is a combination of automated tools and peer networks. I tend to lean quite heavily towards the latter. Colleagues working on the same stuff recommend articles to me, and when I'm in a reading phase I do the same for them. And I make sure I go to every scientific seminar I possibly can that might be even remotely relevant to me, because stuff that colleagues a couple of hops away are interested in enough to invite a speaker on is exactly the kind of region to look in for articles I'd need to know about but wouldn't find by constructing searches based on what I already know. This worked better when I was at big famous places like Oxford and the Karolinska Institute (which awards the Nobel Prize in Physiology or Medicine, so anyone who thinks they're in with a shot really wants to bring their work to the attention of the prize committee), but it's still useful even in a small institution.
Right now we don't have a formal "journal club", a regular meeting where colleagues take it in turns to recommend and give a short presentation on the most exciting article they've read recently, but I do definitely hear about what my PhD students are reading. Some of that is them getting up to speed with the foundations of the field, but often in doing so they find interesting things which, again, I wouldn't have thought of. The slight randomness of people following their interests, as opposed to automated searches, is really good for breadth of coverage. It also means that we're essentially crowd-sourcing the reading time: we don't all read everything, but people read their own stuff and summarize it for colleagues.
There are quite sophisticated automated tools out there for making sure people notice new stuff published in their field, but that doesn't really answer the question of how to go about "selecting out the most important things". I have alerts on a smallish number of keywords; with p53 it needs to be in combination with several others, otherwise I couldn't keep up at all, whereas with my #1 PhD student's completely novel cancer regulator, she and I have read every single paper ever published on it, even when they're talking about chlamydia or embryo development instead of cancer. There's a small number of researchers I essentially subscribe to by name, as they're close colleagues in my field who consistently publish good stuff, but even then I don't read everything they put out. I also have alerts on people citing my work, which is partly a vanity thing, but mainly because anyone who thinks my previous work is relevant to them, well, their work is conversely likely to be relevant to me!

Of the alerts I get, I screen out about a third as being what I think of as workhorse papers: for example, if someone's working on one of my drugs and tries it in a new patient population with a slightly different type of cancer, that's good and valuable work, but I don't necessarily need to read it. Whereas if someone proposes that the drug has a totally different mechanism from what we previously thought, then I really need to pounce on that pronto.
After that I do a kind of cascading thing. As I read each paper, I'll mark up the bibliography with anything else that is directly relevant. That rules out stuff that's ancient: it's really good to cite when a phenomenon was first observed, but if the whole field has accepted it as fact for ten years, there's usually no point in me reading the original paper. And it means stuff that directly impacts on my own work. For example, if a paper says p53 causes apoptosis in such-and-such a system, I'm probably not going to follow up citations about the mechanism of apoptosis, because I already know that, and I'm not going to follow up detailed descriptions of other features the system has, because I don't work on that system; but I might well need to read the paper that explains what else p53 needs to kill the cells, as that could be true in my own systems as well. That usually nets me about half a dozen papers per starting article, except that once I start reading them I'll find that they all cite each other, so it tends to converge rather than expand.
The other way I limit how much time I spend reading is that I let my reading be guided by my research, rather than by my curiosity. Like, when I was doing my PhD, the experimental evidence pointed to the idea that p53 is involved in regulating the cellular machinery for making new proteins. So I went and read a bunch of papers about ribosomes and nucleoli, which would always have been interesting, but which became worth the investment of time once I'd got a novel result showing that p53 has something to do with ribosomes. Or perhaps I see a new phenomenon I haven't encountered before, and I'll go and read about other people who have seen similar things, so I can design experiments to test whether their observations are relevant to my own results or not.
In practice this often means that my reading goes in phases: when I've done an experiment which shows something new and surprising, or when I've completed a series of experiments and I'm writing them up for publication, I do in fact spend most of my working hours reading. Oh, and when I'm designing a project and applying for funding, I have to do a fair amount of checking that what I'm doing isn't duplicating effort, as well as justifying why my research question is important. Other times I might only read one or two articles a week; the field moves fast, but not that fast, so if I'm not aware of something new for a few months, it's usually not a disaster. Indeed, sometimes it means I arrive at some conclusion independently, and that can be valuable in itself.
Does that help? Please feel free to ask more questions; that invitation extends to all my readers, not just the ones who asked me in the first place.