When I’ve had a bad day, I sometimes watch cute YouTube videos. Nothing melts away stress like watching a toddler adorably mouth the words to a heavy metal song, or a panda bear fall helplessly on its noggin. YouTube is like the 21st-century version of relaxing into some knitting or a nice warm bath.
But I don’t often go straight to YouTube and trawl through random videos. I find my dose of cuteness through networks. I watch clips that have been posted to social media by friends and family; or I view collections of videos that have been curated by a Buzzfeed journo; or I find new videos that are ‘suggested’ after watching the last one, based on other YouTubers’ viewing habits. In every case, the clips I watch have been presented to me in a mediated way, influenced by other people’s choices. This is how clips go viral – as they are watched, shared, and disseminated, their popularity breeds more popularity, and their views increase exponentially.
The same thing happens in research. Think about the last source you cited. How did you find it? Did you see it cited in another book or paper? Did your supervisor recommend it? Did you find it by searching a database that places often-cited papers highest in its search results? In each of these cases, other people’s endorsements – whether direct or indirect – influence your exposure to the source material. Citations are, like YouTube views, prone to snowballing. A research paper that gathers a good few citations soon after its release is likely to rank higher in search database algorithms, and so citations breed citations. Some papers go ‘viral’ and others get lost in the fray.
Trouble is, this can skew the way in which academics collectively build knowledge. Popular papers become more visible than papers which are initially overlooked, and important or contradictory findings may be forgotten. To examine this phenomenon, researchers are starting to use network theory to analyse how citations can build up in some parts of the academic landscape, and not in others.
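This snowballing dynamic is what network theorists call ‘preferential attachment’: the more citations a paper has, the more likely it is to attract the next one. A minimal sketch of the idea (the model, function names, and parameters here are illustrative, not taken from any particular study):

```python
# Toy "rich-get-richer" citation model (an illustrative sketch, not a real
# analysis method). Each new paper cites one earlier paper, chosen with
# probability proportional to citations already received (plus 1, so that
# uncited papers can still be discovered).
import random

def simulate_citations(n_papers=1000, seed=42):
    random.seed(seed)
    citations = [0]  # the first paper starts with no citations
    for _ in range(1, n_papers):
        # weight each existing paper by (citations + 1): preferential attachment
        weights = [c + 1 for c in citations]
        cited = random.choices(range(len(citations)), weights=weights)[0]
        citations[cited] += 1
        citations.append(0)  # the new paper enters with zero citations
    return citations

counts = simulate_citations()
counts.sort(reverse=True)
top_share = sum(counts[:50]) / sum(counts)
print(f"Top 5% of papers hold {top_share:.0%} of all citations")
```

Even though every paper in this toy model is equally ‘good’, a small minority ends up hoarding a disproportionate share of the citations – purely because early attention compounds.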
In 2009, Steven Greenberg developed an arbitrary hypothesis about the role of a particular protein in a given medical condition. He conducted a search for papers which referenced the relationship between the protein and the medical condition, and found hundreds of papers which collectively expressed an apparent consensus that there was a link. However, going back to the primary literature from actual lab experiments, he found that only four out of ten experiments showed a link; the other six reported no such link. Over time, the four positive results had been cited and re-cited, whereas the six negative results had been forgotten. The more often the positive results were cited, the more convincing the claim became that the protein had a role to play in the medical condition, even though actual lab results were split.
So citing research, much like sharing an online video, can have a snowballing effect on the number of eyeballs that will encounter it. When we cite a piece of research, we contribute to its growing prominence; when we don’t cite a piece of research, we contribute to its disappearance. That’s why it’s so important to cite judiciously, and to look a little further than the conventional wisdom.
Greenberg, S. A. (2009). How citation distortions create unfounded authority: Analysis of a citation network. British Medical Journal, 339, b2680.