How our bias towards recency in scientific discovery hurts our understanding

The white lab rat moved towards the food dispenser. It evidently heard the tone and correctly interpreted its meaning. The ten electrodes implanted deep within its brain were each recording the fingerprints of multiple neurons at millisecond time resolution, key to deciphering the neural code. A wireless transmitter relayed the massive stream of data to an analog-to-digital converter, where it was fed into a computer, first sorted by neuronal fingerprint and then collated and curated to make sense for the human experimenters. The year was 1975. The computer was a DEC PDP-8/E minicomputer, about as big as a dorm fridge and five orders of magnitude less powerful than your smartphone.
We are surprisingly bad at knowing when things began.
I’ve been thinking about this for a while, partly because I lived through several of the transitions we now misremember. In 1987, I used the Internet for early text-based email, file transfers, and reaching colleagues at other universities. In August of 1991, in the face of an impending direct hit from Hurricane Bob, I moved all of my image data from Woods Hole to NIH in Bethesda in a matter of minutes. This was entirely unremarkable at the time. And yet when I mention it today, people often look mildly startled, as if I’d claimed to have owned a smartphone in 1987. In their minds, the Internet began sometime around 1994 or 1995, when the Web arrived and made it visible to everyone. Before that, apparently, there was nothing.
But of course, there was something. There was a rich, functional, and genuinely useful network that predated the Web by decades. Invented at roughly the same time as the Web was Gopher, a system for navigating and retrieving documents that worked elegantly and simply before the Web achieved wide public adoption. There were mailing lists, FTP archives, Usenet: an entire ecology of networked communication that the Web didn’t so much replace as eclipse, in the way that online streaming eclipsed television programming. TV is still there if you know where to look. Most people don’t look.
This isn’t just a historical curiosity. When we misplace the origin of a technology, we lose something important: our understanding of why it evolved the way it did. The Web wasn’t designed in a vacuum. It was a solution to specific problems, chief among them how to use hyperlinks to find and navigate documents scattered across the existing Internet. The decisions Tim Berners-Lee made in 1989 were shaped by what already existed. If you don’t know what already existed, you can’t understand those decisions. You inherit the outcomes without understanding the tradeoffs. And some of what was traded away was worth keeping. Gopher, the Web’s now-forgotten contemporary, was simple and decentralized, qualities that look appealing again now that we’ve seen where social media and commercialization have taken us.
The same logic applies in science. The experiment I described at the top of this piece was real and took place in a Caltech lab. Multi-electrode neural recording, wireless transmission, real-time spike sorting: these capabilities existed fifty years ago. The folks doing that work understood aspects of the neural code in the context of learning and memory that are often invisible to current neuroscience trainees, partly because the papers were published a long time ago and partly because all of science is biased towards the newest, shiniest things. Those findings don’t disappear. They just become functionally unavailable.
The field of artificial intelligence may be the most dramatic case study in collective chronological confusion we have. Most people who interact with today’s language models and image generators believe they are witnessing something genuinely unprecedented, a technology that sprang into being sometime around 2017. What happened is more complicated and more interesting.
The mathematical foundations for neural networks were laid in 1943, when Warren McCulloch and Walter Pitts published a paper describing how neurons could, in principle, compute logical functions. In 1958, Frank Rosenblatt simulated a working perceptron, a system that could learn from examples, at the Cornell Aeronautical Laboratory. The 1986 backpropagation paper by Rumelhart, Hinton, and Williams, which most practitioners treat as a founding document, was itself a rediscovery and refinement of ideas that had been circulating since the early 1970s. Yann LeCun was training convolutional neural networks to read handwritten digits for the U.S. Postal Service in 1989. The architecture underlying those systems is recognizably the ancestor of what powers modern computer vision.
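For readers who have never watched one work, here is a minimal sketch of the perceptron learning rule in modern Python. It is an illustration of the idea, not anything Rosenblatt actually ran (his was a simulation on an IBM 704, later built as dedicated hardware), and the variable names and constants are mine. The unit nudges its weights whenever it misclassifies an example; for a linearly separable problem such as logical AND, a few passes over the data are enough.

    # Toy perceptron learning the logical AND function from labelled examples.
    # Illustrative sketch only; none of these names or constants are Rosenblatt's.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias
    lr = 0.1         # learning rate

    for epoch in range(20):                   # far more passes than this problem needs
        for (x1, x2), target in examples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - prediction       # -1, 0, or +1
            w[0] += lr * error * x1           # nudge the weights toward the correct label
            w[1] += lr * error * x2
            b += lr * error

    print(w, b)  # the learned weights and bias now classify all four examples correctly

That single loop, scaled up enormously and stacked into layers trained by backpropagation, is a crude but recognizable ancestor of what runs today.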
None of this was secret. It was published, presented, and in some cases deployed in real systems. What happened instead was a kind of institutional forgetting, accelerated by two “AI winters”: periods when funding dried up, interest collapsed, and computer science turned its attention elsewhere. Researchers who had spent careers on neural approaches moved on or retired. Graduate students who might have built on their work were instead trained in other paradigms. When the hardware finally caught up with the ambitions of the 1980s, around 2012, the rediscovery felt like a revolution. In some ways, it was. But the conceptual foundations were not new, and the people who had laid them got less credit than they deserved, partly because so many of the field’s new practitioners didn’t know they existed.
The practical cost here is the same as elsewhere: repeated investment in problems that had already been partially solved, frameworks that were novel mainly to their authors, and a set of origin myths that flatter the present at the expense of the past. The deeper cost is that we don’t understand what was tried and discarded and why: which algorithms were abandoned for reasons of computational expense rather than theoretical inadequacy, and which might be worth revisiting now that the expense has fallen.
Climate science offers a different version of the same problem, one with considerably higher stakes. The standard cultural narrative places the discovery of anthropogenic climate disruption sometime in the 1980s, anchored perhaps by James Hansen’s 1988 Senate testimony, or by the formation of the IPCC. If you read serious journalism about the climate, you might push it back to the 1970s. If you are diligent, you might encounter the Keeling Curve, which has been tracking atmospheric CO₂ from Mauna Loa since 1958.
The scientific recognition of the greenhouse effect and its potential consequences for global temperature dates to 1896. That year, Svante Arrhenius published a paper in which he calculated, with considerable accuracy, how much warming a doubling of atmospheric CO₂ would produce. He arrived at a figure somewhere between 5 and 6 degrees Celsius, higher than modern estimates, but in the right direction and for the right reasons. He then speculated, in print, that industrial combustion might one day alter the atmosphere’s composition enough to matter.
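For comparison, the modern back-of-envelope version of that calculation, using round numbers from today’s literature rather than anything Arrhenius had, runs roughly as follows:

    ΔF ≈ 5.35 × ln(2) ≈ 3.7 W/m²                (added radiative forcing from doubling CO₂)
    ΔT ≈ 0.8 °C per W/m² × 3.7 W/m² ≈ 3 °C      (equilibrium warming from that forcing)

Working by hand, with the crude infrared absorption data available in the 1890s, Arrhenius landed within roughly a factor of two of that answer.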
This was not forgotten in the way a 1975 neuroscience paper was. Arrhenius was a Nobel laureate; his work was well known. What happened instead was that the question was considered, examined, and provisionally set aside, partly because mid-century scientists underestimated how rapidly fossil fuel consumption would grow, and partly because they assumed the ocean would absorb most of the excess carbon. These were empirical mistakes, not failures of reasoning. The framework was sound. The inputs were wrong.
What we lose when we date climate science to Hansen or to the IPCC is the understanding that this is not a young field with tentative conclusions. The core physics has been understood for over a century. The measurement of its consequences has been underway for nearly seventy years. When people argue that the science is “still developing” or “too uncertain to act on,” they are often unconsciously drawing on a mental model in which the field is young and its conclusions preliminary. Knowing the actual timeline does not resolve all the uncertainties; science is always uncertain at its leading edge. But it changes how you should reason about the weight of evidence.
Economics has its own version of this confusion, though the consequences are harder to tabulate. The efficient market hypothesis is widely understood to have originated in the 1960s with Eugene Fama. The idea of index fund investing, holding the market rather than trying to beat it, is associated with John Bogle and the first retail index fund, launched in 1976. The behavioral critique of rational actor models, which demonstrated systematically that real human beings make predictable and consistent errors in judgment, is credited to Kahneman and Tversky’s work from the early 1970s.
All of this is broadly correct as a matter of attribution. What gets lost is the prior landscape of ideas these researchers were responding to. The observation that markets were difficult to beat systematically appeared in Louis Bachelier’s 1900 doctoral thesis on the mathematics of speculation, work so far ahead of its time that it was largely ignored until Paul Samuelson encountered it in the 1950s and recognized what it contained. The psychological research on judgment and decision-making that Kahneman and Tversky formalized was in some respects a rigorous extension of observations that Herbert Simon had been making since the 1950s under the heading of “bounded rationality”: the recognition that human cognition operates under constraints that classical economics had simply assumed away.
Simon won the Nobel Prize in Economics in 1978. Kahneman won his in 2002. The ideas are clearly connected. And yet the field repeatedly had to be reminded that people are not rational actors, as if this were a new finding rather than a conclusion established, contested, partially absorbed, and then re-established over the course of half a century. Each rediscovery brought energy and refinement. But it also brought the inefficiency of not quite knowing what had been tried before.
This is the practical cost of chronological confusion: we reinvent. We pour resources into solving problems that are already solved, we fund theoretical frameworks that are novel mainly to us, and we write introduction sections that inadvertently misrepresent the state of the field by simply not knowing what came before the Internet made everything searchable.
But there’s a subtler cost, too. When we don’t understand how a technology or a scientific field evolved, we become poor navigators. We don’t know which roads were tried and abandoned and why. We don’t know which detours led to unexpected places. We can’t reason well about where to push next, because we don’t have an accurate map of where we’ve already been.
There is also a political cost, which the climate case makes vivid. When the historical depth of a finding is obscured, it becomes easier to argue that the finding itself is uncertain or contested. The chronological error licenses a kind of epistemic innocence: we can treat as open questions things that have, in fact, been largely closed for a long time. This is not a problem unique to climate science. Wherever institutional memory is thin, motivated actors can exploit the gap between what has been established and what is widely understood to have been established.
Technological and scientific genealogy isn’t nostalgia. It’s a form of rigor. The rat in the 1975 experiment knew something. So did the Caltech scientist, looking at the brain recordings. Arrhenius knew something in 1896. Bachelier knew something in 1900. Rosenblatt’s perceptron knew something in 1958. We could stand to know it, too.