What is going on with global politics?

I spend most of my time thinking either about global issues somewhat removed from politics, or about the politics of science and the academy. But it’s clear that, geopolitically, something much larger and emergent is going on, something fostering an increase in violent conflict and political anger. And this is happening not just here in the United States but globally, with hot wars underway in Europe and the Middle East, to say nothing of newly threatened conflict in South America.

Apart from the micro-details of each particular conflict, the bigger picture, from my point of view, is that the pandemic and climate disruption are global shocks that have led to this emergent manifestation of large-scale human-on-human violence. Apart from a complexity science approach, how would we study this notion? Are there any testable hypotheses here?

Asking for a friend…

Reproducibility redux…

The crisis of reproducibility in academic research is a troubling trend that deserves more scrutiny. I’ve blogged and written about this before, but as 2024 begins, it’s worth returning to the issue. Anecdotally, I’ve noticed that most of my scientist colleagues have experienced the inability to reproduce published results on at least one occasion. For a good review of the actual numbers, see here. Why are the findings from prestigious universities and journals seemingly so unreliable?

There are likely multiple drivers behind the reproducibility problem. Scientists face immense pressure to publish groundbreaking positive results. Null findings and replication studies are less likely to be accepted by high-impact journals, which incentivizes scientists to pursue flashier leads before they are thoroughly vetted. Researchers must also chase funding, which increasingly goes to bold proposals touting novel discoveries over incremental confirmations. This intense competition induces questionable research practices among those seeking an edge.

The institutional incentives seem to actively select against rigor and verification. But individual biases also contaminate research integrity. Thinking back to my postdoctoral experiences at NIH, it was clear even then that scientists become emotionally invested in their hypotheses and may unconsciously gloss over contrary signals. Or they may succumb to confirmation bias, designing experiments in ways that stack the deck in favor of their assumptions. This risk seems to grow with the prominence of the researcher. It’s no surprise that findings tainted in these ways turn out to be statistical flukes unlikely to withstand outside scrutiny.
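The "statistical flukes" point can be made concrete with a toy simulation (my own illustration, not drawn from any particular study): suppose every experiment tests an effect that is truly zero, but only "significant" results get written up. Roughly 5% of pure-noise studies clear the bar, and the effect sizes among those that do are systematically inflated.

```python
import math
import random

def two_sided_p(sample_mean: float, n: int) -> float:
    """Two-sided p-value for the mean of n draws from N(mu, 1), testing mu = 0."""
    z = abs(sample_mean) * math.sqrt(n)
    # Standard normal CDF built from the error function.
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

def publication_filter(num_studies: int = 10_000, n: int = 20, seed: int = 1):
    """Simulate null experiments and 'publish' only those with p < 0.05.

    Returns (publish rate, mean |effect| among published, mean |effect| overall).
    """
    rng = random.Random(seed)
    published, everything = [], []
    for _ in range(num_studies):
        # True effect is zero; any nonzero sample mean is sampling noise.
        sample_mean = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
        everything.append(abs(sample_mean))
        if two_sided_p(sample_mean, n) < 0.05:
            published.append(abs(sample_mean))
    return (
        len(published) / num_studies,
        sum(published) / len(published),
        sum(everything) / len(everything),
    )

rate, published_effect, typical_noise = publication_filter()
# About 5% of the null studies get "published", and their apparent effects
# are several times larger than the typical noise level across all studies.
```

Nothing here is specific to any field; it only shows that a significance filter, by itself, manufactures a literature of inflated effects that outside replications will fail to confirm.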

More transparency, data sharing, and independent audits of published research could quantify the scale of irreproducibility across disciplines. Meanwhile, funders and academics should alter incentives to emphasize rigor as much as innovation. Replication studies verifying high-profile results deserve higher status and support. Journals can demand stricter methodological reporting to catch questionable practices before publication. Until the institutions that produce, fund, publish and consume academic research value rigor and replication as much as novelty, the problem may persist. There seem to be deeper sociological and institutional drivers at play than any single solution can address overnight. But facing the depth of the reproducibility crisis is the first step.

Happy New Year!

Cognition, Neuroscience, Biology, Chemistry, Physics…

There is a continuum here. We believe that the same fundamental rules of nature apply across all of it, and that if we were clever enough, we could explain the first (cognition) from the rules of the last (physics). But we aren’t, at least not yet. And so there is a gap, which humans fill in a number of ways that may be comforting if not scientific.

And yet, there is the question of emergence, in which order arises from complexity in ways that may be rule-based locally, but not globally. Seizures and hurricanes are relatively simple examples of such emergence, but learning and memory may fall into that category also.
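A minimal, well-worn illustration of "rule-based locally, but not globally" is Conway's Game of Life (my example, not the post's): each cell obeys the same tiny local rule, yet a five-cell "glider" pattern travels coherently across the grid, a behavior written nowhere in the rule itself.

```python
from collections import Counter

def step(live: set) -> set:
    """One Game of Life update. The rule is purely local: each cell looks
    only at its eight neighbors; nothing encodes any global pattern."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Alive next step: exactly 3 live neighbors, or alive now with exactly 2.
    return {c for c, n in neighbor_counts.items() if n == 3 or (n == 2 and c in live)}

# The classic glider: after 4 purely local updates, the whole pattern has
# translated one cell diagonally, an emergent motion at a higher level.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# state is now the same shape shifted by (+1, +1)
```

The analogy is loose, of course: brains are not cellular automata. But it shows cleanly how "glider-laws" need not be deducible by inspection of the cell-laws, which is the shape of the quarks-to-Shakespeare problem.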

And so, while there are definitely quarks which obey rules, the phenomenon of Shakespeare’s plays may not be deduced from quark-laws. In fact, at an intuitive level, I very much doubt it. But my intuition is not science and therefore we may in the end be able to recreate the Bard from first principles. And if so, there goes free will.

Space-based solar

Peggy Hollinger’s fine piece behind the paywall in today’s FT Big Read, here. Space-based solar was the raison d’être for Gerard K. O’Neill’s vision of huge space colonies back in the 1970s. The colonies would support themselves by exporting virtually unlimited electrical power to Earth. No need for fusion!

I’m amazed at how our global ambitions have shrunk into our iPhones over the years. About 15 years ago, Nicholas Carr wrote a piece in the Atlantic about how the ‘newish’ internet was changing our brains (and making us stupid). I remember he interviewed me for that article and I was a bit skeptical of the notion that our minds could be dumbed down by the likes of Google.

Today, not so skeptical. Our lack of an ambitious global investment strategy for a sustainable planet that improves lives, keeps the peace and gets us out in the Solar System is palpable.

David Leonhardt nails it…

In today’s NYT on-line, here. I remember those first passenger jets shrinking distances. And I remember driving coast-to-coast on America’s interstates in 52 hours during my twenties.

What could our country look like if we went long and big with the kind of investments that he’s talking about?

An AI wrote the ‘Broader Impacts’

As artificial intelligence systems grow more advanced, they are being applied in novel ways that raise ethical questions. One such application is using AI tools to assist in preparing grant proposals for submission to funding agencies like my old agency, the National Science Foundation (NSF). On the surface, this may seem innocuous – after all, AI is just a tool being used to help draft proposals, right? However, utilizing these systems raises several ethical concerns that must be considered.

First, there is the issue of authorship and originality. If an AI system generates significant portions of a grant proposal, can the researchers truly claim it as their own original work? And does it really matter? After all, the currency of science is discovery and the revealing of new knowledge, not filling out another post-doctoral management plan. My own sense is that AI in an assistive role is fine. And at some point in the not-too-distant future, AIs may act more as partners, as they now do in competitive chess.

Relatedly, the use of AI grant-writing tools risks distorting the assessment of researchers’ true capabilities. Grant reviews are meant to evaluate the creativity, critical thinking, and scientific merit of the applicants. If parts of the proposal are artificially generated, a fair judgment becomes impossible, undermining the integrity of the review process. Researchers who use extensive AI assistance gain an unfair advantage over their peers. But what can we do to stop this? It seems that the horse has left the barn on that one. Perhaps it’s fairer to assess scientists in a post-hoc fashion, based on what they’ve accomplished scientifically.

There are also concerns that AI-aided grant writing exacerbates social inequalities. These systems can be costly to access, disproportionately benefiting researchers at elite universities or with ample funding. This widens gaps between the research “haves” and “have-nots” – those who can afford the latest AI tools versus those relying solely on their own writing. The playing field should be as even as possible.

Additionally, the use of grant-writing AI poses risks to the advancement of science. If the systems encourage generic proposals repeating current trends, rather than groundbreaking ideas, it could stifle innovation. Researchers may become over-reliant on the technology rather than developing their own critical thinking capacity. Shortcuts to generating proposals can inhibit big picture perspective.

AI is a powerful emerging technology, but it must be utilized carefully and ethically. Researchers adopting these tools have a duty to consider these ethical dimensions. Honesty and transparency about process, maintaining fairness, and upholding originality are critical to preserving research ethics. The ends do not necessarily justify the means – progress must not come at the cost of our values.

What if our understanding of transformer AI models is only marginally better than our understanding of biological brains?

At some level we do understand biological brains: we understand action potentials, synapses, and some circuit networks. We understand neurotransmitters, plasticity, and signal transduction cascades. We can even watch the entire functional map of an animal brain in real time as it behaves. Unfortunately, the mapping function that goes from neuron to behavior or thought remains beyond us.

My question for my computer science friends is this: at what level do we understand how specific ‘neuron’ activations in a transformer give rise to output that can pass the Turing Test? I’m worried that our understanding is only marginally better than what we have for neurobiology, and that’s why explainable AI doesn’t work.

Am I wrong?

Rethinking Retirement in the Interests of Academic Progress

Our universities face a dilemma surrounding professor retirement. Often there is an unspoken aim to invigorate departments, but at what cost? Such ‘pushed’ retirements may deprive many fields of pioneering researchers who are still intellectually vigorous. We must weigh this carefully if we hope to advance human knowledge.

Consider professors engaged for decades in complex lab work or longitudinal studies. Much institutional knowledge walks out the door when they retire. Can new hires readily pick up where they left off? Surely we lose momentum and continuity. The fruits of wisdom gathered over years ought not be discarded lightly.

Of course some faculty transitions are inevitable—and valuable. Fresh perspectives periodically renew departments. But flexibility allows custom arrangements, preserving expertise while initiating change.

New knowledge progresses through deep specialization and the accumulation of experience. Professorships should accommodate extended research timelines. Few major discoveries unfold on rigid schedules. Why impose them artificially via blanket retirement norms?

If we wish groundbreaking work to continue, enabling our most highly skilled professors is prudent. Their late-career contributions can be profound, as historical scientific luminaries have shown. Let us proceed carefully, evaluating productivity case by case.

With thoughtful policy, we gain from wisdom and vitality together. Human knowledge advances best when the discoveries of long-tenured faculty are passed enthusiastically to emerging talent. But a forced exodus helps neither the outgoing nor the incoming. For the sake of innovation, let us rethink rigid retirement.