Space-based solar

Peggy Hollinger’s fine piece behind the paywall in today’s FT Big Read, here. Space-based solar was the raison d’être for Gerard K. O’Neill’s vision of huge space colonies back in the 1970s. The colonies would support themselves by exporting virtually unlimited electrical power to Earth. No need for fusion!

I’m amazed at how our global ambitions have shrunk into our iPhones over the years. About 15 years ago, Nicholas Carr wrote a piece in The Atlantic about how the ‘newish’ internet was changing our brains (and making us stupid). I remember he interviewed me for that article, and at the time I was a bit skeptical of the notion that our minds could be dumbed down by the likes of Google.

Today, not so skeptical. Our lack of an ambitious global investment strategy for a sustainable planet that improves lives, keeps the peace, and gets us out into the Solar System is palpable.

David Leonhardt nails it…

In today’s NYT online, here. I remember those first passenger jets shrinking distances. And I remember driving coast to coast on America’s interstates in 52 hours during my twenties.

What could our country look like if we went long and big with the kind of investments that he’s talking about?

An AI wrote the ‘Broader Impacts’

As artificial intelligence systems grow more advanced, they are being applied in novel ways that raise ethical questions. One such application is using AI tools to assist in preparing grant proposals for submission to funding agencies like my old agency, the National Science Foundation (NSF). On the surface, this may seem innocuous – after all, AI is just a tool being used to help draft proposals, right? However, utilizing these systems raises several ethical concerns that must be considered.

First, there is the issue of authorship and originality. If an AI system generates significant portions of a grant proposal, can the researchers truly claim it as their own original work? And does it really matter? After all, the currency of science is discovery and the revealing of new knowledge, not filling out another post-doctoral management plan. My own sense is that AI in an assistive role is fine. And at some point in the not-too-distant future, AIs may act more as partners, as they now do in competitive chess.

Relatedly, the use of AI grant-writing tools risks distorting the assessment of researchers’ true capabilities. Grant reviews are meant to evaluate the creativity, critical thinking, and scientific merit of the applicants. If parts of a proposal are artificially generated, it prevents a fair judgment, undermining the integrity of the review process. Researchers who utilize extensive AI assistance are gaining an unfair advantage over their peers. But what can we do to stop this? It seems to me that the horse has left the barn on that one. Perhaps it’s fairer to assess a scientist in a post-hoc fashion, based on what they’ve accomplished scientifically.

There are also concerns that AI-aided grant writing exacerbates social inequalities. These systems can be costly to access, disproportionately benefiting researchers at elite universities or with ample funding. This widens gaps between the research “haves” and “have nots” – those who can afford the latest AI tools versus those relying solely on their own writing. The playing field should be as even as possible.

Additionally, the use of grant-writing AI poses risks to the advancement of science. If these systems encourage generic proposals that repeat current trends rather than advancing groundbreaking ideas, they could stifle innovation. Researchers may become over-reliant on the technology rather than developing their own critical thinking capacity. Shortcuts to generating proposals can inhibit big-picture perspective.

AI is a powerful emerging technology, but it must be utilized carefully and ethically. Researchers adopting these tools have a duty to consider these ethical dimensions. Honesty and transparency about process, maintaining fairness, and upholding originality are critical to preserving research ethics. The ends do not necessarily justify the means – progress must not come at the cost of our values.

What if our understanding of transformer AI models is only marginally better than our understanding of biological brains?

At some level we do understand biological brains: we understand action potentials, synapses, and some circuit networks. We understand neurotransmitters, plasticity, and signal transduction cascades. We can even look at the entire functional map of animal brains in real time as they behave. Unfortunately, the mapping function that goes from neuron to behavior or thought remains beyond us.

My question for my computer science friends is this: at what level do we understand how specific ‘neuron’ activations cause AI output that resembles a robot passing the Turing Test? I’m worried that it’s only marginally better than what we have for neurobiology, and that’s why explainable AI doesn’t work.
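
To make the worry concrete, here is a minimal sketch, assuming PyTorch and Hugging Face’s transformers library, of how completely we can record a transformer’s ‘neuron’ activations without understanding them:

```python
# A toy illustration of the observability gap: we can record every
# "neuron" activation in a small transformer, yet the recording alone
# does not tell us which activations cause which behavior.
# Assumes PyTorch and Hugging Face transformers are installed.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

captured = {}

def hook(module, inputs, output):
    # Save the MLP activations of the first transformer block.
    captured["block0_mlp"] = output.detach()

handle = model.h[0].mlp.register_forward_hook(hook)

inputs = tokenizer("The robot passed the Turing Test", return_tensors="pt")
with torch.no_grad():
    model(**inputs)
handle.remove()

acts = captured["block0_mlp"]   # shape: (1, seq_len, 768)
print(acts.shape)
# We now "see" hundreds of activations per token -- the analogue of a
# perfect real-time functional map of a biological circuit -- but the
# mapping from these numbers to the model's output is still an open problem.
```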

Am I wrong?

Rethinking Retirement in the Interests of Academic Progress

Our universities face a dilemma surrounding professor retirement. Often there is an unspoken aim to invigorate departments, but at what cost? Such ‘pushed’ retirement may deprive many fields of pioneering researchers who are still intellectually vigorous. We must weigh this carefully if we hope to advance human knowledge.

Consider professors engaged for decades in complex lab work or longitudinal studies. Much institutional knowledge walks out the door when they retire. Can new hires readily pick up where they left off? Surely we lose momentum and continuity. The fruits of wisdom gleaned over years ought not be discarded lightly.

Of course some faculty transitions are inevitable—and valuable. Fresh perspectives periodically renew departments. But flexibility allows custom arrangements, preserving expertise while initiating change.

New knowledge progresses through deep specialization and the accumulation of experience. Professorships should accommodate extended research timelines. Few major discoveries unfold on rigid schedules. Why impose them artificially via blanket retirement norms?

If we wish groundbreaking work to continue, enabling our most highly skilled professors to continue their research is prudent. Their late-career contributions can be profound, as historical scientific luminaries have shown. Let us proceed carefully, evaluating productivity case by case.

With thoughtful policy, we gain from wisdom and vitality together. Human knowledge advances best when the discoveries of long-tenured faculty are passed enthusiastically to emerging talent. But a forced exodus helps neither the outgoing nor the incoming. For the sake of innovation, let us rethink rigid retirement.

Trust in science by the public: lessons from Theranos and Covid…

I had lunch with a colleague yesterday, and we both agreed that science has a serious trust problem with the public, and specifically with members of Congress. While science has long been regarded as a beacon of knowledge and progress, doubts and skepticism have taken root in the public’s perception. What are the root causes of the problem? To address this concerning issue, I think it’s useful to review the multifaceted factors that contribute to the dilemma and explore the path ahead to rebuilding faith in the scientific enterprise, drawing insights from the infamous case of Theranos and our awful three years of pandemic.

One of the primary reasons behind the trust problem in science is the alarming rise of misinformation. In the digital age, information spreads like… well, a virus… through social media platforms, often without undergoing rigorous scrutiny. Misleading articles, exaggerated claims, and distorted research findings can easily explode, leading to public confusion and distrust. The Theranos scandal serves as a stark reminder of how charismatic personalities and flashy presentations can deceive the public, perpetuating skepticism toward other scientific endeavors.

The replication crisis has significantly impacted the credibility of scientific research. Numerous studies have failed to replicate previously published findings, raising concerns about the robustness of scientific conclusions. Publication bias, where only positive or statistically significant results are published, exacerbates this issue, skewing our understanding of the true scientific landscape. The Theranos case, which involved fraudulent claims backed by insufficient data, highlights the need for greater scrutiny and verification of scientific claims to rebuild public confidence.

In an era where research funding often relies on private sources, conflicts of interest have become a pervasive issue. Scientists may face pressure to produce results that align with the interests of their funders, compromising the objectivity of their work. Similarly, the influence of industries on research outcomes can raise doubts about the independence of scientific findings. The Theranos scandal demonstrates the potential dangers of unchecked C-suite influence, underscoring the urgency for transparency and accountability in scientific research. The same is often true in academia where the leadership is looking for breakthroughs to impress alums and raise money.

Scientists often struggle to effectively communicate their work to the general public. Complex jargon and technical language can alienate the public and create a disconnect between scientific advancements and their real-world implications. Bridging this gap requires investing in science communication training for researchers, encouraging them to engage with the public through accessible language and relatable examples. In the case of Theranos, miscommunication and overhyped promises conned a pretty distinguished board.

Science is not immune to political polarization, and ideological biases can influence the interpretation and dissemination of scientific research. When scientific findings clash with deeply held beliefs, individuals may reject or distort the evidence, further undermining trust in science. Our collective recent experience with COVID and the mRNA vaccine technology crystallizes this problem.

The erosion of trust in science is a multifaceted issue that demands a collective effort from scientists, institutions, and the public. Drawing lessons from the Theranos scandal and what happened with the pandemic, we are reminded of the importance of combating misinformation, addressing replication challenges, promoting transparency, enhancing science communication, and fostering evidence-based reasoning. Rebuilding trust in the scientific enterprise requires persistent dedication and a willingness to learn from past failures. By acknowledging the uphill battle ahead and implementing measures to restore credibility, I’m hopeful we can reignite faith in science and its potential to shape a better future.

Another COVID lesson: don’t forget to invest in Bioinformatics…

I read the other day that the US excess death rate is back down to where it was before the pandemic. We’ve really turned the corner, and that’s why it’s critical to begin pinpointing the lessons from the past three years that will help us manage the next pandemic better. One of them has been flying under the radar: bioinformatics. To my mind, investing in bioinformatics research and infrastructure could help strengthen our defenses against future threats.

So what might we invest in? For one, advanced DNA sequencing and computational epidemiology allow earlier identification of outbreaks, before widespread escalation. Bioinformatics enables tracking of pathogen spread and evolution in near real time to inform containment plans. Simulation models can evaluate intervention scenarios and quantify outcomes, as the sketch below illustrates.
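
As a concrete (if toy) illustration, here is a minimal SIR compartmental model in Python; the parameters are invented for illustration, not calibrated to any real pathogen:

```python
# A minimal SIR epidemic model -- the simplest example of the kind of
# simulation used to compare intervention scenarios. Parameters here
# are illustrative, not calibrated to any real pathogen.
def simulate_sir(beta, gamma, days, n=1_000_000, i0=100):
    s, i, r = n - i0, i0, 0
    peak = 0
    for _ in range(days):
        new_infections = beta * s * i / n   # contacts that transmit
        new_recoveries = gamma * i          # infected who recover
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak, r

# Scenario comparison: halving the contact rate (beta), e.g. via
# distancing, and its effect on peak infections.
for label, beta in [("baseline", 0.30), ("intervention", 0.15)]:
    peak, total = simulate_sir(beta=beta, gamma=0.1, days=365)
    print(f"{label:>12}: peak infected = {peak:,.0f}, total recovered = {total:,.0f}")
```

Even in this toy run, halving the contact rate cuts the peak dramatically; real computational epidemiology makes the same kind of comparison with far richer models and data.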

For another, bioinformatics can accelerate development of medical countermeasures needed to combat novel pathogens. For vaccines, computational genomics and immunoinformatics enable rapid design of candidates based on the genomic profile and evolution of the pathogen. Researchers can construct customized mRNA and DNA vaccines within days once the genetic code is available.

High-throughput in silico screening also allows existing drug libraries to be scanned for molecules with potential antiviral properties that could be repurposed. Promising hits can undergo rapid in vitro confirmation. Beyond repurposing existing drugs, bioinformatics can identify molecular targets for designing new broad-spectrum antiviral drugs.

As the pandemic spread, researchers leveraged bioinformatics to adjust RT-PCR diagnostic assays to detect the emerging SARS-CoV-2 virus. Going forward, integrative analytics of disparate datasets along with artificial intelligence modeling may provide critical epidemiological insights for contingency planning against future threats. Overall, bioinformatics provides a valuable toolkit for developing tailored diagnostics, treatments, and vaccines in response to novel pathogens.
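
To make one of those tasks concrete, here is a toy sketch, with invented sequences, of the simplest version of the RT-PCR problem above: checking whether a diagnostic primer still matches an evolving viral genome.

```python
# A toy sketch of one bioinformatics task mentioned above: checking
# whether an RT-PCR primer still matches an evolving viral genome.
# Sequences here are invented for illustration.
def best_match(primer, genome):
    """Slide the primer along the genome; return the minimum number of
    mismatches and the position where that minimum occurs."""
    best = (len(primer) + 1, -1)
    for pos in range(len(genome) - len(primer) + 1):
        window = genome[pos:pos + len(primer)]
        mismatches = sum(a != b for a, b in zip(primer, window))
        best = min(best, (mismatches, pos))
    return best

primer = "ACCTGGTACGAAC"
reference = "TTGACCTGGTACGAACGGA"   # primer matches exactly
variant   = "TTGACCTGGAACGAACGGA"   # one substitution in the binding site

for name, genome in [("reference", reference), ("variant", variant)]:
    mm, pos = best_match(primer, genome)
    print(f"{name}: {mm} mismatch(es) at position {pos}")
```

Production assay design uses far more sophisticated alignment and thermodynamic models, but the underlying question, "does my probe still bind the circulating strain?", is exactly this one.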

And… large language models married to bioinformatics and in silico modeling of molecular dynamics may have the potential to accelerate progress. Already, we are hearing of such transformer models trained on genes instead of words.
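
For anyone curious what ‘trained on genes instead of words’ looks like in practice, here is a toy sketch of the k-mer tokenization such models commonly use; the example sequence and choice of k are mine, not any particular model’s:

```python
# The core idea behind "genomic language models": tokenize DNA into
# overlapping k-mers the way text is tokenized into words, then feed
# the token sequence to a transformer. A toy tokenizer, for illustration.
def kmer_tokenize(sequence, k=6):
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

dna = "ATGGCGTACGTTAGC"
tokens = kmer_tokenize(dna, k=6)
print(tokens[:4])   # ['ATGGCG', 'TGGCGT', 'GGCGTA', 'GCGTAC']
# Each k-mer gets an integer id from a fixed vocabulary (4**k possible
# k-mers), exactly analogous to a word id in an ordinary language model.
```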

Bioinformatics is clearly only one piece of the puzzle. Robust traditional public health systems remain essential. However, bioinformatics infrastructure offers complementary capabilities to help monitor, model, and respond to outbreaks. Supporting bioinformatics R&D and next-generation sequencing infrastructure would be a worthwhile investment. As policy is developed, the potential of bioinformatics could be considered as part of a comprehensive biopreparedness strategy.