Research assistance from AJ Kourabi, Sabrina Singh, and Neiman Mathew.
A shorter, self-contained version of this document was submitted to the Open Philanthropy Cause Exploration Prizes.
In three sentences
Neurotechnology could have extremely positive or negative impacts on the wellbeing of humanity and other beings in the near- and long-term future.
Almost no efforts have the stated goal of differential neurotechnology development.
There are fundable projects that could help preferentially advance the development of beneficial and risk-reducing neurotechnologies.
Summary
A neurotechnology is any tool that directly, exogenously observes or manipulates the state of biological nervous systems, especially the human brain. Brain-computer interfaces and antidepressant drugs are familiar examples.
Importance
Neurotechnologies could have profound impacts on the near- and long-term future.
In the positive direction, neurotechnologies have the potential to:
- Address the growing ~21% share of global disease burden from neurological and neuropsychiatric disorders
- Generally reduce suffering and improve subjective wellbeing
- Improve individual reasoning and cognition
- Inform decision-making on topics like the suffering of non-human minds
- Help develop safe AI systems
In the negative direction, neurotechnologies have the potential to:
- Lead to addiction or other personal harms
- Exacerbate authoritarianism
- Irreparably corrupt human values
- Increase risks from AI systems
Differential neurotechnology development is the project of preemptively creating the most beneficial or risk-reducing neurotechnologies before any others.
Investing in differential neurotechnology development now could be timely. On current trajectories, neurotechnologies in clinical trials today could have large-scale impacts in 1-5 decades, with 30 years as a mean estimate. And with concerted effort, neurotechnologies currently in clinical and preclinical development could be advanced in 10 to 20 years to the point where they might meaningfully benefit AI safety, in addition to other, potentially less-urgent benefits.
Neglectedness
Of the ~$20B/year that goes toward funding neuroscience overall, ~$4B/year goes toward non-drug, central nervous system neurotechnology development. But almost no efforts have the stated goal of differential neurotechnology development.
Tractability
Fundable opportunities today for a new philanthropist that might help achieve differential neurotechnology development include:
- Research and forecasting relevant to differential neurotechnology development, particularly on the cost-effectiveness of specific interventions (<$100k in the near-term)
- R&D infrastructure like open-source research software and clinical cohort recruitment (tens of thousands to millions USD)
- A patent pool (<$100k total)
- Startups or Focused Research Organizations to directly build differentially beneficial neurotechnologies (up to millions of USD per year per project)
Importance
A neurotechnology is any tool that directly, exogenously observes or manipulates the state of biological nervous systems, especially the human brain.[1] Readers may be familiar with neurotechnologies like electrode-based brain-computer interfaces (BCIs), antidepressant drugs, MRI, transcranial magnetic stimulation, or optogenetics.
Neurotechnologies alter the most intrinsic part of ourselves: our minds. So it’s perhaps unsurprising that neurotechnology could have extremely positive or negative impacts on the welfare of humanity and other beings in the near- and long-term future.
Differential neurotechnology development is the project of preemptively creating the most beneficial or risk-reducing neurotechnologies before any others (Bostrom, 2002).
Investing in differential neurotechnology development now could be timely. On current trajectories, neurotechnologies in clinical trials today could have large-scale impacts in 1-5 decades, with 30 years as a mean estimate. And with concerted effort, neurotechnologies currently in clinical and preclinical development could be advanced in 10 to 20 years to the point where they might meaningfully benefit AI safety, in addition to other, potentially less-urgent benefits.
The Potential Impacts of Neurotechnology
This section describes some potential impacts of neurotechnology without considering the timeframes in which they may occur. We discuss timelines in the next section.
An implicit assumption in what follows is that arbitrary degrees of control over neural activity are achievable with sufficiently advanced, but physically possible, technology.
Treating neurological and neuropsychiatric disorders
Much R&D in neurotechnology today is focused on treating neurological and neuropsychiatric disorders. The two are different,[2] but for simplicity we’ll collectively call them “neuro disorders”.
Based on data from the Institute for Health Metrics and Evaluation in 2019 (database, publication, our calculations), neurological and neuropsychiatric disorders account for ~21% of global disease burden. This is ~530M DALYs (disability-adjusted life years) out of ~2.54B total. Using Open Philanthropy's estimate of $100k USD/DALY (Open Philanthropy, 2021, section 3.4), the value of having provided cures for all neuro disorders in 2019 would have been $53 trillion.
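For transparency, the arithmetic behind these figures is simple enough to show directly (a minimal sketch using the rounded inputs quoted above):

```python
# Back-of-envelope check of the disease-burden figures quoted above.
dalys_neuro = 530e6     # ~530M DALYs from neuro disorders (IHME, 2019)
dalys_total = 2.54e9    # ~2.54B total DALYs across all causes (IHME, 2019)
usd_per_daly = 100_000  # Open Philanthropy's ~$100k/DALY valuation

share = dalys_neuro / dalys_total            # ~0.21, i.e. ~21% of disease burden
value_of_cures = dalys_neuro * usd_per_daly  # ~5.3e13

print(f"share of burden: {share:.1%}")                   # 20.9%
print(f"value of cures: ${value_of_cures / 1e12:.0f}T")  # $53T
```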
This is an overestimate in that some people kept alive by cures for neuro disorders would have been afflicted by other diseases. But the global burden of neuropsychiatric disease may also be significantly underestimated, as suggested in a recent report (Happier Lives Institute, 2021, section 3.2). Reasons for underestimation include that suicide and self-harm aren’t counted toward neuropsychiatric disease burden (though we count them in our estimates above), that self-report and diagnosis of emotional symptoms appear to be lower in non-Western countries, and that the disability weights assigned to mental disorders may be set too low.
For comparison, ischemic heart disease alone accounted for ~7% of global disease burden in 2019, and communicable, maternal, neonatal, and nutritional diseases combined accounted for ~26%.
The percentage of global disease burden caused by neuro disorders has steadily increased from 16% in 1999. It will presumably keep increasing as communicable disease treatment and maternal health improve globally. In high socio-demographic-index countries, neuro disorders accounted for ~30% of disease burden in 2019, barely up from ~29% in 1999.
We haven’t thoroughly vetted the methods used to obtain the IHME’s data, and the IHME came under strong criticism for its COVID-19 modeling (Piper, 2020). As some corroboration, the WHO’s global disease burden estimates roughly match (±5% of top-line numbers) the IHME’s estimates. The WHO’s estimates include the IHME’s data as one source, but also claim to include data from national health registries, UN partners, and other studies.
Estimating the cost-effectiveness of neurotechnology development for the treatment of neuro disorders requires more research than we can manage here, since it involves at least five substantial questions:
- Which neurotechnologies are likely to treat or cure which neuro disorders?
- How much will neurotechnologies developed for neuro disorders also help treat non-neuro disease, e.g. by improving peripheral organ function via the nervous system or giving people the willpower to eat healthily?
- Conversely, how much will treatments for non-neuro disorders, e.g. better drugs for cardiovascular disease, help treat neuro disorders?
- How much burden from neuro disorders can be eliminated through better access to current treatments or through public health initiatives?
- We note that current treatment options for neuro disorders are quite poor compared to those for infectious disease or neonatal disorders. There are no curative treatments for any neurological diseases, only ways to manage symptoms to a greater or lesser degree (Cassano et al., 2021). And there are no reliably curative treatments for most neuropsychiatric disorders (Howes et al., 2021).
- What are other externalities of treating neuro disorders, like reducing long-term risks from malevolent actors?
Direct manipulation of subjective wellbeing
The subjective wellbeing of a conscious organism is, to the best of our knowledge, exclusively a function of the physiological state of its nervous system. Thus future neurotechnologies could alleviate suffering and maximize subjective wellbeing beyond what is possible by any other means.
Depression and chronic pain are proof that subjective wellbeing isn’t entirely dependent on external circumstances. And lived experience suggests that most of us spend much of our lives experiencing unnecessary and unproductive suffering, however intermittently, from work stress to bad sleep to crippling guilt to destructive habits. (This is not to say that all suffering is unnecessary or unproductive.) The suffering of non-human animals may be much worse.
On the positive end, we have no idea how good lived experience can get. The upper limit of how much subjective wellbeing can be achieved via direct manipulation of brain states is unknown. What we consider a good life today might be considered torture by the standards of a civilization with adequate neurotechnology.
It’s hard to quantify the value of increasing subjective wellbeing. DALYs aren’t designed for it; in fact, “an intervention that made mentally healthy people happier would avert zero DALYs.” Measures that take subjective wellbeing into account, like “Wellbeing adjusted life years” (WELLBYs) or other estimates based on self-reports, have the benefit of being direct, but face calibration challenges. For example, reporting life satisfaction on a 0-10 scale assumes life satisfaction is a bounded quantity. This isn’t to impugn these attempts at measurement; they are commendable steps toward an important goal. But for now we consider measuring subjective wellbeing to be an open problem.
Worryingly, neurotechnology capable of maximizing subjective wellbeing may also be capable of causing great suffering.
Substance abuse is the most familiar example of neurotechnologies causing suffering in the modern world. We are not aware of rigorous estimates of the subjective suffering caused by substance abuse, but estimates of the economic impacts range from 10s to 100s of billions USD per year in the U.S. depending on which factors are included (value of lives lost, lost work productivity, health care costs, crime, etc.) (NIDA, HHS, Council of Economic Advisers, 2017). Substance abuse is an example of the more general phenomenon of wireheading: using neurotechnology to directly manipulate pleasure or motivation systems in the brain, usually in a way that’s harmful.
Sufficiently advanced neurotechnology could also be used for horrific malicious purposes, such as making a victim feel intense pain while masking outward signs of suffering. Just as we don’t know how good subjective wellbeing can be, we also don’t know how bad it can be.
Enhancement and value shift
Neurotechnology might offer many ways to enhance human abilities:
- Improved control of memory formation/erasure/reconsolidation, including possibly accelerated learning
- Improved access to digital content using BCIs
- Improved manipulation of devices and tools using BCIs
- Higher-bandwidth interaction with AI systems (more below)
- Improved communication and forms of connection between people
- Improved concentration
- Control of energy level
- Control of emotions, including in decision-making
- Improved impulse control
- Improved introspection
- Altering personality traits
- Flagging or eliminating cognitive biases (future discounting, status quo bias, etc.)
Some more speculative enhancements — including those perhaps better described as new abilities rather than enhancements — are listed here.
Many people would benefit from cognitive, behavioral, or emotional enhancement in their personal lives or careers. More importantly, new neurotechnologies could improve individual reasoning and cognition to a greater degree than existing methods previously examined by Effective Altruist organizations. For example, neurotechnologies could alert a user when they are falling prey to a particular cognitive bias, identify which of a user’s semantic memories justify a particular belief, or preempt undesired emotional reactions to new information.
More research is warranted on how such enhancement would affect areas of interest to Open Philanthropy, such as global governance and mitigating risks from great power conflict. Initiatives like The Center for Applied Rationality are motivated in part by the value of improved individual reasoning and cognition. A widespread increase in wisdom and rationality might be useful for improving and safeguarding humanity (Bostrom and Shulman, 2020, Security and Stability section). More speculatively, new modes of communication enabled by neurotechnology, e.g. more direct sharing of memories, could facilitate cooperation and generosity.
One risk is that neurotechnologies that seem like enhancements could degrade individuals’ or society’s reasoning ability. The analogy here is to the internet, which some argue is worsening society’s reasoning ability despite its initial promise to improve it. Such a trend could be exacerbated if, for example, a BCI were developed that essentially provided a more direct connection between a user’s brain and the internet.
More concerningly, many desirable neurotechnological enhancements achieve their effects by manipulating a user’s beliefs and motivation systems, which are tied in complex ways to their goals and values. Thus value shift is a risk with the adoption of any neurotechnology.
Value shifts could be accidental or malicious. For example, a neurotechnology that increases empathy (of which several are in clinical trials) could accidentally lead society to become dangerously tolerant of antisocial behavior. Or an authoritarian government could maliciously use neurotechnologies to monitor or instill changes in the values of its citizens (Caplan, 2008, Rafferty, 2021). Imagine a reeducation camp that actually reeducated people with 99% success, or what would have happened if MKUltra had achieved its goals. Between these two extremes there is a question of the morality of using neurotechnology to reform criminal behavior, perhaps as an alternative to incarceration.
How should the legal system — or we as individuals, for that matter — treat a person who accidentally changes their values to ones they a priori would not have wanted, but a posteriori want to maintain? How can we distinguish between persuasion and coercion in a world where brains communicate directly, without speech? Such questions are relevant to today’s technology to some degree, but neurotechnology could significantly raise their stakes.
Governance, public education, and differential neurotechnology development may all help avoid undesirable value shift. But as with the internet, it will be hard to resist value shifts that occur as a side effect of using a desirable technology, like a neurotechnology that improves someone’s earning potential but also subtly reduces their empathy.
Consciousness and welfarism
While conscious experience may or may not be a determinant of moral patienthood, there are questions about consciousness whose answers would affect welfarist reasoning, and neurotechnology might help answer them. Open Philanthropy has expressed interest in questions like this in its 2017 Report on Consciousness and Moral Patienthood.
Examples include:
- How can consciousness be measured?
- This may be an irresolvable philosophical question, but neurotechnologies can help evaluate at least some claims made by theories of consciousness, thereby guiding measurement (Seth and Bayne, 2022, Evaluating theories of consciousness section).
- What are the neural correlates of suffering? Can we identify them in non-humans? What neural processes are necessary or sufficient to experience suffering in humans, e.g. short- or long-term memory formation?
- Neuroimaging technology may help identify correlates of suffering.
- Is consciousness substrate-dependent?
- “Partial uploading” experiments are relevant to this question. In these, a region of the brain is anesthetized and researchers attempt to neurotechnologically mimic its input-output behavior such that an awake subject can’t tell the difference. The utility of such experiments has been debated.
- What is the map of the landscape of conscious experience? Are concepts like emotions, moods, or hedonic valence reliable axes of variation of conscious states? What are their neural correlates? Is there a continuum of “more” or “less” consciousness?
- The finer the degree of control neurotechnology gives us over neural activity, the better we can answer this question. And the better our answer, the better we may be able to define and measure subjective wellbeing.
- How many independent consciousnesses exist, or can exist, in a single human brain?
- Neurotechnology could be used to isolate regions or processes in the brain and interact with them independently, as was supposedly done historically after callosotomy operations.
- How continuous in time is conscious experience? What are the shortest and longest intervals of conscious experience? Do different brains, or different parts of the same brain, run at different "clock speeds"?
- Better neuroimaging might allow improved operationalization of experiments assessing subjective duration.
- Is consciousness necessary for moral patienthood?
- This isn’t an empirical question, but findings made using neurotechnology may be relevant to our beliefs about it. For example, can we use neurotechnology to induce p-zombie-like states? Mimicking the neural processes responsible for sleepwalking or bipolar blackouts might enable subjects to exhibit phenomena we associate with moral patients — like engaging in conversation — despite them later reporting themselves as unconscious.
We won’t attempt to estimate the dollar value of consciousness research, and some of these questions may turn out to be irresolvable by scientific inquiry. But information relevant to these questions could alter welfarist priorities. Suppose, for example, that a series of neuroscientific results drastically increased our credence that certain deep learning systems suffer.
Impacts on AI Safety
Neurotechnology may aid the development of safe AI, but also presents risks. The interplay between AI and neurotechnology has been discussed previously (niplav, Long, Bensinger, Byrnes, Eth), but merits much more detailed investigation. At minimum, differential neurotechnology development is worth considering in a portfolio of AI safety efforts because it mostly doesn’t compete for talent with other areas of AI safety research. (Neurotechnology development relies mainly on neurobiology, biomedical engineering, and medical device engineering expertise.)
Getting more data on human values
AI alignment may not be a well-posed task given that “human values” may not be a well-defined concept. But even if it is, it’s not clear from what evidence an AI could or should infer humanity’s values. Neurotechnology could provide greater quantity and quality of data on human values for use in training AI systems.
Moral judgments are one source of data about human values. A number of proposals for building safe AI rely on access to this kind of human feedback. Typically moral judgments are obtained via language or other consciously-expressed feedback like voting. Neuroimaging technology could be developed to detect moral intuitions, uncertainties, or judgments without requiring conscious expression, increasing the amount of data available on moral judgments. This could be done passively in daily life combined with e.g. smart glasses to record the situation in which the moral judgment is being rendered. Neuroimaging could also increase the quality of moral judgments obtained by disentangling them from neural processes like motivated reasoning and memory reconsolidation.
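As a concrete illustration, the following sketch shows what one record in such a dataset might look like. This is hypothetical: the field names, the idea of a decodable “judgment” signal, and the smart-glasses context capture are our assumptions, not descriptions of existing technology.

```python
# Hypothetical schema for one passively-collected moral judgment record.
from dataclasses import dataclass

@dataclass
class MoralJudgmentRecord:
    timestamp: float              # when the judgment occurred
    context: str                  # e.g. scene description from smart glasses
    neural_features: list[float]  # decoded neuroimaging features (assumed)
    valence: float                # inferred approval/disapproval, in [-1, 1]
    confidence: float             # inferred strength of the judgment, in [0, 1]
```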
Subjective wellbeing is another type of data relevant to human values. Increasing subjective wellbeing is a primary goal of many welfarist moral systems. Neuroimaging technology could drastically increase our ability to measure and track subjective wellbeing. This is important given how poorly we remember and predict our own wellbeing,[3] not to mention how often we fail to act in ways that will increase it. Accurately-measured subjective wellbeing could form a component of an optimization objective for an AI system, given that it’s easier to maximize things we can measure. Wireheading and Goodharting would need to be carefully avoided with such an approach.
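To make the Goodharting concern concrete, here is a toy sketch of an objective in which measured wellbeing is only one term and is penalized when the neural proxy diverges from independent checks like self-reports. The functional form and weights are invented for illustration, not a proposal.

```python
# Toy objective: reward measured wellbeing, but penalize evidence that the
# wellbeing proxy is being gamed. All terms and weights are illustrative.
def objective(task_reward: float,
              measured_wellbeing: float,  # hypothetical neuroimaging readout
              proxy_disagreement: float,  # divergence from e.g. self-reports
              alpha: float = 0.5,
              beta: float = 2.0) -> float:
    return task_reward + alpha * measured_wellbeing - beta * proxy_disagreement
```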
Algorithm design
The human brain may be the closest thing we have to an optimizer that is aligned with human values, so emulating aspects of its operation might prove useful for designing safe AI.
In the limit of perfect mimicry of the brain one would achieve whole-brain emulation, which is aligned with at least one human's values by definition. But less-than-perfect mimicry might still yield more aligned systems. For example, it might be the case that AI systems whose optimization algorithm mimics that of human brains will be more aligned (Byrnes, 2022, Jilk et al., 2017). Or perhaps human cognition can be decomposed into sub-behaviors such that AI systems can be built to perform only the safer sub-behaviors. E.g. perhaps different neural circuits control exploration and self-preservation behaviors.
AIs built to emulate the operation of the brain might be easier to test for alignment. For example, an AI and a human could be asked the same complex moral question, and the AI’s computations could be directly compared to advanced neuroimaging data of the human’s neural computations. This would of course require a degree of “human interpretability”, as opposed to just AI interpretability, to understand how the human’s neural computations were producing the moral judgment. Neuroscience, enabled by better neurotechnology, could help attain this level of human interpretability.
Or it might be the case that hybrid human-AI systems can be built where key decision-making or goal-setting parts of the architecture are delegated to circuitry in real human brains. This may be partly what Neuralink’s mission of “merging humans with AI” refers to. Hybrid systems are typically assumed to not be competitive with pure AI systems in the long run. But they may be useful during early stages of AI development, especially in takeoff scenarios that are sensitive to initial conditions.
Collective values
It’s unclear whether “human values” can be derived from any one person’s brain at all. They might be a property of a collection of human minds, possibly even contradicting individual values in some cases.
Neurotechnology could provide data to train models that reproduce individual human value judgments. This can be thought of as partial whole-brain emulation (Gwern, 2018). It’s conceivable that these models could predict the moral judgments of individual humans with significantly less error than the variance in judgments between humans. One AI safety scheme where this might be useful is if a sufficient number of such partial emulations could serve as a “moral parliament” to an advanced AI system. (Having real humans serve as this moral parliament would presumably be impractically slow.)
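A toy sketch of the aggregation step (everything here is hypothetical; `PersonModel` stands in for a partial emulation trained on one individual’s judgment data):

```python
# Toy "moral parliament": aggregate per-person judgment models robustly.
from statistics import median
from typing import Callable

PersonModel = Callable[[str], float]  # action description -> approval in [-1, 1]

def parliament_verdict(models: list[PersonModel], action: str) -> float:
    # Median rather than mean, so a few miscalibrated emulations can't
    # dominate; the right aggregation rule is itself an open question.
    return median(model(action) for model in models)

# Usage with placeholder models:
print(parliament_verdict(
    [lambda a: 0.8, lambda a: -0.2, lambda a: 0.5], "deploy system X"))  # 0.5
```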
Human Enhancement and AI Safety
Even in a world where safe AI is developed, it only takes one defector building an unsafe AI to cause bad outcomes. Neurotechnology could potentially enhance coordination on AI safety.
For example, if high-accuracy lie detection were developed, companies in control of AGI-enabling hardware could choose to only sell to customers who neurotechnologically verified their commitment to not building certain risky AI technologies. Stricter means of enforcing coordination might be possible with neurotechnology that can monitor or modify behavior. While such solutions may sound draconian, they don’t require coercion.
Improving individual human reasoning or communication ability with neurotechnology, as discussed above, might also help society understand and perform well in coordination problems.
Risks and uncertainties
Neurotechnology might offer AI systems additional “attack surface” by which to influence human actions and values. If humans are using digital neurotechnologies to modify their experience and behavior, it’s possible an AI system could compromise these technologies and influence users for its own purposes. For example, a neurotechnology meant to prevent impulsive behavior could be compromised to increase risk-taking or retribution in government officials, leading to escalation of human conflict.
Neurotechnologies that can sense but not manipulate neural activity might mitigate most of this risk while still being useful for alignment. But even pure-sensing neuroimaging technologies could give a malicious AI system a clearer measure of whether it successfully altered human values by persuasion or other means. Then again, an AI might need to be so intelligent to manipulate human values in this way that it wouldn’t need neurotechnology to cause existential harm.
Another risk is that neuroscientific knowledge could accelerate the development of transformative AI systems by giving researchers ideas for more powerful algorithms. Deep learning systems, for example, were inspired in part by integrate-and-fire neuron models and other ideas from neuroscience. As mentioned above, it's possible neuroscientific knowledge could differentially accelerate the development of AI systems that are aligned with human values. But it’s not clear how one could be confident in advance this would happen.
Ultimately, the diversity of neurotechnologies makes blanket speculation about the risk-benefit tradeoff of neurotechnology and AI safety of limited value. Much more research is warranted to assess how specific neurotechnologies are likely to impact AI safety and compare them to the risks and benefits of other specific AI safety strategies.
Timelines
Rather than make forecasts for each potential impact described in the previous section (though we think this would be worthwhile), we’ll review emerging neurotechnologies and consider what capabilities they might afford on what timelines.
Our tentative conclusions from what follows are:
- Neurotechnologies currently in clinical trials could have large-scale impacts in 1-5 decades, with a mean estimate of 30 years.
- With concerted effort, neurotechnologies currently in clinical and preclinical development could be advanced in 10 to 20 years to the point where they might meaningfully benefit AI safety, in addition to other, potentially less-urgent benefits.
What neurotechnologies exist or are in development?
The following sections summarize the current state of neurotechnology R&D, with neurotechnologies grouped by their stage of maturity, from most mature to least.
Neurotechnologies that are currently FDA-approved or widely used
- Small molecule drugs (too many to name individually):
- Diffuse through neural tissue and physically interact with biological substrates on the nanometer scale
- Examples:
- Stimulants (caffeine, Adderall)
- Antidepressants (Prozac, Wellbutrin)
- Anesthetics (morphine)
- Anxiolytics/sedatives (Xanax, Valium)
- Psychedelics (LSD, psilocybin)
- Empathogens/entactogens (MDMA)
- Electroencephalography (EEG)
- Noninvasively records voltage changes caused by neural activity with electrodes on the scalp with centimeter spatial (though superficial) and millisecond temporal resolution
- Electrocorticography (ECoG)
- Invasively records voltage changes caused by neural activity with electrodes on or in the cortex with millimeter spatial and millisecond temporal resolution
- (functional) Magnetic Resonance Imaging ((f)MRI)
- Transcranial Magnetic Stimulation (TMS)
- Electroconvulsive therapy (ECT)
- Induces seizures in sedated patients by running 100s of mA of current across the brain
- FDA-approved for depression and many other indications
- Deep brain stimulation (DBS)
- Peripheral and spinal nerve stimulation
- Stimulates nerves outside the brain, usually with electrodes
- Examples:
- LivaNova (vagus nerve stimulation, approved for epilepsy and depression)
- Cala Health (median and radial nerve stimulation, approved for tremor)
- Precision Novi (spinal cord stimulation, approved for chronic intractable pain)
- Surgical tools (too many to name, but important ones include):
- Neurovascular stents
- Stereotactic surgical equipment
- Skull reconstruction implants
- Cochlear implants
- Retinal implants
- Deliver signals from an external digital camera to electrodes connected to nerves in the retina to treat blindness
- Examples:
- Second Sight (now defunct, also did a visual cortical implant)
Neurotechnologies currently in or enrolling human clinical trials
- Intracortical motor BCI
- An array of hundreds or thousands of micrometer-scale electrodes is implanted in motor (and sometimes somatosensory) cortex, recording at tens of microsecond temporal resolution, transmitting data to control external devices like prosthetics or computer interfaces.
- Endovascular motor BCI
- An array of micrometer-scale electrodes is placed in a blood vessel near neural tissue in the cortex. Implantation doesn't require opening the skull: the electrodes are delivered on a catheter threaded through blood vessels from the neck or elsewhere in the periphery.
- Peripheral BCI
- Nerve signals from peripheral nerves are used to control prosthetics or computer interfaces
- Examples:
- BrainRobotics
- CTRL-labs (now part of Meta)
- Cortical stimulation for memory enhancement
- Electrode wires are implanted in the brain and stimulate neuron firing
- Retinal implants
- See description above
- Functional ultrasound neuroimaging
- Use ultrasound pulses to detect changes in cerebral blood volume, which correlates with neural activity, at millimeter spatial and millisecond temporal resolution
- Functional photoacoustic neuroimaging
- Use the photoacoustic effect to detect changes in blood oxygen level, which correlates with neural activity, at millimeter spatial and hundreds-of-millisecond temporal resolution
- Transcranial electrical stimulation
- Noninvasively run electric currents across the skull to modulate neural activity
- Transcranial ultrasound stimulation
- Focus ultrasound waves on neurons to modulate their firing rates
- Ultrasound-mediated blood-brain barrier opening
- Use ultrasound along with injected microbubbles to temporarily open the blood-brain barrier in a specific location, allowing selective spatial delivery of drugs or biologics
- (functional) Near-Infrared Spectroscopy ((f)NIRS)
- Use near-infrared light to noninvasively detect changes in blood oxygen level, which correlate with neural activity, at centimeter spatial and millisecond temporal resolution on the surface of the brain
- Magnetic Resonance Imaging (MRI)
- See description above
- Examples:
- Hyperfine (portable MRI)
- Magnetoencephalography (MEG)
- Noninvasively records magnetic field changes caused by neural activity with magnetometers on the scalp with centimeter spatial and millisecond temporal resolution
- Audiovisual stimulation
- Drive changes in neural firing in the brain by exposing the patient to specific light and sound patterns
- Peripheral nerve stimulation
- Stimulating nerves outside the brain, usually with electrodes
- Examples:
- Sharper Sense (vagus nerve stimulation)
- Setpoint (vagus nerve stimulation)
- Galvani (splenic nerve stimulation)
- Neurovalens (vestibular nerve stimulation)
- Onward (spinal cord stimulation)
- Gene therapy
- Introduce exogenous genetic material or gene edits into the body’s cells to produce novel proteins or otherwise alter gene expression
- Cell therapy
- Introduce cells into the body to perform a specific function
- Examples:
- Cell therapies for stroke and Parkinson's
- Small molecule drugs (too many to name individually):
- Psychedelics are experiencing an unusually fast pace of development due to recent regulatory and social changes in the U.S.
Neurotechnologies in preclinical development
Preclinical development means a technology hasn't yet been (to our knowledge) deployed in humans.
- Next-generation BCI
- Examples:
- Precision Neuroscience: minimally invasive introduction of electrode arrays, and/or placement of them in the brain’s ventricles
- Integrated neurophotonics: use light rather than electricity to stimulate neurons, in combination with genetic modification of neurons to make them light-sensitive, allowing finer resolution than electrode-based BCI
- Science Corporation: high-resolution manipulation of the optic nerve, spinal cord, or other neural pathways entering the brain
- Iota: distribute sub-millimeter, wirelessly-controlled electrodes throughout the brain (“neural dust”)
- Endovascular stimulation
- See description above
- Next-generation fNIRS
- Examples:
- Openwater: use ultrasound in combination with NIRS to get better spatial resolution and deeper signals than standard NIRS
- CoMind
- Multispeckle diffuse correlation spectroscopy from Reality Labs (Meta)
- Next-generation MEG
- See description above
- Sonomagnetic stimulation
- Apply a magnetic field to neural tissue and use ultrasound to move charged particles within it, creating a current via the Lorentz force that can stimulate neural activity
- Structural ultrasound
- Noninvasively use ultrasound waves to reconstruct anatomic features of the brain with millimeter resolution
- Gene therapy
- See description above
- Examples:
- Optogenetics: engineer neurons to express light-sensitive proteins on their surface, allowing control of their firing by light
- Sonogenetics: engineer neurons to express ultrasound-sensitive proteins on their surface, allowing control of their firing by ultrasound
- Chemogenetics: engineer neurons to express proteins on their surface that only activate in the presence of a particular molecule, allowing control of their firing by specific drugs
- US-mediated viral delivery: delivery of adeno-associated virus (AAV) gene therapies to the brain using ultrasound to get through the blood-brain barrier
- This can be combined with other approaches like chemogenetics for additional control.
- Cell therapy
- See description above
- Too many small molecule drug candidates to name
Outside view on development timelines
As a prior, 20 years has been given as a rough estimate for how long it takes an invention to translate from initial demonstration into an adopted technology.
Reference class estimate
Here are summaries of the development of several neurotechnologies and related technologies:
Deep Brain Stimulators (source)
- Building on extant stereotactic neurosurgical tools and cardiac pacemaker technology, prototype DBS systems were first implanted in humans in the late 1960s.
- DBS was performed in numerous patients until 1976, when the FDA gained regulatory authority over medical devices and halted DBS sales pending clinical trial data.
- No company was willing to run trials until the neurology field established clearer standards for patient improvement.
- Once it did, Medtronic ran trials and in 1997 obtained FDA approval for essential tremor and some Parkinson’s cases.
- FDA approved DBS for all Parkinson’s cases in 2002 after more trials.
- 40k individuals treated with DBS within 10 years of approval.
- Note: developing DBS for other indications like depression has been slow, due in large part to the slow pace of clinical research on such an invasive technology.
Summary: ~40 years from demonstration in humans to consistent human use, including a ~20-year pause to convince the FDA.
Cochlear Implants (source)
- First implantation of electrodes to explore restoring hearing in 1957.
- By 1977 twenty-two patients had prototype implants.
- FDA granted approval for adults in 1984.
- Adoption was slow because the adult deaf community was generally not interested in, and sometimes hostile to, the idea of removing deafness.
- Pediatric cochlear implants were approved in 1990 and saw stronger uptake. (90% of deaf children have hearing parents.)
- By 2009 total implants numbered in the hundreds of thousands. This may be only 10% of the total addressable market (source).
Summary: on the order of 50 years from demonstration in humans to consistent human use, but ~15 years from FDA approval in a market with demand.
Intracortical electrode array BCIs (iBCIs)
- The concept was proposed around 1980, and the hardware necessary for multiple-electrode neuron recording in cortex was developed through the 1990s (source).
- In 1997, patient Johnny Ray controlled a computer cursor with a single implanted electrode (not array) (source).
- In 2002, two groups demonstrated cortical array BCI in monkeys (source, source).
- In 2004, patient Matt Nagle controlled an artificial hand with a cortical array BCI called the BrainGate system (source).
- From 2004 to 2009 the Cyberkinetics company worked on commercializing the BrainGate system, but failed to raise continued funding after 2009. The IP eventually went to Blackrock Microsystems, which continued developing cortical arrays for research use (source).
- From 2009 to the present a trial called BrainGate2 has continued in academia (source).
- Paradromics was founded in 2015 and Neuralink in 2016 to commercialize higher-density cortical arrays (source).
- As of 2022, academic-led trials using Blackrock implants have accrued over 30,000 days of in-patient BCI research (source).
Summary: ~15 years from conception to first animal studies, ~10 more years until demonstration in humans; in the 18 years since, no commercial BCI has been FDA-approved.
Stentrode (endovascular BCI)
- Building on extant neurovascular stent technology, Synchron (originally named SmartStent) was founded in 2012 and developed its stent-based BCI prototype with funding from DARPA and others (source).
- First publication in 2016 demonstrated Stentrode in sheep (source).
- Synchron got IDE approval for clinical trials from the FDA in 2021 and performed their first human implantation in 2022.
Summary: ~6 years from conception to (published) sheep and ~6 years from sheep to first human.
Transcranial Magnetic Stimulation (source)
- First demonstration of magnetic stimulation in 1896
- Single-pulse system demonstrated in humans in 1985
- Repeated-pulse system developed and effects on depression reported by 1994
- FDA approval for depression treatment in 2008. Arguably this would have gone faster had IP been handled better.
Summary: ~9 years from single-pulse demonstration in humans to repeated-pulse systems with reported antidepressant effects, ~12 more years to FDA approval; widely used today but still a small fraction of neuropsychiatric treatments.
Transcranial Electrical Stimulation (source)
- People have been running electricity through their heads since antiquity; modern clinical descendants include FDA-approved electroconvulsive therapy and devices for treating migraine.
- Two papers around 1998 reignited interest in low-output (<10 mA) transcranial electrical stimulation for modifying cortical excitability
- By 2006 a few articles about these systems made it into newspapers
- By 2012 DIY kits were being sold on the internet
- By 2014 startups like Halo and Thync were launched
- No FDA approvals have been made for non-ECT systems to date
Summary: ~6 years from popularization to DIY systems, with startups following immediately after.
Prozac (fluoxetine) (source)
- First synthesized at Lilly in 1972
- FDA approved for depression in 1987, the first SSRI to be marketed in the U.S.
- Hailed as a breakthrough, it eventually accounted for a quarter of Lilly’s revenue, and over 40M patients had received it by 2002. (Many more had taken other SSRIs.)
- Consistently in the top 30 most-prescribed drugs in the U.S. by one estimate.
Summary: 15 years from synthesis to approval, followed by widespread adoption almost immediately.
LSD (source)
- First synthesized in 1938.
- First ingested in 1943 (source).
- Sandoz started marketing the drug in 1947 for a variety of uses.
- The CIA reportedly bought the world’s entire supply in the early 1950’s for use in the MK-ULTRA program (source).
- Became popular recreationally from the 1960s onward.
- Made illegal in the U.S. in the late 1960s (source).
- Recently use has reportedly increased, and LSD has been decriminalized in one state.
- An estimated ~10% of people in the U.S. have used LSD in their lifetime (source). Similar rates are reported for Australia (source).
Summary: ~5 years from synthesis to demonstration of effects in humans, ~15 more years until popular use began; despite a tortuous history, it remains widely used.
Mobile phones (source)
- First mobile phone demonstrated in 1973.
- First commercial offering 1983 (source).
- Usage in U.S. households rose from 10% in 1994 to 63% in 2004 (source).
Summary: ~10 years from working prototype to commercial product, ~20 more years to ubiquity, with a significant inflection.
Personal computers (source)
- Xerox Alto demonstrated in 1973
- Apple Macintosh released in 1984
- Usage in U.S. households rose from 20% in 1992 to 63% in 2003 (source).
Summary: ~10 years from working prototype to mass commercial product, ~10 years to ubiquity.
Breast augmentation (source)
- First breast implant surgery in 1962.
- FDA bans silicone implants in 1992, saline implants become dominant (source).
- ~100k breast augmentation surgeries in 1997 (source).
- ~300k breast augmentation surgeries in 2018 and 2019
- An estimated 4% of women in the U.S. have had breast augmentation as of 2014.
Summary: ~30 years from demonstration in humans to becoming a standard procedure, with fairly linear growth.
LASIK eye surgery (source)
- Building on knowledge from existing non-laser keratotomy surgeries, LASIK was conceived in 1988. (Similar procedures were being developed concurrently.)
- First LASIK surgery performed in U.S. in 1992.
- FDA approved devices for LASIK in 1998.
- Adoption rapidly increased to ~1.2M surgeries per year in the 2000s, then tapered to ~700k/yr in the 2010s (source).
Summary: ~4 years from conception to demonstration in humans, ~8 more years to become a standard procedure.
The timelines above vary widely, from 1 to 5 decades between initial demonstration and adoption, and they don’t share a consistent definition of “adoption.” But we can at least say that adoption tends to occur on the decade timescale, not single years.
We can also say that it would be unprecedented for a neurotechnology to have widespread adoption sooner than 10 years after its initial demonstration in humans.
The 20-year prior stated above, from initial demonstration to adoption, seems short. 30 years from initial demonstration in humans to adoption is a reasonable mean estimate, though it would not be surprising if any particular neurotechnology’s development timeline varied from this by 20 years in either direction.
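As a rough consistency check on these numbers, we can aggregate the summary figures from the case studies above (years from demonstration in humans to meaningful adoption, rounded; iBCIs and transcranial electrical stimulation are excluded because they have no adoption endpoint yet):

```python
# Rounded demonstration-to-adoption figures from the case-study summaries.
timelines_years = {
    "Deep brain stimulation": 40,
    "Cochlear implants": 50,
    "TMS (1985 human demo to 2008 approval)": 23,
    "Prozac": 15,
    "LSD": 20,
    "Mobile phones": 30,
    "Personal computers": 20,
    "Breast augmentation": 30,
    "LASIK": 12,
}
years = list(timelines_years.values())
print(min(years), max(years), round(sum(years) / len(years)))  # 12 50 27
```

The crude mean of ~27 years is consistent with the ~30-year estimate above.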
Expert surveys and forecasts
We could only find surveys for BCIs rather than neurotechnology as a whole, and both asked questions too nonspecific to extract meaningful information from (Vansteensel et al., 2017, Nijboer et al., 2011).
Inside view on development timelines
The following are key factors that can influence the development timelines of a neurotechnology.
- Desirability of effects and user burden: How much utility does the neurotechnology provide, and how easy is it to adopt, wear, implant, use, or maintain?
- Historically, potency of a neurotechnology has been in tension with regulatory burden: the more potent, the more regulated. This is especially true of neurotechnologies that are pleasurable to use.
- Noninvasiveness and reversibility of a neurotechnology is often in tension with ease of use. E.g. an implant is burdensome to get, but in the long run may be preferable to wearing a headset all day.
- Market size: how many users will the neurotechnology have?
- Neurotechnologies treating specific medical indications have the advantage of a known market size and nearly-guaranteed financial payoff: medical insurance reimbursement. Neurotechnologies for consumers may have larger markets and currently face less regulatory burden than medical devices, but with much more uncertainty about adoption.
- Even if every adult diagnosed with a mental health disorder received a neural implant within 10 years, that would still only be ~25% of the population.
- Regulation: How much regulatory help or hindrance does the neurotechnology face?
- Healthcare system reimbursement
- Healthcare systems reimbursing patients for using neurotechnologies is a substantial boost to adoption.
- In the U.S., the norm is that once the FDA approves a medical device, public and private insurers will reimburse patients for it.
- Exceptions occur. The drug Aduhelm was FDA-approved for Alzheimer’s in 2021, but insurance companies are reimbursing almost no patients for it, citing poor evidence of efficacy.
- In the UK, the National Institute for Health and Care Excellence determines whether a technology will be offered using metrics like whether the technology improves health outcomes at £25,000/QALY or better (Borton et al., 2020).
- Pharmaceutical and drug regulation
- In the U.S. chemical and biologic neurotechnologies are regulated by the FDA for medical uses.
- Technically, anyone can create a new chemical or biologic with any effect and sell it in the U.S. without interference from the FDA, provided they make no claims about it treating or curing any disease. In practice neurotechnologies with potent effects are regulated by the DEA.
- Medical device regulation
- In the U.S. the FDA decides which neurotechnologies count as medical devices and what degree of clinical evidence they require to be marketed.
- European medical device regulation is considered less burdensome.
- Surgical regulation
- In the U.S. surgical procedures aren’t regulated directly at the federal level. States sometimes have laws around specific procedures like abortion or cosmetic body modifications.
- Surgeons can have their licenses revoked for performing operations outside their scope of practice.
- Consumer goods regulation
- In the U.S. the FTC takes action against “hazardous…products…without adequate disclosures”
- The CPSC takes action against “unreasonable risks of injury”, though not for products in the FDA’s remit.
- The FCC regulates devices that emit RF signals.
- Market power: how much ability do single companies have to manipulate the direction of the field?
- Unlike in the software industry, intellectual property affords single actors significant control over the availability of neurotechnologies.
- Several large companies and university technology-transfer offices are widely considered “bad actors” in the neurotechnology space at present, stifling competition and new market entrants.
- Cultural resistance: beyond simply not attracting many users, will the neurotechnology face active resistance from the public?
- Cultural resistance can lead to regulation, as with the U.S. War on Drugs, or to societal pushback, as with the BrainCo headband that was piloted to increase focus in Chinese schools.
- Iteration speed in humans: how quickly can new advances be designed, built, and tested in humans?
- In general, the less time, money, and effort it takes to develop each new iteration of a technology, the faster the technology will improve.
- A neurotechnology that allows app-store-style, open-market development of new features or a home-brewing degree of end-user customization will yield new capabilities faster than one treated as a medical device, to which all changes must be re-approved by a regulatory agency and justified with clinical data.
- Faster iteration times yield more serendipitous discovery and capabilities development, but also pose safety concerns.
- Extrinsic influences: how will events and trends outside the field of neurotechnology influence it?
- If humanity is decimated by a global nuclear war or pandemic, neurotechnology development will presumably stop.
Conclusions
Development timelines in the absence of intervention
No foreseeable neurotechnology is agentic or contagious, so neurotechnology is unlikely to cause rapidly escalating catastrophes analogous to AI misbehavior or engineered pandemics. Unless an extrinsic shock like global nuclear war stops its development, most effects of neurotechnology will occur at the pace of technology adoption.
Based on the reference class timelines above, large-scale impacts of a new neurotechnology would almost certainly not occur for at least 10 years after its initial demonstration in humans, more likely in 1-5 decades, with 30 years as a mean estimate.
Which neurotechnologies may be at such a point in their development today?
The best-publicized examples are cortical BCIs. Benefiting from over 30,000 days of clinical data establishing their safety and efficacy for computer interaction, they have garnered commercial investment in the 100s of $M over the past 5 years. Their development may be further accelerated by minimally invasive approaches like stentrodes. However, the surgical invasiveness of cortical BCIs will likely lead to slow consumer adoption if the technology is ever offered beyond medical applications. While the near-term impact of cortical BCIs on disabled populations is likely to be extremely positive, their potential integration with digital communications raises questions about value shift, and their security implications regarding AI are unknown.
Ultrasound neuromodulation is another example. While it hasn't yet been FDA-approved for a clinical indication, its clearly-perceptible effects may attract a large consumer market. And the user-steerability of transcranial (i.e. noninvasive) ultrasound systems and their DIYability may enable rapid iteration and development. Regulatory intervention could easily attenuate its uptake, however, especially if pleasurable or potentially habit-forming uses are discovered. While it shows great promise for treating neurological and neuropsychiatric disorders, its potential to noninvasively alter mood, affect, and other contributors to subjective wellbeing raises questions about unintended value shift, wireheading, and abuse by governments or corporations.
Biologic neurotechnologies like monoclonal antibodies, gene therapies, and cell therapies (and arguably biosynthesized small molecules) are still early in preclinical development, but more than other neurotechnologies they have a rapid potential path from scientific publication directly to DIY use. Neurotechnologies like cortical BCIs rely on generally inaccessible technologies like microfabrication and neurosurgery. But mail-order biotechnology equipment and reagents may be sufficient for an individual to replicate biologic neurotechnologies. This could facilitate rapid development and adoption. Much of this might be beneficial, such as being able to reproduce the effects of seemingly-beneficial mutations like FAAH-OUT, whose carriers reportedly feel pain sensations but don’t experience suffering from them. But rapid development and adoption also risks unintended value shift.
In addition to these examples, there is the possibility of serendipitous discovery or stealthy development of neurotechnologies with potential for large-scale impact. Several companies known to the authors under NDA are developing neurotechnologies not listed above.
Differential development timelines
The previous section was concerned with the impact neurotechnology might have if it develops along its current trajectory. But to what degree could concerted effort alter this trajectory and differentially accelerate the creation of neurotechnologies relevant to pressing concerns like AI safety?
Neuroimaging technologies currently being tested in humans could potentially be useful for AI safety even with only a small number of users. A takeoff scenario sensitive to initial conditions, like imitation learning on a single human’s values, could play out radically differently in the presence or absence of a neuroimaging system affording better access to a single human’s moral judgments.
Using neuroimaging and neurostimulation technologies to better understand the brain is also possible with only a few human subjects. The risk-benefit tradeoff for AI safety here is complex. As discussed above, better understanding of the brain could accelerate AI timelines in a dangerous way, but also might yield AI systems that are better-aligned. A great deal may depend on who is developing neurotechnologies relevant to AI safety and what they use them for.
Based on (1) the reference class timelines above, which suggest neurotechnologies tend to go from preclinical development to successful demonstration in humans in 10 to 20 years, and (2) the fact that widespread adoption isn't necessary for a neuroimaging technology to contribute to AI safety, it's not unreasonable to estimate that with concerted effort, neurotechnologies currently in clinical and preclinical development could be advanced in 10 to 20 years to the point that they might meaningfully benefit AI safety, in addition to other, potentially less-urgent benefits. Ultrasound neuroimaging is one candidate: prototypes achieve 10x improvements in spatial resolution, temporal resolution, and sensitivity over fMRI.
Neglectedness
Of the ~$20B/year that goes toward funding neuroscience overall, ~$4B/year goes toward non-drug, central nervous system neurotechnology development. But almost no efforts have the stated goal of differential neurotechnology development.
Neuroscience (not neurotechnology) research landscape
Estimates of global government funding for neuroscience aren’t readily available. The major funder in the U.S. is the National Institutes of Health (NIH). The NIH provides funding for neuroscience research in the range of ~$5B to ~$10B per year. The U.S. Defense Advanced Research Projects Agency (DARPA) also funds neuroscience projects in the $100M per year range. The European Research Council provides funding for neuroscience in the €100M per year range. The Human Brain Project in Europe committed ~€1B to neuroscience in 2013, though it is regarded as not having produced valuable outcomes. China’s funding landscape is more opaque. This report from CSET is the most detailed analysis we are aware of. It suggests China is spending in the 100s of $M per year on neuroscience research, with infrastructure outlays in the billions of USD to establish research centers in some cases.
Some non-governmental organizations also fund neuroscience research. Examples include The Allen Institute for Brain Science, which spends about $100M per year (some of which is from the NIH); two Max Planck Institutes, which as a rough estimate spend perhaps ~$100M per year between them; the Howard Hughes Medical Institute’s Janelia Research Campus, which has a ~$130M annual operating budget; the Kavli foundation, which has endowed various neuroscience institutes in the tens of $M range; and a number of disease-specific groups like the Michael J. Fox Foundation, which has funded over $1B in Parkinson’s research since 2000.
Altogether a reasonable lower-bound estimate of global governmental and non-profit funding for neuroscience (not neurotechnology) is $20B/year over the past five years. This figure doesn’t include research into neuroscience-enabling technologies like machine learning algorithms or chip fabrication.
Neurotechnology research landscape
While all neuroscience research is relevant to the development of neurotechnology to some extent, most effort in neuroscience isn’t directly focused on developing neurotechnologies.
In terms of government funding specifically for neurotechnology, the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative disburses funding in the $400M/year range, increasing over time, having given ~$2.4 billion to date. The BRAIN initiative is itself funded by the NIH, National Science Foundation, DARPA, and several nongovernmental organizations.
While the BRAIN initiative funds some basic neuroscience research, it is mostly focused on research relevant to neurotechnology development, and exists in great part because other U.S. government funding was not facilitating this. So we may consider the BRAIN initiative’s budget as a rough estimate of the amount of U.S. government funding devoted specifically to neurotechnology.
China’s neuroscience funding is reportedly more focused on neurotechnology than basic research, especially BCIs, including investments in key research infrastructure like nonhuman primate colonies. Chinese government funding for neurotechnology-specific R&D, including infrastructure investment, could be in the 100s of millions to 1s of billions USD per year (CSET, 2020, page 31).
Of the nonprofit funders listed above, it’s difficult to estimate how much is devoted to translational research relevant to neurotechnology development. The Allen Institute has historically focused on basic neuroscience research, while Janelia has focused on neurotechnology tool development throughout its history, e.g. the Neuropixels project and the development of GECIs (genetically encoded calcium indicators). Disease-specific donors often fund projects characterizing diseases or doing other basic research rather than building neurotechnologies. A generous estimate would be that ~$300M per year (roughly double the Janelia budget) of nongovernmental funding goes toward neurotechnology development.
Altogether this suggests an estimate of ~$2B/year of global noncommercial funding for neurotechnology development, assuming the U.S. government, Chinese government, and high-profile nonprofits make up the majority of this funding. This is around 10% of the total spent on neuroscience. That figure broadly accords with the experience of the authors that most neuroscience effort goes toward using existing tools to explore the brain rather than building new tools that advance neurotechnology.
The majority of investment into neurotechnology development comes from for-profit enterprises. Most of this investment is focused on relatively incremental development of drugs for neurological and neuropsychiatric disorders. Funding for neuro drug development is on the order of 10s of billions USD per year (Munro and Dowden, 2018, Fig. 1a), though much of this is conditional on achieving milestones and will never actually be disbursed. (Neuro drugs historically have about a 15% success rate.) Around 200 drugs are currently in development for various mental health disorders and around 500 for neurological disorders. Investment into non-drug neurotechnologies is significantly smaller. By one estimate, whose quality we are unsure of, only ~$850M (~25%) of the ~$3.4B in neurotechnology investment announced in Q4 2021 went to companies focused on non-drug, central nervous system neurotechnology (source, page 12).
In sum, a conservative estimate of global funding from all sources for non-drug, central nervous system neurotechnology development is ~$4B/year.
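As a sanity check on that figure, here is a minimal back-of-the-envelope reconstruction in Python. The annualization of the Q4 2021 data and the noncommercial non-drug CNS share are our assumptions, not reported values:

```python
# Back-of-the-envelope reconstruction of the ~$4B/yr estimate, assuming the
# Q4 2021 investment data are representative of a typical quarter.
commercial = 0.85 * 4  # $B/yr: ~$850M/quarter of non-drug CNS investment, annualized
noncommercial = 2.0    # $B/yr: global noncommercial neurotech funding (estimated above)

# The non-drug CNS share of noncommercial funding is unknown; bracket it between
# the ~25% observed on the commercial side and nearly all of it (much of e.g.
# the BRAIN initiative's portfolio is non-drug CNS work).
low = commercial + 0.25 * noncommercial
high = commercial + 1.0 * noncommercial
print(f"~${low:.1f}B to ~${high:.1f}B per year")  # ~$3.9B to ~$5.4B; ~$4B sits at the conservative end
```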
Efforts toward differential neurotechnology development
While all work on neurotechnology is motivated by a desire to benefit humanity, differential neurotechnology development refers specifically to efforts whose stated goal is to preemptively develop the most beneficial or risk-reducing neurotechnologies before any others.
The most prominent stated effort in differential neurotechnology development is Neuralink. Explicitly motivated by AI safety concerns, Neuralink's mission is to allow the "merging" of humans with AI. What precisely this means and whether it's likely to be beneficial has been a subject of debate. Neuralink has received $363M in investment since its founding six years ago.
Grants, prizes, and advance market commitments motivated by differential neurotechnology development appear to be absent from the neurotechnology funding landscape. Open Philanthropy funded Prof. Ed Boyden’s lab, which focuses on neurotechnology, in 2016 and 2018 for around $6M total, but these grants don’t seem to have been motivated by differential neurotechnology development per se (Naik).
Government funders of biomedical neurotechnology research, like the NIH, select proposals based in part on a risk-benefit calculus for patients, particularly the patients directly involved in clinical trials. This could be considered a sort of differential neurotechnology development, but it's restricted to biomedical neurotechnology applications and doesn’t typically include broader, second-order, or long-term effects on humanity.
In the U.S., the Food and Drug Administration (FDA) has jurisdiction over neurotechnologies that it considers to be for medical purposes. The Drug Enforcement Administration (DEA) in the U.S. has a mandate to reduce the availability of chemical neurotechnologies deemed public health risks. Similar agencies exist in most countries. These agencies could also be loosely interpreted as efforts toward differential neurotechnology development, but the net benefits of their actions — the U.S. War on Drugs in particular — are unclear.
There seems to have been little proactive legislation or regulation around consumer, non-drug neurotechnologies. The only concrete example we are aware of is Chile adding "neurorights" to its constitution in 2021. However, regulators like the U.S. Consumer Product Safety Commission could likely bring such technologies under their jurisdiction once they are developed.
The field of academic neuroethics is arguably a proto-governance effort, though its influence is unclear, as is how much funding it receives. The Neurorights Foundation, funded at less than $50k/yr, is the only organization we are aware of that is principally focused on neuroethics. Professional associations like the IEEE also have efforts in neuroethics.
In total, the only effort whose stated goal is differential neurotechnology development seems to be Neuralink, but more research is needed to establish how much the efforts of funders, legislation, other companies, and neuroethicists contribute to differential neurotechnology development.
Tractability
Fundable opportunities today for a new philanthropist that might help achieve differential neurotechnology development include:
- Research and forecasting relevant to differential neurotechnology development, particularly on the cost-effectiveness of specific interventions (<$100k in the near-term)
- R&D infrastructure like open-source research software and clinical cohort recruitment (tens of thousands to millions USD)
- A patent pool (<$100k total)
- Startups or Focused Research Organizations to directly build differentially beneficial neurotechnologies (up to millions of USD per year per project)
Fund research
Estimates of the global disease burden of neurological and neuropsychiatric disorders span orders of magnitude. The importance of neurotechnology development for addressing that burden is also highly uncertain. Research on these topics could change funding priorities if it revealed that neuro disease burden has been severely underestimated. A great deal of the uncertainty stems from the challenge of quantitatively estimating subjective wellbeing, which is worth more research in its own right for cause prioritization purposes.
Almost no neurotechnologists work on neurotechnology for AI safety. Funding AI safety researchers, consciousness researchers, and neurotechnologists to jointly develop concrete experimental plans with existing and near-term neurotechnology, and then funding those experiments, could be a valuable addition to humanity’s AI safety portfolio. For example, for tens of thousands USD a new philanthropist could organize a workshop to plan ways of experimentally validating some of Steve Byrnes’s hypotheses relevant to AI safety.
More research into how neurotechnological enhancements affect decision-making, like this, could inform whether neurotechnology is likely to have any impact on existential risk by increasing coordination.
Proper forecasting of neurotechnology development timelines would help resolve uncertainty about the urgency of differential neurotechnology development. Examples of specific questions that deserve careful forecasting include:
- When will the Information Transfer Rate (or ideally a better metric) from a BCI exceed what’s achievable by typing and speech? (See the sketch after this list.)
- When will a neuroimaging system be able to preemptively predict a user’s moral judgments (in a binary prediction task) with >90% accuracy?
- What is the probability that by 2050 there exists a frontline treatment for anhedonia with >90% success rate?
- When will the first non-drug, consumer neurotechnology reach 1M users?
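On the first of these questions, the standard (if imperfect) BCI throughput metric is the Wolpaw Information Transfer Rate. Below is a minimal sketch of how the comparison could be operationalized; the typing and BCI performance numbers are illustrative assumptions, not measurements:

```python
from math import log2

def wolpaw_itr(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    """Wolpaw ITR in bits/min, treating each selection as a symmetric channel
    over n_targets options with the given per-selection accuracy."""
    n, p = n_targets, accuracy
    bits_per_selection = (log2(n) + p * log2(p)
                          + (1 - p) * log2((1 - p) / (n - 1)))
    return bits_per_selection * selections_per_min

# Illustrative (assumed) numbers: a fast typist choosing among ~30 keys at
# ~300 keystrokes/min with ~99% accuracy, vs. a high-performing speller BCI
# making ~40 selections/min from 26 targets at ~95% accuracy.
typing_itr = wolpaw_itr(n_targets=30, accuracy=0.99, selections_per_min=300)
bci_itr = wolpaw_itr(n_targets=26, accuracy=0.95, selections_per_min=40)
print(f"typing ~{typing_itr:.0f} bits/min vs. BCI ~{bci_itr:.0f} bits/min")
```

On these assumed numbers, typing still beats the BCI by roughly an order of magnitude, which is what makes the crossover date a meaningful forecasting target.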
Initiatives like the STI initiative recommended by CSET would help monitor neurotechnology developments in China, where new neurotechnologies might be adopted more quickly than in other countries, as has been observed with AI surveillance technologies.
Perform advocacy
Advocating that funders like the NIH or private foundations direct their research funding toward differentially useful neurotechnologies may be worthwhile.
Engagement with the FDA on its categorization of devices for general wellness, or with the DOJ on its regulation of future neurotechnologies with abuse potential, could help avoid a drug-war-like setback to beneficial neurotechnologies.
Industry groups like the IEEE Neurotechnologies for Brain Interfacing Group could have significant influence over future industry standards, and given how few stakeholders are engaged with them at present, they may be receptive to input. Surgical boards are another potential advocacy target, to encourage their acceptance of neurotechnologies with beneficial applications beyond medicine.
The first step for a new philanthropist interested in advocacy might be to fund research into identifying policy and regulatory levers on neurotechnology, possibly through an organization like CSET if it has capacity.
Build infrastructure
Building better R&D infrastructure could allow a new philanthropist to facilitate beneficial neurotechnology research. It could also prevent coordination failures in the neurotechnology industry that might lead to bad outcomes.
One type of infrastructure is open-source software. Releasing top-quality open-source software for use in neurotechnology products can keep aspects of the operation of neurotechnologies transparent to the public, improve security, and avoid multi-homing costs. Examples include open-source BCI operating systems or biophysical simulation software for estimating safety of new stimulation patterns. EA-aligned software organizations like AE studio have fundable projects ranging from $100k to a few $M. The degree to which this infrastructure is valuable will depend on the specific neurotechnology and threat model.
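To make the simulation-software example concrete, here is a minimal sketch of one safety check such software might expose: the widely cited Shannon (1992) criterion for electrical stimulation. The pulse parameters below are hypothetical, and a real tool would need full biophysical modeling rather than this single heuristic:

```python
from math import log10, pi

def shannon_safe(current_ma: float, pulse_width_ms: float,
                 electrode_radius_cm: float, k: float = 1.85) -> bool:
    """Shannon (1992) criterion: stimulation is considered safe when
    log10(D) + log10(Q) < k, where Q is charge per phase (uC) and
    D is charge density per phase (uC/cm^2)."""
    q = current_ma * pulse_width_ms       # charge per phase in uC (mA * ms = uC)
    area = pi * electrode_radius_cm ** 2  # assumes a flat disc electrode
    d = q / area                          # charge density in uC/cm^2
    return log10(d) + log10(q) < k

# Hypothetical DBS-like pulse: 3 mA, 90 us per phase, 0.6 mm contact radius.
print(shannon_safe(current_ma=3.0, pulse_width_ms=0.09, electrode_radius_cm=0.06))  # True
```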
Another piece of critical “infrastructure” is clinical cohorts. The pace of neurotechnology development largely depends on being able to recruit and maintain cohorts of people who are willing to try as-yet unproven technologies. Providing clinical cohort recruitment services to beneficial neurotechnology projects is a means of differentially accelerating their development. This is especially true for neurotechnologies that aren’t targeting diseases and will require clinical cohorts of healthy subjects, which are harder to recruit. The 1Day Sooner challenge trial project, funded with $3M from Open Philanthropy in addition to other donors, is an analogous healthy-subject clinical recruitment effort.
Creating a biobank of neural tissue samples would also facilitate research on neurological disease, and we’re aware of (currently private) fundable projects to do so.
Steward intellectual property
Unlike in the field of AI algorithm development, control over key intellectual property (IP) in neurotechnology affords private actors significant influence over the use of that technology. Historically the power of IP rights in neurotechnology has enabled anticompetitive behavior, with large companies sometimes acquiring and shelving patents from smaller competitors.
Stewardship of key IP can facilitate positive developments in neurotechnology. This is similar to software patent reform that Open Philanthropy has looked into in the past. A new philanthropist could, for example, fund a patent pool, which doesn’t yet exist for neurotechnology. Starting one would require <$100k, and we can introduce potential funders to parties willing to do this work. Starting a patent pool becomes challenging if any one company in a market has outsized market power, so early action may be prudent.
Build beneficial neurotechnology
The most straightforward way to ensure beneficial neurotechnologies are built first is to build them yourself or fund their creation.
As of writing, founding teams for three Focused Research Organizations pursuing longtermism-aligned neurotechnology projects stand ready to launch, pending funding in the 10s of $M over ~5 years. These proposals are currently private, but are available to potential funders upon request.
Summary of Key Uncertainties and Questions
Importance
- What is the cost-effectiveness of neurotechnology development for treating neurological and neuropsychiatric disorders? In particular:
- Which neurotechnologies are likely to treat or cure which disorders?
- How much can the burden of these disorders be reduced simply by improving access to the current best treatment options or through public health initiatives?
- How feasible is it to develop neurotechnologies that generally improve subjective wellbeing without becoming opportunities for wireheading?
- To what degree will neurotechnological enhancements, especially those that improve rationality or coordination ability, reduce or increase risks to humanity?
- What specific neuroscientific findings (derived from new neurotechnology) could convince us to radically change our minds on non-human suffering or moral patienthood in general?
- What is the risk-benefit tradeoff of developing neurotechnology for AI safety? In particular:
- How much do various neurotechnologies increase the “attack surface” an AI has on human behavior or values, and does this matter for AI safety?
- How much does knowledge about the brain accelerate the development of AI capabilities? Are AIs built with this knowledge more or less dangerous than those without?
- Is our outside-view estimate of 30 years from initial demonstration to adoption of a neurotechnology reasonable, and given its high variance, how relevant is it to any particular neurotechnology?
- Can neurotechnologies currently in clinical and preclinical development be advanced in 10 to 20 years to the point where they could meaningfully benefit AI safety?
Neglectedness
- How much funding goes toward neuroscience overall?
- It would not be surprising if our estimate of $20B/year was off by 25% in either direction.
- How much funding goes toward non-drug, central nervous system neurotechnology development specifically?
- It would not be surprising if our estimate of $4B/year was off by 50% in either direction.
- Have we omitted any important differential neurotechnology development efforts?
- How much should we expect government regulatory oversight of neurotechnologies to be a force for differential neurotechnology development?
Tractability
- Given our extreme uncertainty about how to assign dollar values to outcomes like improving subjective wellbeing and aiding in AI safety, what framework can we use to estimate the cost-effectiveness of the fundable opportunities we've presented?
Acknowledgements
Many thanks to
- Parth Ahya
- Tyson Aflalo
- Trenton Bricken
- Steve Byrnes
- Nick Cammarata
- Sasha Chapin
- Mackenzie Dion
- Daniel Eth
- Mina Fahmi
- Aleš Flídr
- Quintin Frerichs
- Ozzie Gooen
- Hamish Hobbs
- Raffi Hotter
- Robert Long
- Eliana Lorch
- Vishal Maini
- Stephen Malina
- Mike McCormick
- Evan Miyazono
- niplav
- Sumner Norman
- Jack Rafferty
- Sam Rodriques
- Peter Wildeford
- Wojciech Zaremba
for helpful comments and suggestions.
This definition includes not just electrode-based brain-computer interfaces but also chemical, biological, mechanical, and other modalities. Activities like exercise, media consumption, or AR/VR glasses are not neurotechnologies, since their effects on cognition are mediated through endogenous pathways. Meditation, hypnosis, and other modalities that influence brain function exclusively via unusual forms of conscious engagement could be considered neurotechnologies, but we won't consider them here. ↩︎
The former generally refers to diseases like Alzheimer’s, Parkinson’s, or epilepsy, which have observable pathologies of the structure or activity of neurons. The latter includes diseases like depression and ADHD that don’t, yet. ↩︎
For example, the IHME’s data suggest people estimate moderate depression to be about 2-4x worse than living with a limp (source, mild or moderate major depressive disorder vs. conditions with limp as a symptom). However, people who have suffered from both mobility issues and depression report that depression is 10x worse for their wellbeing (source, section 4.4, derived from Table 2 here). ↩︎
I think this is a really comprehensive report on this space! Nothing against the report itself; I think you did a great job.
I'm pretty cynical about current brain imaging/BCI methods.
+ I don't think getting high quality structural images of the brain is useful from an EA perspective, though it has substantial medical benefits for the people who need brain scans/can afford to get them. This just doesn't strike me as one of the most effective cause areas, in the same way that a cure for Huntington's disease would be a wonderful thing but might not qualify as a top EA cause area.
+ I don't think getting measures of brain activity via EEG or fMRI has yet produced results that I would consider worth funding from an EA perspective. Again, I'm not saying some results aren't useful (I'm especially impressed with how EEG helped us understand sleep). But I don't think any of this research is substantially relevant to preventing civilizational or existential risks.
+ I don't think our current brain stimulation methods (e.g., TMS, tDCS) have any EA relevance. The stimulation provided by these procedures (in healthy subjects) just doesn't seem to have huge cognitive effects compared to more robust methods (education, diet, exercise, sleep, etc.). Brain stimulation might have much bigger impacts for chronically depressed and Parkinson's patients via DBS. But again, I don't think this stuff is relevant to civilizational or existential risks, and I think there are probably much more cost-effective ways of improving welfare.
There may still be useful neurotechnology research to be done. But I think the highest impact will be in computational/algorithmic stuff instead of things that directly probe the human brain.
Super great to get a practitioner's perspective - thanks!
Completely agree that structural imaging, EEG and fMRI, and existing stim methods are likely not differentially important (except as enablers for other tech, e.g. structural imaging being used for targeting in transcranial approaches like TUS).
I only included these methods for completeness in the review of current R&D. They're absent from the recommendations.
My contentions above are: (1) that more advanced neurotechnologies, which are currently in clinical and preclinical development, could have large-scale impacts in 30 years (1-5 decade range), and (2) that neurotechnologies whose performance vastly exceeds the methods you mention, and which might be differentially beneficial, could be developed in 1-2 decades with concerted effort. We have fundable ideas for some of these.
I think computational neuroscience will eventually be useful, but its utility is dependent on the quality of measurement and manipulation we can achieve of the brain. Neurotechnology is the key to better measurement and manipulation.
Makes sense! Thanks again for writing such a comprehensive report!
I think this is a solid report[1] of a really interesting area.
I previously gave this as feedback; but wanted to bring it up here:
My hunch is that the #1 potential benefit of this is the gains to Wisdom and Intelligence. The main goal I see is for humans (especially near effective altruists) to become smart enough to get us over the hurdle of the upcoming X-risks.
It's possible that improvements to benevolence or coordination could be great as well.
If brain-computer interfaces become really feasible in the next 10-40 years, they seem like a huge deal to me, much more exciting than many pharmaceuticals, education advances, and many other interesting industries.
It would be great to see forecasts on this, for anyone interested.
This doesn't mean I've verified the facts; I'm not an expert in this field.
I'd love to see more work done on this question, especially how people might operationalize wisdom.
Very thorough report; it reminds me of my thesis. Sadly, I wrote that thesis just as enthusiastically a decade and a half ago. (I want to clarify this comment is only referring to the parts of this document that discuss central nervous system BCI being used outside of medical contexts.)
I started my career in central nervous system invasive and non-invasive BCI almost a decade and a half ago. I left the field to go into VC and PE when I realized how stuck it was. I read this report excitedly to see if any new technology had broken through the core barriers that explain why the field has moved so slowly, and my takeaway is that at least in those areas (central nervous system recreational BCI) we are at a complete standstill. Most of the tech you list as new in central nervous system BCI was around, or at least being talked about regularly, when I was in the field, so while it is "new" in neuroscience terms, it is nowhere close to AI timelines.
Why is central nervous system BCI so stuck? The non-invasive stuff can't get around the tissue that surrounds our brain acting as a low-pass filter (ECoG solves for this, but no one is going to get a BCI put under their skull for recreational use). As for the invasive stuff, astrocytic scar formation is still the major blocker for long-term use (basically, your brain starts to build scar tissue around the input device, and the device needs to create "louder signals" to get through the scar tissue, causing yet more scar tissue). This is solvable (and has been for a while) with immunosuppressants, but that largely rules out recreational use (which was the only thing that interested me).
Still, I am glad to see people still pushing ahead in the field. If everyone allowed themselves to become as dejected as I did just because it moves so much slower than other EA-relevant fields, nothing would ever advance.
Seems like minimally invasive ultrasound, endovascular BCI, and optogenetic cell therapy all get around the "core barriers" you cite to CNS BCI.
None were around 15 years ago.
https://pubmed.ncbi.nlm.nih.gov/33756104/
https://clinicaltrials.gov/ct2/show/NCT03834857?term=synchron&draw=2 (endovascular BCI in humans)
https://www.biorxiv.org/content/10.1101/333526v1.full
I think I did a bad job highlighting the recent successes in my attempt to be comprehensive. And I probably shouldn't have listed the iBCIs first, because readers will think that's the state of the art.
Definitely agree that non-medical BCI will have a slow adoption curve. As I discuss, though, they don't need a fast adoption curve to be relevant to AI safety.