Bio


I currently work with the CE/AIM-incubated charity ARMoR on research distillation, quantitative modelling, and general org-boosting, supporting policy advocacy for market-shaping tools that incentivise innovation in, and ensure access to, antibiotics to help combat antimicrobial resistance (AMR).

I previously did AIM's Research Training Program, and was supported by an FTX Future Fund regrant and later by Open Philanthropy's affected-grantees program. Before that I spent 6 years doing data analytics, business intelligence, and knowledge and project management in various industries (airlines, e-commerce) and departments (commercial, marketing), after majoring in physics at UCLA and changing my mind about becoming a physicist. I've also initiated some local priorities research efforts, e.g. a charity evaluation initiative with the moonshot aim of reorienting my home country Malaysia's giving landscape towards effectiveness, albeit with mixed results.

I first learned about effective altruism circa 2014 via A Modest Proposal, Scott Alexander's polemic on using dead children as units of currency to force readers to grapple with the opportunity costs of subpar resource allocation under triage. I have never stopped thinking about it since, although my relationship to it has changed quite a bit; I related to Tyler's personal story (which unsurprisingly also references A Modest Proposal as a life-changing polemic):

I thought my own story might be more relatable for friends with a history of devotion – unusual people who’ve found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It’s the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life].

How others can help me

I'm looking for "decision guidance"-type roles e.g. applied prioritization research.

How I can help others

Do reach out if any of the above piques your interest :)

Comments

Topic contributions

This writeup by Vadim Albinsky at Founders Pledge seems related: Are education interventions as cost effective as the top health interventions? Five separate lines of evidence for the income effects of better education [Founders Pledge] 

The part that seems relevant is the charity Imagine Worldwide's use of the "adaptive software" OneBillion app to teach numeracy and literacy. Despite Vadim's several discounts and general conservatism throughout his CEA, he still gets ~11x GiveDirectly's cost-effectiveness. (I'd honestly thought, given the upvotes and engagement on the post, that Vadim had changed some EAs' minds on the promisingness of non-deworming education interventions.) The OneBillion app doesn't seem to use AI, but they already (paraphrasing) use "software to provide a complete, research-based curriculum that adapts to each child's pace, progress, and cultural and linguistic context", so I'm not sure how much better Copilot / Rori would be?

Quoting some parts that stood out to me (emphasis mine):

This post argues that if we look at a broad enough evidence base for the long term outcomes of education interventions we can conclude that the best ones are as cost effective as top GiveWell grants. ... 

... I will argue that the combined evidence for the income impacts of interventions that boost test scores is much stronger than the evidence GiveWell has used to value the income effects of fighting malaria, deworming, or making vaccines, vitamin A, and iodine more available. Even after applying very conservative discounts to expected effect sizes to account for the applicability of the evidence to potential funding opportunities, we find the best education interventions to be in the same range of cost-effectiveness as GiveWell’s top charities. ...

When we apply the above recommendations to our median recommended education charity, Imagine Worldwide, we estimate that it is 11x as cost effective as GiveDirectly at boosting well-being through higher income. ...

Imagine Worldwide (IW) provides adaptive software to teach numeracy and literacy in Malawi, along with the training, tablets and solar panels required to run it. They plan to fund a six-year scale-up of their currently existing program to cover all 3.5 million children in grades 1-4 by 2028. The Malawi government will provide government employees to help with implementation for the first six years, and will take over the program after 2028. Children from over 250 schools have received instruction through the OneBillion app in Malawi over the past 8 years. Five randomized controlled trials of the program have found learning gains of an average of 0.33 standard deviations.  The OneBillion app has also undergone over five additional RCTs in a broad range of contexts with comparable or better results.

That's heartbreaking. Thanks for the pointer.

I just learned that Trump signed an executive order last night withdrawing the US from the WHO; this is his second attempt to do so. 

WHO thankfully weren't caught totally unprepared. Politico reports that last year they "launched an investment round seeking some $7 billion “to mobilize predictable and flexible resources from a broader base of donors” for the WHO’s core work between 2025 and 2028. As of late last year, the WHO said it had received commitments for at least half that amount".

Full text of the executive order below: 

WITHDRAWING THE UNITED STATES FROM THE WORLD HEALTH ORGANIZATION 

By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered: 

Section 1.  Purpose.  The United States noticed its withdrawal from the World Health Organization (WHO) in 2020 due to the organization’s mishandling of the COVID-19 pandemic that arose out of Wuhan, China, and other global health crises, its failure to adopt urgently needed reforms, and its inability to demonstrate independence from the inappropriate political influence of WHO member states.  In addition, the WHO continues to demand unfairly onerous payments from the United States, far out of proportion with other countries’ assessed payments.  China, with a population of 1.4 billion, has 300 percent of the population of the United States, yet contributes nearly 90 percent less to the WHO.  

Sec. 2.  Actions.  (a)  The United States intends to withdraw from the WHO.  The Presidential Letter to the Secretary-General of the United Nations signed on January 20, 2021, that retracted the United States’ July 6, 2020, notification of withdrawal is revoked.

(b)  Executive Order 13987 of January 25, 2021 (Organizing and Mobilizing the United States Government to Provide a Unified and Effective Response to Combat COVID–19 and to Provide United States Leadership on Global Health and Security), is revoked.

(c)  The Assistant to the President for National Security Affairs shall establish directorates and coordinating mechanisms within the National Security Council apparatus as he deems necessary and appropriate to safeguard public health and fortify biosecurity.

(d)  The Secretary of State and the Director of the Office of Management and Budget shall take appropriate measures, with all practicable speed, to:

(i)    pause the future transfer of any United States Government funds, support, or resources to the WHO;

(ii)   recall and reassign United States Government personnel or contractors working in any capacity with the WHO; and  

(iii)  identify credible and transparent United States and international partners to assume necessary activities previously undertaken by the WHO.

(e)  The Director of the White House Office of Pandemic Preparedness and Response Policy shall review, rescind, and replace the 2024 U.S. Global Health Security Strategy as soon as practicable. 

Sec. 3.  Notification.  The Secretary of State shall immediately inform the Secretary-General of the United Nations, any other applicable depositary, and the leadership of the WHO of the withdrawal.

Sec. 4.  Global System Negotiations.  While withdrawal is in progress, the Secretary of State will cease negotiations on the WHO Pandemic Agreement and the amendments to the International Health Regulations, and actions taken to effectuate such agreement and amendments will have no binding force on the United States.  

Sec. 5.  General Provisions.  (a)  Nothing in this order shall be construed to impair or otherwise affect: 

(i)   the authority granted by law to an executive department or agency, or the head thereof; or 

(ii)  the functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals. 

(b)  This order shall be implemented consistent with applicable law and subject to the availability of appropriations. 

(c)  This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person. 

THE WHITE HOUSE,

    January 20, 2025.

Many heads are more utilitarian than one by Anita Keshmirian et al is an interesting paper I found via Gwern's site. Gwern's summary of the key points: 

  • Collective consensual judgments made via group interactions were more utilitarian than individual judgments.
  • Group discussion did not change the individual judgments indicating a normative conformity effect.
  • Individuals consented to a group judgment that they did not necessarily buy into personally.
  • Collectives were less stressed than individuals after responding to moral dilemmas.
  • Interactions reduced aversive emotions (eg. stressed) associated with violation of moral norms.

Abstract: 

Moral judgments have a very prominent social nature, and in everyday life, they are continually shaped by discussions with others. Psychological investigations of these judgments, however, have rarely addressed the impact of social interactions.

To examine the role of social interaction on moral judgments within small groups, we had groups of 4 to 5 participants judge moral dilemmas first individually and privately, then collectively and interactively, and finally individually a second time. We employed both real-life and sacrificial moral dilemmas in which the character’s action or inaction violated a moral principle to benefit the greatest number of people. Participants decided if these utilitarian decisions were morally acceptable or not.

In Experiment 1, we found that collective judgments in face-to-face interactions were more utilitarian than the statistical aggregate of their members compared to both first and second individual judgments. This observation supported the hypothesis that deliberation and consensus within a group transiently reduce the emotional burden of norm violation. 

In Experiment 2, we tested this hypothesis more directly: measuring participants’ state anxiety in addition to their moral judgments before, during, and after online interactions, we found again that collectives were more utilitarian than those of individuals and that state anxiety level was reduced during and after social interaction.

The utilitarian boost in collective moral judgments is probably due to the reduction of stress in the social setting.

I wonder if this means that individual EAs might find EA principles more emotionally challenging than group-level surveys might suggest. It also seems a bit concerning that group judgments may naturally skew utilitarian simply by virtue of being groups, rather than through improved moral reasoning (and I say this as someone for whom utilitarianism is the largest "party" in my moral parliament). 

I think you're not quite engaging with Johan's argument for the necessity of worldview diversification if you assume it's primarily about risk reduction or diminishing returns. My reading of their key point is that we don't just have uncertainty about outcomes (risk), but also uncertainty about the moral frameworks by which we evaluate those outcomes (moral uncertainty), combined with deep uncertainty about long-term consequences (complex cluelessness). Together these lead to fundamental uncertainty in our ability to calculate expected value at all (even if we hypothetically want to as EV-maximisers, itself a perilous strategy), and it's these factors that make them think worldview diversification can be the right approach even at the individual level.

Much appreciated, thanks again Vasco.

I don't see why acausal trade makes infinite ethics decision-relevant, for essentially the reasons Manheim & Sandberg discuss in Section 4.5: acausal trade alone doesn't imply infinite value; as their footnote 41 puts it, "In mainstream cosmological theories, there is a single universe, and the extent can be large but finite even when considering the unreachable portion (e.g. in closed topologies). In that case, these alternative decision theories are useful for interaction with unreachable beings, or as ways to interact with powerful predictors, but still do not lead to infinities"; and physical limits on information storage and computation would still apply to any acausal coordination.

I'll look into Wilkinson's paper, thanks.

Manheim and Sandberg address your objection in the paper persuasively (to me personally), so let me quote them, since directly addressing these arguments might change my mind. @MichaelStJules I'd be keen to get your take on this as well. (I'm not quoting the footnotes, even though they were key to persuading me too.)

Section 4.1, "Rejecting Physics":

4.1.1 Pessimistic Meta-induction and expectations of falsification

The pessimistic meta-induction warns that since many past successful scientific theories were found to be false, we have no reason to expect that our currently successful theories are approximately true. Hence, for example, the above constraints on information processing are not guaranteed to imply finitude. Indeed, many of them are based on information physics that is weakly understood and liable to be updated in new directions. If physics in our universe does, in fact, allow for access to infinite matter, energy, time, or computation through some as-yet-undiscovered loophole, it would undermine the central claim to finitude.

This criticism cannot be refuted, but there are two reasons to be at least somewhat skeptical. First, scientific progress is not typically revisionist, but rather aggregative. Even the scientific revolutions of Newton, then Einstein, did not eliminate gravity, but rather explained it further. While we should regard the scientific input to our argument as tentative, the fallibility argument merely shows that science will likely change. It does not show that it will change in the direction of allowing infinite storage. Second, past results in physics have increasingly found strict bounds on the range of physical phenomena rather than unbounding them. Classical mechanics allow for far more forms of dynamics than relativistic mechanics, and quantum mechanics strongly constrain what can be known and manipulated on small scales.

While all of these arguments in defense of physics are strong evidence that it is correct, it is reasonable to assign a very small but non-zero value to the possibility that the laws of physics allow for infinities. In that case, any claimed infinities based on a claim of incorrect physics can only provide conditional infinities. And those conditional infinities may be irrelevant to our decisionmaking, for various reasons.

4.1.2 Boltzmann Brains, Decisions, and the indefinite long-term

One specific possible consideration for an infinity is that after the heat-death of the universe there will be an indefinitely long period where Boltzmann brains can be created from random fluctuations. Such brains are isomorphic to thinking human brains, and in the infinite long-term, an infinite number of such brains might exist [ 34]. If such brains are morally relevant, this seems to provide a value infinity.

We argue that even if these brains have moral value, it is by construction impossible to affect their state, or the distribution of their states. This makes their value largely irrelevant to decision-making, with one caveat. That is, if a decision-maker believes that these brains have positive or negative moral value, it could influence decisions about whether decisions that could (or would intentionally) destroy space-time, for instance, by causing a false-vacuum collapse. Such an action would be a positive or negative decision, depending on whether the future value of a non-collapsed universe is otherwise positive or negative. Similar and related implications exist depending on whether a post-collapse universe itself has a positive or negative moral value.

Despite the caveat, however, a corresponding (and less limited) argument can be made about decisionmaking for other proposed infinities that cannot be affected. For example, inaccessible portions of the universe, beyond the reachable light-cone, cannot be causally influenced. As long as we maintain that we care about the causal impacts of decisions, they are irrelevant to decisionmaking.

Section 4.2.4 more directly addresses the objection, I think. (Unfortunately the copy-pasting doesn't preserve the mathematical formatting, so perhaps it'd be clearer to just look at page 12 of their paper; in particular I've simplified their notation for $1 in 2020 to just $1):

4.2.4 Bounding Probabilities

As noted above, any act considered by a rational decision maker, whether consequentialist or otherwise, is about preferences over a necessarily finite number of possible decisions. This means that if we restrict a decision-maker or ethical system to finite, non-zero probabilities relating to finite value assigned to each end state, we end up with only finite achievable value. The question is whether probabilities can in fact be bounded in this way.

We imagine Robert, faced with a choice between getting $1 with certainty, and getting $100 billion with some probability. Given that there are two choices, Robert assigns utility in proportion to the value of the outcome weighted by the probability. If the probability is low enough, yet he chooses the option, it implies that the value must be correspondingly high. 

As a first argument, imagine Robert rationally believes there is a probability of 10^−100 of receiving the second option, and despite the lower expected dollar value, chooses it. This implies that he values receiving $100 billion at approximately 10^100x the value of receiving $1. While this preference is strange, it is valid, and can be used to illustrate why Bayesians should not consider infinitesimal probabilities valid.

To show this, we ask what would be needed for Robert to be convinced this unlikely event occurred. Clearly, Robert would need evidence, and given the incredibly low prior probability, the evidence would need to be stupendously strong. If someone showed Robert that his bank balance was now $100 billion higher, that would provide some evidence for the claim—but on its own, a bank statement can be fabricated, or in error. This means the provided evidence is not nearly enough to convince him that the event occurred. In fact, with such a low prior probability, it seems plausible that Robert could have everyone he knows agree that it occurred, see newspaper articles about the fact, and so on, and given the low prior odds assigned, still not be convinced. Of course, in the case that the event happened, the likelihood of getting all of that evidence will be much higher, causing him to update towards thinking it occurred.

A repeatable experiment which generates uncorrelated evidence could provide far more evidence over time, but complete lack of correlation seems implausible; checking the bank account balance twice gives almost no more evidence than checking it once. And as discussed in the appendix, even granting the possibility of such evidence generation, the amount possible is still bounded by available time, and therefore finite.

Practically, perhaps the combination of evidence reaches odds of 10^50:1 that the new money exists versus that it does not. Despite this, if he truly assigned the initially implausibly low probability, any feasible update would not be enough to make the event, receiving the larger sum, be a feasible contender for what Robert should conclude. Not only that, but we posit that a rational decision maker should know, beforehand, that he cannot ever conclude that the second case occurs. 

If he is, in fact, a rational decision maker, it seems strange to the point of absurdity for him to choose something he can never believe occurred, over the alternative of a certain small gain. 

Generally, then, if an outcome is possible, at some point a rational observer must be able to be convinced, by aggregating evidence, that it occurred. Because evidence is a function of physical reality, the possible evidence is bounded, just as value itself is limited by physical constraints. We suggest (generously) that the strength of this evidence is limited to odds of the number of possible quantum states of the visible universe — a huge but finite value — to 1. If the prior probability assigned to an outcome is too low to allow for a decision maker to conclude it has occurred given any possible universe, no matter what improbable observations occur, we claim the assigned probability is not meaningful for decision making. As with the bound on lexicographic preferences, this bound allows for an immensely large assignment of value, even inconceivably so, but it is again still finite.
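To make the arithmetic in Robert's example concrete, here's a small sketch of the odds-form Bayesian update the quoted passage walks through. The 10^-100 prior and the 10^50:1 evidence ratio are the paper's hypothetical numbers, not anything I'm adding:

```python
from fractions import Fraction

# Robert's prior probability that he actually receives the $100 billion
# (the paper's hypothetical 10^-100).
prior = Fraction(1, 10**100)

# Combined strength of all feasible evidence, expressed as a
# likelihood ratio (odds update factor) of 10^50 : 1.
likelihood_ratio = Fraction(10**50, 1)

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

# Even after the strongest feasible 10^50 : 1 update, the posterior is
# still around 10^-50: Robert can never rationally conclude the event
# occurred, which is exactly the paper's point about bounding probabilities.
print(float(posterior))
```

So if the prior is low enough that no physically possible body of evidence can raise it to a credible level, the paper argues that prior wasn't a meaningful input to decision-making in the first place.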

Thanks Vasco. While I agree with what I interpret to be your actionable takeaway (to ethically act as if our actions' consequences are finitely circumscribed in time and space), I don't see where your confidence comes from that the effects of one's actions decay to practically 0 after at most around 100 years, especially given that longtermists explicitly seek and focus on such actions. I'm guessing you have a writeup on the forum elaborating on your reasoning, in which case would you mind linking to it?
