huw

Co-Founder & CTO @ Kaya Guides
1867 karma · Working (6–15 years) · Sydney NSW, Australia
huw.cool

Bio


I live for a high disagree-to-upvote ratio

Comments (261)

No, but I think that’s reasonable in most cases (although hard to figure out exactly how to allocate it).

I didn’t. It evidently works—as do cooperatives, which I was also excited to found—but I think the big worry is at the top end. It’s very hard to imagine a FAANG company structured this way. And some of the average-case calculations above are skewed upwards by a handful of top success stories.

huw

I looked at a handful of mental health startups to inform my guesses on impact. I looked most deeply into BetterHelp, and you can clearly see from their numbers that their prices have almost doubled in 5 years (and steadily, too; this wasn’t a COVID thing). From the research I did, my sense was that the extra revenue wasn’t getting passed back to their counsellors, nor fuelling an increase in growth spending. There’s no way it got twice as expensive to deliver therapy.

I think if we had to point to a single mechanism, it’s that once you run out of user growth—as BetterHelp have—your investors push you to find an equilibrium price. That price is necessarily going to be higher than the price that guarantees the broadest impact, and likely to be higher than the price for the most (breadth × depth) impact.
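To illustrate with a toy model (numbers entirely made up, assuming a linear demand curve and that the price at least has to cover costs):

import numpy as np

# Toy demand curve: number of users served at each monthly price (made-up numbers)
prices = np.linspace(20, 100, 81)   # candidate prices, starting at cost (break-even)
cost_per_user = 20
users = np.maximum(0, 1000 - 10 * prices)

profit = users * (prices - cost_per_user)
impact = users                      # assume roughly constant impact per user served

p_profit = prices[profit.argmax()]
p_impact = prices[impact.argmax()]
print(f"Profit-maximising price: ${p_profit:.0f} -> {users[profit.argmax()]:.0f} users")
print(f"Breadth-maximising (break-even) price: ${p_impact:.0f} -> {users[impact.argmax()]:.0f} users")

The exact numbers don’t matter; the point is that the profit-maximising (equilibrium) price serves far fewer people than a lower, still-sustainable price would.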

My best guess is regional pricing can act as a crude form of means-testing, but this probably comes with a perverse incentive to ignore the cheaper regions (as BetterHelp have—almost all of their users are in the U.S.).

(All of that goes out the window if you don’t go direct to consumers—I think deeper forms of healthtech might be quite value-aligned!)

What do these numbers represent?

How much money a CE charity might be able to raise on average. (This assumes that deployed cash from a charity is roughly equivalent to donated cash from Founding to Give, which is what the other numbers represent.)

huw

I actually made this exact decision, just in the opposite direction! Last year, I had a pending offer from AIM’s Founding to Give programme, and another to be CTO at Kaya Guides (my current role).

Odds & magnitude of success

For AIM incubatees, using historical data, I calculated the following (sketched in code after the list):

  • 30% of CE incubatees received funding from a top funder (OP or GiveWell)
  • About 2/3 of funding for CE incubatees came from OP and GiveWell, so we can multiply by 3/2 to get the total funding amount they would’ve received
  • Based on real-world grant data from OP and GiveWell, the median AIM charity could expect to receive US$1.4M in lifetime funding, the mean could get US$20M, and the top handful (i.e. a GiveWell Top Charity) could get hundreds of millions
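As a toy sketch of that arithmetic (the OP/GiveWell grant figure below is a placeholder, not the real dataset):

top_funder_share = 2 / 3        # fraction of incubatee funding that comes from OP/GiveWell
median_op_gw_grants = 0.93e6    # placeholder: median lifetime OP/GW grants to one charity

# Scale by 3/2 to estimate total lifetime funding from all sources
median_total_funding = median_op_gw_grants / top_funder_share
print(f"Implied median lifetime funding: ~${median_total_funding / 1e6:.1f}M")   # ~$1.4M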

For FTG incubatees, I borrowed from AIM’s own BOTEC, but substituted in more real-world data, where I had it, to model the distributions. This models:

  • The rate of incubator (e.g. YC) acceptance should be about 20–60%, given that ~40% of CE charities don’t languish.
  • The historical odds of a YC company getting a substantial exit are about 11–16%
  • AIM then model out company valuations and founder shares at exit.
  • The median YC startup might yield about US$2M donations at exit, and the mean about US$56M
  • The median YC healthtech startup might yield about US$2M at exit, and the mean about US$4M. (Much less of a long tail)

A healthtech startup might also have significant positive impact through its work, so I decided to model that in too. It increased the potential value by about 5× on average.

Multiplying through, here are the summary statistics I came up with for each option (in value-equivalent US dollars):

CE incubatee

  • p5: ~$100k
  • Median: ~$500K
  • Mean: ~$6M
  • p95: ~$23M

FTG incubatee

  • p5: ~$0 (actually negative in the model, but intuitively, many startups fail, make no money, and have no positive impact)
  • Median: ~$2M
  • Mean: ~$56M
  • p95: ~$11B

FTG incubatee (healthtech)

  • p5: ~$6M
  • Median: ~$14M
  • Mean: ~$21M
  • p95: ~$64M

For Kaya Guides specifically, I also added a counterfactual impact factor that bumped it up a little bit, because I think we have a rare opportunity to pioneer a new intervention globally.

Here’s the notebook I used to calculate all this. Maybe I should write this up as a full post someday.
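(If you don’t want to open it, here is a heavily simplified sketch of the kind of Monte Carlo model it contains. It isn’t the notebook itself: the lognormal choice and the parameters are placeholders fitted loosely to the point estimates above, so it won’t reproduce my exact figures.)

import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of simulated worlds

def lognormal(median, mean, size):
    # Parameterise a lognormal by its median and mean:
    # median = exp(mu), mean = exp(mu + sigma^2 / 2)
    mu = np.log(median)
    sigma = np.sqrt(2 * (np.log(mean) - mu))
    return rng.lognormal(mu, sigma, size)

# FTG path: incubator acceptance, then a substantial exit, then donations at exit
accepted = rng.random(N) < 0.40                 # incubator acceptance (20-60% range)
exited = rng.random(N) < 0.13                   # substantial exit (11-16% range)
ftg_value = np.where(accepted & exited, lognormal(2e6, 56e6, N), 0.0)

# CE path: chance of top-funder support, then lifetime funding deployed
funded = rng.random(N) < 0.30
ce_value = np.where(funded, lognormal(1.4e6, 20e6, N), 0.0)

# (The full model also covers the healthtech variant, the ~5x direct-impact
# multiplier, and the counterfactual adjustment for Kaya Guides.)
for name, v in [("FTG incubatee", ftg_value), ("CE incubatee", ce_value)]:
    wins = v[v > 0]
    print(f"{name}: mean ~${v.mean() / 1e6:.0f}M | given success: "
          f"median ~${np.median(wins) / 1e6:.0f}M, p95 ~${np.percentile(wins, 95) / 1e6:.0f}M")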

Personal fit / corporate pressure

I spoke to a great many people, and one thing that came up repeatedly is that if you’re in the for-profit game to have a direct impact, you should probably forget about it. Opinions differ a lot here, but I’ve seen the inside of a handful of tech companies—and many more products from the outside—and I generally believe that by the time you’re big enough to have a significant impact, your investors will quite likely pressure you to squeeze money out of it in a way that ruins said impact. This is especially true if you’re relying on some of your impact coming from donated exit money, which will, in all likelihood, have reduced your control over the company.

The rest of my decision here is personal, but I’m still not fully sold on Earning to Give. I tried it earlier in my career, and I found that the toll of not doing direct work was so mentally debilitating that I wasn’t able to effectively work that hard. I think that being a highly successful startup founder is probably really hard, and I’m evidently not constitutionally motivated enough to just make money.

Conclusion

Broadly, I would see the decision like this: I think founding a startup and founding a charity have highly transferable skillsets, but require extremely different personalities and constitutions. For me, then, this narrowed the FTG paths I would be effective at to just healthtech, which had a much lower ceiling. Frankly, my modelled value for healthtech wasn’t meaningfully larger than my modelled value for Kaya Guides, and at this point, I just went with my gut feeling.

(I know this comment is a bit structurally messy but I hope it elucidates something of value and provides some harder numbers to Nick’s intuitions)

huw

The plan for FTG is to get the founders into those programmes or to good seed funding; it’s trying to fill a different niche in the market, which is value-aligned co-founder matching (harder than you think!).

huw

Per Bloomberg, the Trump administration is considering restricting the equivalency determination for 501(c)(3)s as early as Tuesday. The equivalency determination allows 501(c)(3)s to regrant money to foreign, non-tax-exempt organisations while maintaining their tax-exempt status, so long as an attorney or tax practitioner attests that the organisation is equivalent to a domestic tax-exempt one.

I’m not an expert on this, but it sounds really bad. I guess it remains to be seen if they go through with it.

Regardless, the administration is allegedly also preparing to directly strip environmental and political non-profits (i.e. groups Trump doesn’t like, not necessarily just any policy org) of their tax-exempt status. In the past week, he’s also floated trying to rescind the tax-exempt status of Harvard. From what I understand, such an Executive Order would be illegal under U.S. law (to whatever extent that matters anymore), unless Trump instructs the State Department to designate them foreign terrorist organisations, at which point all their funds are frozen too.

These are dark times. Stay safe 🖤

For you or others reading this, I can really recommend protesting if you’re not already. I also doubt it passes the ITN test (although I wouldn’t discount it!), but it does provide (a) a good outlet for your feelings and (b) a real sense that you’re not alone, that there are people out there who are gonna fight alongside you. I come back from protests feeling a mix of emotions, but depressed and disempowered are rarely among them.

huw

A few quotes I wanna speak on:

“Richard Hanania specifically was coming, and Hanania was one of the several speakers cited in the Guardian article as like a, person who had like a troubled background.”

“I just think Hanania himself as a person has been, growing. He recently voiced strong support for the Shrimp welfare project.”

I think it’s heavy downplaying—potentially even disingenuous—to leave racism out of the discussion when talking about Hanania. It’s a demonstrable fact that he wrote for neo-Nazi and white supremacist organisations in the past, but when Austin talks about him ‘growing’, it’s not that he has denounced this work (which, FWIW, he has), but that he now supports animal welfare. It’s a bit of a non sequitur; nobody is arguing he used to be racist against shrimp?

“it is the case that we care a lot about things that are outside the traditional overton window, you might say, like modifying, genetics of, embryos, for example, or like doing screening on embryos. It's like a kind of thing that we had people come and talk about, and sometimes it's very controversial”

The same goes for the other speakers. They aren’t controversial because of their opinions on embryo selection. They are controversial because they routinely endorse human biodiversity. Austin knows this, because all of the controversy around Manifest was about the topic of human biodiversity.

“I think it is probably the case that like, because of the speakers, we chose some people who are more on the, oh, I want to like, argue about race. thought oh, this is the conference for me”

“I'm worried actually about something like an evaporative cooling effect where people who are more sensitive to this, stop showing up to manifests and people who feel like, oh yeah, I want to argue about, I do, things do show up.”

Evidently, Austin understands something about the dynamics here. But language such as ‘people who are more sensitive to this’ suggests that he doesn’t believe the racism itself is the problem; rather, the problem is the reactions of a particular profile of person.

I don’t feel like Austin has internalised that people aren’t merely offended or sensitive to racism; they are harmed by it, and want to both avoid spaces that cause them harm, and prevent future harm caused by spreading those ideas. The difference is that offence is a reaction that you can behaviourally train yourself out of, but harm is a thing that is done to you.

More broadly, Austin repeatedly speaks about trade-offs between ‘winning’ (success, sometimes framed as harmony) and ‘standing up for what’s right’, which is sometimes framed as a form of truth-seeking. But this implicitly frames inquiry into and discussion of human biodiversity as a form of truth-seeking. David Thorstad has already written at length about why that’s harmful, so I’ll defer to his work on that.

Microsoft continue to pull back on their data centre plans, in a trend that’s been going on for the past few months, since before the tariff crash (Archive).

Frankly, the economics of this seem complex (the article mentions it’s cheaper to build data centres slowly, if you can), so I’m not super sure how to interpret this, beyond that this probably rules out the most aggressive timelines. I’m thinking about it like this:

  • Sam Altman and other AI leaders are talking about AGI 2027, at which point every dollar spent on compute yields more than a dollar of revenue, with essentially no limits
  • Their models are requiring exponentially more compute for training (e.g. Grok 3, GPT-5) and inference (e.g. o3), but producing… idk, models that don’t seem to be exponentially better?
  • Regardless of the breakdown in relationship between Microsoft and OpenAI, OpenAI can’t lie about their short- and medium-term compute projections, because Microsoft have to fulfil that demand
  • Even in the long term, Microsoft are on Stargate, so still have to be privy to OpenAI’s projections even if they’re not exclusively fulfilling them
  • Until a few days ago, Microsoft’s investors were spectacularly rewarding them for going all in on AI, so there’s little investor pressure to be cautious

So if Microsoft, who should know the trajectory of AI compute better than anyone, are ruling out the most aggressive scaling scenarios, what do/did they know that contradicts AGI by 2027?
