This is a special post for quick takes by Joseph_Chu. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I tried asking ChatGPT, Gemini, and Claude to come up with a formula that converts from correlation space to probability space while preserving the relationship that a correlation of 0 maps to a probability of 1/n. I came up with such a formula a while back, so I figured it shouldn't be hard. They all offered formulas, every one of which turned out to be very wrong when I actually graphed them to check.
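
For concreteness, here's a rough sketch of the kind of mapping I mean; a simple piecewise-linear interpolation already has the required property (this isn't necessarily the exact formula I derived, just one that satisfies the constraint):

```python
def corr_to_prob(r: float, n: int) -> float:
    """Map a correlation r in [-1, 1] to a probability in [0, 1],
    with r = 0 landing on chance level 1/n.

    A piecewise-linear sketch, not necessarily the exact formula I use."""
    chance = 1.0 / n
    if r >= 0:
        return chance + r * (1.0 - chance)  # interpolate from 1/n up to 1
    return chance + r * chance              # interpolate from 0 up to 1/n

# Sanity checks against the stated constraint:
assert abs(corr_to_prob(0.0, 4) - 0.25) < 1e-12  # r = 0 -> 1/n
assert corr_to_prob(1.0, 4) == 1.0
assert corr_to_prob(-1.0, 4) == 0.0
```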

As a utilitarian, I think that surveys of happiness in different countries can serve as an indicator of how well the various societies and government systems of these countries serve the greatest good. I know this is a very rough proxy and potentially filled with confounding variables, but I noticed that the two main surveys, Gallup's World Happiness Report and Ipsos' Global Happiness Survey, seem to have very different results.

Notably, Gallup's Report puts the Nordic model countries like the Netherlands (7.403) and Sweden (7.395) near the top, with Canada (6.961) and the United States (6.894) scoring pretty well, and countries like China (5.818) scoring modestly, and India (4.036) scoring poorly.

Conversely, the Ipsos Survey puts China (91%) at the top, with the Netherlands (85%) and India (84%) scoring quite well, while the United States (76%), Sweden (74%), and Canada (74%) are more modest.

I'm curious why these surveys seem to differ so much. Obviously, the questions are different, and the scoring method is also different, but you'd expect a stronger correlation. I'm especially surprised by the differences for China and India, which seem quite drastic.

As you've pointed out, the questions are very different. The Gallup poll asks people to rank their current position in life from "the best possible" to "the worst possible" on a ten-point scale, which implies that unequal opportunities and outcomes matter a lot.

The Ipsos poll avoids any implicit comparison with how much better things could otherwise have been, or actually are for others, and simply asks whether respondents would describe themselves as (very) happy or not (at all) on a simpler four-point scale, which is collapsed to a yes/no answer for the ranking.

So Chinese and Indian people aren't being asked whether they're conscious of the many things they lack that could make their lives better, as in the Gallup poll; they're being asked whether they feel so bad about their lives that they'd describe themselves as unhappy (or, for various other questions, "unsatisfied"). People tend to be biased towards saying they're happy, and there's likely a cultural component to how willing people are to say they're not, too.

And to add to the complications, the samples are non-random and not necessarily equivalent. Ipsos acknowledges that its developing-country samples are significantly more affluent, urban, and educated than the general population, which might explain why, even when it comes to their personal finances, they're often more "satisfied" than inhabitants of countries with much higher median incomes. Gallup doesn't acknowledge the same sampling bias, but even if it's present to exactly the same extent (it's bound to be present to some extent; poor, rural, illiterate people are hard to survey randomly), it probably doesn't have the same effect. Indian professionals can simultaneously be "happy" with their secure-by-local-standards position in life and aware that their life outcomes could have been a whole lot better.

I think the stark differences are a good illustration of the limits of subjective wellbeing data, but arguably neither survey captures SWB particularly well anyway: the former because it asks people to make a comparison of [mainly objective] outcomes, and the latter because the scale is too simple to capture hedonic utility.

I've been looking at the numbers with regards to how many GPUs it would take to train a model with as many parameters as the human brain has synapses. The human brain has about 100 trillion synapses, and its connectivity is sparse and very efficient. A standard AI model fully connects every neuron in a given layer to every neuron in the previous layer, so it would be less efficient.

The H100 has 80 GB of VRAM, so assuming each parameter is 32 bits (4 bytes), you can fit about 20 billion parameters per GPU. At that rate you'd need roughly 5,000 GPUs just to hold the weights of a single human-brain-sized model. If you account for inefficiencies and the need to keep gradients, optimizer states, and activations in memory during training, you could ballpark another order of magnitude, so something like 50,000 to 100,000 GPUs might be needed.
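
Spelling out that arithmetic (assumed round figures; the 10x training overhead is just a ballpark guess):

```python
# Back-of-the-envelope memory estimate (assumed figures, not a real training plan)
synapses = 100e12        # ~100 trillion synapses, one parameter each
bytes_per_param = 4      # fp32
h100_vram = 80e9         # 80 GB of VRAM per H100

gpus_for_weights = synapses * bytes_per_param / h100_vram
print(f"GPUs just to hold the weights: {gpus_for_weights:,.0f}")  # ~5,000

# Training also needs gradients, optimizer state (Adam keeps two extra tensors
# per parameter), and activations -- call it roughly another order of magnitude.
training_overhead = 10
print(f"With a 10x training overhead: {gpus_for_weights * training_overhead:,.0f}")  # ~50,000
```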

For comparison, it's widely believed that OpenAI trained GPT-4 on about 10,000 A100s that Microsoft let them use from their Azure supercomputer, most likely the one listed as the third most powerful in the world on the Top500 list.

More recently, though, Microsoft and Meta have both moved to acquire GPU fleets in the 100,000 range, and Elon Musk's xAI managed to get a 100,000-H100 supercomputer online in Memphis.

So, in theory at least, we are nearly at the point where these labs could train a human-brain-sized model, at least in terms of memory. However, keep in mind that training such a model would take a ton of compute time. I haven't done the calculations for FLOPS yet, so I don't know whether that part is feasible.

Just some quick back-of-the-envelope analysis.

Also, even if we can train and run a model the size of the human brain, it would still be many orders of magnitude less energy efficient than an actual brain. A human brain runs on barely 20 watts; this hypothetical GPU brain would require an enormous data centre's worth of power, with each H100 alone drawing around 700 watts.
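
Roughly, multiplying those ballpark figures together (cooling, networking, and CPUs would only push this higher):

```python
gpus = 100_000
watts_per_h100 = 700     # per-GPU board power, ignoring cooling and networking
brain_watts = 20

cluster_watts = gpus * watts_per_h100
print(f"GPU power draw: {cluster_watts / 1e6:.0f} MW")                  # ~70 MW
print(f"Ratio to a human brain: {cluster_watts / brain_watts:,.0f}x")   # ~3,500,000x
```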

I have some ideas and drafts for posts to the EA Forums and Less Wrong that I've been sitting on because I feel somewhat intimidated by the level of intellectual rigor I would need to put into the final drafts to ensure I'm not downvoted into oblivion (particularly on Less Wrong, where a younger me experienced such in the early days).

Should I try to overcome this fear, or is it justified?

For the EA Forums, I was thinking about explaining my personal practical take on moral philosophy (Eudaimonic Utilitarianism with Kantian Priors), but I don't know if that's actually worth explaining given that EA tries to be inclusive and not take particular stands on morality, and it might not be relevant enough to the forum.

For Less Wrong, I have a draft of a response to Eliezer's List of Lethalities post that I've been sitting on since 2022/04/11, because I doubted it would be well received given that it tries to be hopeful and, as a former machine learning scientist, I try to challenge a lot of LW orthodoxy about AGI in it. I have tremendous respect for Eliezer, though, so I'm also uncertain whether my ideas and arguments are just harebrained foolishness that will be shot down rapidly once exposed to the real world and the incisive criticism of Less Wrongers.

The posts in both places are also now of such high quality that I feel the bar is too high for me to meet with my writing, which tends to be more of an "interesting train of thought in unformatted paragraphs" than the "point-by-point, clearly articulated, with section titles and footnotes" style that people in both places tend to employ.

Anyone have any thoughts?

Short form/quick takes can be a good compromise, and sources of feedback for later versions.

As someone who spends most of my time here critiquing EA/rationalist orthodoxy, I don't think you have much to worry about besides annoying comments. A good-faith critique presented politely is rarely downvoted.

Also, I feel like there's selection bias going on around the quality of posts. The best, super highly upvoted posts may be extremely high quality, but there are still plenty of posts that aren't (and that's fine, this is an open forum, not an academic journal). 

I'd be interested in reading your List of Lethalities response. I'm not sure it would be that badly received; for example, this response by Quintin Pope got 360 upvotes. List of Lethalities seems to be a fringe view even among AI x-risk researchers, let alone the wider machine learning community.

This is one reason why it's very common for people to write a Google doc first, share it around, update it based on feedback and then post. But this only works if you know enough people who are willing to give you feedback.

An additional option: if you don't know people who are willing to review a document and give you feedback, you could ask people in the Effective Altruism Editing and Review Facebook group to review it.

On this Forum, it is rather rare for good-faith posts to end up with net negative karma. The "worst" reasonably likely outcome is to get very little engagement with your post, which is still more engagement than it will get in your drafts folder. I can't speak to LW, though.

I also think that the appropriate reference point is not the level of the median post here, but the range of first posts from people who have since developed into recognized, successful posters.

From your description, my only concern would be whether your post sufficiently relates to EA. If it's ~80-90 percent a philosophy piece, maybe there's a better outlet for it. If it's ~50-70 percent, maybe it would work here with a brief summary of the philosophical position up front and an internal link for readers who want to jump straight to the more EA-relevant content?

I encourage you to share your ideas.

I've often felt a similar "my thoughts aren't valuable enough to share" feeling. I tend to write these thoughts as a quick take rather than as a normal forum post, and I also try to phrase my words to indicate that I'm writing rough thoughts, or observations, or something similarly non-rigorous (as a sort of signal to the reader that it shouldn't be evaluated by the same standard).

Either it'll be received well, or you get free criticism on your ideas, or a blend of the two. You win in all cases. If it gets downvoted into oblivion you can always delete it; how many deleted posts can you tie to an author? I can't name one.

Ultimately, nobody cares about you (or me, or any other random forum user). They're too busy worrying about how they'll be perceived. This is a blessing. You can take risks and nobody will really care if you fail.

Either it'll be received well, or you get free criticism on your ideas, or a blend of the two.

A tough pill for super-sensitive people like me to swallow, but I can see it as an exceptionally powerful one. I certainly sympathize with OP on the fear of being downvoted (it's what kept me away from this site for months, and from Reddit entirely), but valid criticism has on many occasions influenced me for the better, even if I'm scornful of those moments. Maybe my hurt at being wrong will lessen someday, or maybe not, but knowing why I was wrong can serve me well in the end; I can admit that.

My view is you should write/post something if you believe it's an idea that people haven't sufficiently engaged with in the past. Both of your post ideas sound like that to me.

If you have expertise on AI, don't be shy about showing it. If you aren't confident, you can frame your critiques as pointed questions, but personally I think it's better to just make your argument.

As for style, I think people will respond much better to your argument if it's clear. Clear is different from extensive; I think your example of many-sections-with-titles-and-footnotes conflates those two. That format is valuable for giving structure to your argument, not for being a really extensive argument that covers every possible ground. I agree that "interesting train of thought in unformatted paragraphs" won't likely be received well in either venue. I think it's good communication courtesy to make your ideas clear to people who you are trying to convey them to. Clear structure is your friend, not a bouncer keeping you out of the club.

Post links to Google Docs as quick takes, if posting proper posts feels like too high a bar?

I'm starting to think it was a mistake for me to engage in this debate week thing. I just spent a good chunk of my baby's first birthday arguing with strangers on the Internet about what amounts to animals vs. humans. This does not seem like a good use of my time, but I'm too pedantic to resist replying to comments I feel the need to reply to. -_-

In general, I feel like this debate week thing seems somewhat divisive as well. At least, it doesn't feel nice to have so many disagrees on my posts, even if they still somehow got a positive amount of karma.

I really don't have time to make high-effort posts, and it seems like low-effort posts do a disservice to people who are making high-effort posts, so I might just stop.

So, I read a while back that SBF apparently posted on Felicifia back in the day. Felicifia was an old Utilitarianism focused forum that I used to frequent before it got taken down. I checked an archive of it recently, and was able to figure out that SBF actually posted there under the name Hutch. He also linked a blog that included a lot of posts about Utilitarianism, and it looks like, at least around 2012, he was a devoted Classical Benthamite Utilitarian. Although we never interacted on the forum, it feels weird that we could have crossed paths back then.

His Felicifia: https://felicifia.github.io/user/1049.html
His blog: https://measuringshadowsblog.blogspot.com/

I'm wondering what people's opinions are on how urgent alignment work is. I'm a former ML scientist who previously worked at Maluuba and Huawei Canada, but switched industries into game development, at least in part to avoid contributing to AI capabilities research. I tried earlier to interview with FAR and Generally Intelligent, but didn't get in. I've also done some cursory independent AI safety research on interpretability and game-theoretic ideas in my spare time, though nothing interesting enough to publish yet.

My wife also recently had a baby, and caring for him is a substantial time sink, especially for the next year until daycare starts. Is it worth considering things like hiring a nanny, if it'll free me up to actually do more AI safety research? I'm uncertain if I can realistically contribute to the field, but I also feel like AGI could potentially be coming very soon, and maybe I should make the effort just in case it makes some meaningful difference.

It's really hard to know without knowledge of how much a nanny costs, your financial situation and how much you'd value being able to look after your child yourself.

If you'd be fine with a nanny looking after your child, then it is likely worthwhile spending a significant amount of money in order to discover whether you would have a strong fit for alignment research sooner.

I would also suggest that switching out of AI completely was likely a mistake. I'm not suggesting that you should have continued advancing fundamental AI capabilities, but the vast majority of jobs in AI relate to building AI applications rather than advancing fundamental capabilities. Those jobs won't have a significant effect on shortening timelines, but they will allow you to further develop your skills in AI.

Another thing to consider: if at some point you decide that you're unlikely to break into technical AI safety research, it may be worthwhile to look at contributing in an auxiliary manner, e.g. through mentorship, teaching, or movement-building.

So, a while back I came up with an obscure idea I called the Alpha Omega Theorem and posted it on the Less Wrong forums. Given that there's only one post about it, it shouldn't be something that LLMs would know about. So in the past, I'd ask them "What is the Alpha Omega Theorem?", and they'd always make up some nonsense about a mathematical theory that doesn't actually exist. More recently, Google Gemini and Microsoft Bing Chat would use search to find my post and use that as the basis for their explanation. However, I only have the free versions of ChatGPT and Claude, so they don't have access to the Internet and would make stuff up.

A couple of days ago I tried the question on ChatGPT again, and GPT-4o managed to correctly say that there isn't a widely known concept by that name in math or science, and basically said it didn't know. Claude still makes up a nonsensical math theory. Today I also tried telling Google Gemini not to use search, and it likewise said it did not know rather than making stuff up.

I'm actually pretty surprised by this. Looks like OpenAI and Google figured out how to reduce hallucinations somehow.

I ran out of the usage limit for GPT-4o (it seems to be just 10 prompts every 5 hours) and it switched to GPT-4o mini. I tried asking it the Alpha Omega question and it made up some math nonsense, so it seems like the model matters here for some reason.

I recently interviewed with Epoch, and as part of a paid work trial they wanted me to write up a blog post about something interesting related to machine learning trends. This is what I came up with:

http://www.josephius.com/2022/09/05/energy-efficiency-trends-in-computation-and-long-term-implications/
