This is a special post for quick takes by Omega. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
Brief reflections on the Conjecture post and its reception
(Written by the non-technical primary author)
Reception was a lot more critical than I expected. As last time, many good points were raised that highlighted areas where we weren't clear.
We shared the post with reviewers (especially ones we would expect to disagree with us), hoping to pre-empt these criticisms. They gave useful feedback.
However, what we didn't realize was that the people engaging with our post in the comments were quite different from our reviewers and didn't share the background knowledge that our reviewers did.
We included our bottom-line views (based on previous feedback that we didn't do this enough), and I think it's those views that felt very strong to people.
It's really, really hard to share the right level of detail and provide adequate context. I think this post managed to be both too short and too long.
Too short: because we didn't make as many explicit comparisons benchmarking the research.
Too long: because we felt we needed to add context on several points that weren't obvious to low-context readers.
When editing a post, it's pretty challenging to figure out what background you can assume and what your reader won't know, because readers have a broad range of knowledge. I think nested thoughts could be helpful for keeping posts to a reasonable length.
We initially didn't give as much detail in some areas because the other (technical) author is time-limited and didn't think it was critical. The post editing process is extremely long for a post of this size and gravity, so we had to make decisions on when to stop iterating.
Overall, I think the post still generated some interesting and valuable discussion, and I hope it at the very least causes people to think more critically about where they end up working.
I am sad that Conjecture didn't engage with the post as much as we would have liked.
I think it's difficult to strike a balance between 'say what you believe to be true' and 'write something people aren't put off by'.
I think some people expected their views to be reflected in our critique. I'm sympathetic to that to some extent, but I think you can err too far in that direction (and I've seen pushback the other way as well). It feels like with this post, people felt very strongly (many comments were pretty strongly stated), such that it wasn't just a disagreement; people felt it was a hit piece.
I think I want to get better at communicating that, ultimately, these are the views of a very small group of people, these topics are very high uncertainty, and there will be disagreements, but that doesn't mean we have a hidden agenda or something we are trying to push. (I'll probably add this to our intro).
I'd be thrilled to see others write their own evaluations of these or other orgs.
We didn't do some super basic things which feel obvious in retrospect, e.g. explaining why we are writing this series. That context is important when people are primed to respond negatively to a post.
Changes we plan to make:
Recruiting "typical" readers for our next review round
Hiring a copyeditor so we can spend more time on substance
Figuring out other ways to save time. Ideally, we'd get other technical contributors on board (it would improve the quality and hopefully provide a slightly different perspective). Unfortunately, it's hard to get people to do unpaid, anonymous work that might get a lot of pushback.
Posting an intro post with basic context we can point people to
The next post will be on Anthropic and have (substantively) different critiques. I'd ideally like to spend some time figuring out how to murphyjitsu it so that we can meet people where they are at.
I want to ensure that we get more engagement from Anthropic (although I can imagine they might not engage much, for different reasons than Conjecture, e.g. NDAs and what they are allowed to say publicly).
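(personal, emotional reflection)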
On a personal note, the past few days have been pretty tough for me. I noticed I took the negative feedback pretty hard.
I hope we have demonstrated that we are acting in good faith, willing to update and engage rigorously with feedback and criticism, but some of the comments made me feel like people thought we were trying to be deceptive or mislead people. It's pretty difficult to take that in when it's so far from our intentions.
We try not to let the fact that our posts are anonymous become an excuse to be less rigorous, but sometimes it feels like people don't realize that we are people too. I think comments might be phrased differently if we weren't anonymous.
I think it's especially hard given that this post took many weekends to complete, and we've invested several hours this week in engaging with comments, which is a tough trade-off against other projects.
I want to say I really sympathize with the feelings you've expressed. I really get the sense when reading your writing that you're trying to do the EA thing, and I can't help but love that.
I also want to express some appreciation for what you are doing. I am really glad to see this series being posted and I think it is generating a lot of useful conversation. <3
I wouldn't take the negative feedback too seriously; people can get tribal about topics like this, and by my lights the quality of the arguments they presented seemed low.
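I really liked and appreciated both of your posts. Please keep writing them, and I hope that future feedback will be less sharp.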
I had a kind of mixed reaction to your post, which I felt quite sad about because I've been considering writing up my own post with my own substantial concerns about Conjecture. I would be happy to provide feedback on any future posts of yours and would love to help you with your mission.
I think good critique posts are really essential for a healthy AI Alignment field, and I really deeply appreciate the effort you put into your posts. I also know how hard it can be to deal with the pushback to critiques like this, and am really sorry things feel stressful to you.
I personally disagree quite strongly with your critiques of both Redwood and Conjecture, and also at a meta-level feel like a bunch of things are off about those critiques, but I also think that, especially post-FTX, I really want to see more people poke at organizations in the space, and also discuss things like character evidence for prominent figures in EA/Rationality/AI Alignment, which I think was the most important section of your Conjecture post.
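Thanks for your offer to help, Oli, we really appreciate it. We'll reach out via DM.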
I upvoted because I think these critiques are really healthy, but the visceral feeling of reading this post was quite different from the first one. This one feels more judgemental on a personal level and gave me information that felt too privacy-violating, though I can't quite articulate why. A lot of it feels like dunks on Conjecture for being young, ambitious, and for failing at times (I will note I know this is not the core of the critique, it just FEELS that way).
I just do not feel like the average forum user is in a place where we can adjudicate the interpersonal issues named in the Conjecture post. I also feel confused about how to judge a VC-funded entity, given that, as both the critique and the response note, the evidence is often informal texts and Slack channel messages.
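Thanks for sharing your experience, this kind of information is really helpful for us to know.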
While we're taking a short break from writing criticisms, I (the non-technical author) was wondering if people would find it valuable for us to share (brief) thoughts on what we've learnt so far from writing these first two critiques, such as how to get feedback, how to balance considerations, anonymity concerns, things we wish were different in the ecosystem to make it easier for people to provide criticisms, etc.
We're especially keen to write for the audience of those who want to write critiques.
Keen to hear what specific things (if any) people would be curious about.
We're always open to providing thoughts / feedback / inputs if you are trying to write a critique. I'd like to try and encourage more good-faith critiques that enable productive discourse.
Hi Omega, I'd be especially interested to hear your thoughts on Apollo Research, as we (Manifund) are currently deciding how to move forward with a funding request from them. Unlike the other orgs you've critiqued, Apollo is very new and hasn't received the requisite >$10m, but it's easy to imagine them becoming a major TAIS lab over the next years!
I love this series and I'm sorry to see that you haven't continued it. The rapid growth of AI Safety organizations and the amount of insider information and conflicts of interest is kind of mind boggling. There should be more of this type of informed reporting, not less.
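I'd be interested to read about what you've learnt so far from writing these critiques.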
Our next critique (on Conjecture) will be published in 2 weeks.
The critique after that will be on Anthropic. If you'd like to be a reviewer, or have critiques you'd like to share, please message us or email anonymouseaomega@gmail.com.
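Quick update: The post is now live!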
Some quick thoughts from writing the critique post (from the perspective of the main contributor/writer without a TAIS background)
If you're a non-subject-matter expert (SME) who can write, but you know that other SMEs have good/thoughtful critiques, I think it's worth sitting down with them and helping them write those critiques up. Often SMEs lack the time and energy to write a critique. Not being an SME gave me a bit of an outsider's perspective, and I pushed back more on pieces that weren't obvious to non-technical people, which I think made some of the technical critiques more specific.
Overall, we are all really happy with the response this post has gotten, the quality of critiques/comments, and the impact it seems to be making in relevant circles. I would be happy to give feedback on others' critiques if they share similar goals (reducing information asymmetry, genuinely truth-seeking).
Writing anonymously has made this post better quality, because I feel less ego-attached to the critiques we made and feel like I can be more in truth-seeking mode rather than worrying about protecting my status/reputation. On the flip side, we put a lot of effort into this post and I feel sad that this won't be recognized, because I'm proud of this work.
Things we will change in future posts (keen to get feedback on this!)
We will have a section which states our bottom-line opinions very explicitly and clearly (e.g. org X should receive less funding, we don't recommend people work at org Y) and then cites which reasons we think support each critique. A handful of comments raised points that we had thought about but hadn't made clear on the page. I feel a little hesitant to state the bottom-line view because I worry people will think we are being overly negative, but I think if we can communicate our uncertainties and caveat them, it could be okay.
There were several contributors to this post. I think (partly due to being busy, time constraints, and not wanting to delay publishing or be bottlenecked on a contributor getting back to me) I didn't scrutinize some contributions as thoroughly as I should have prior to publishing. I will aim to avoid that in future posts.
I will be sharing all future drafts with 5-10 other SME reviewers (both people we think would agree and people we think would disagree with us) prior to publication, because I think the comments on this post improved it substantially.
(Minor) I would add a little more context on the flavor of feedback we are aiming to get from the org we are critiquing.
(written by the non-technical contributor to the critique posts)
One challenge of writing critiques (understandably) is that they are really time-consuming, and my technical co-author has a lot of counterfactual uses of their time. I have a lot of potential posts that would be pretty valuable, but a lot of the critiques need to be fleshed out by someone more technical.
I would love to find someone who has a slightly lower opportunity cost but still has the technical knowledge to make meaningful contributions. It's hard to find someone who can do that, cares deeply about the effects of high-effort critiques on the broader EA/TAIS ecosystem, and can be trusted enough for us to de-anonymize ourselves to them.
If you'd like to help edit our posts (including copy-editing for basic grammar, but also tone & structure suggestions and fact-checking/steel-manning), please email us at anonymouseaomega@gmail.com!
We'd like to improve the pace of our publishing and think this is an area where external perspectives could help us:
Make sure our content & tone are neutral & fair
Save us time so we can focus more on research and data gathering