Going to take a stab at this (from my own biased perspective). I think Peter did a very good job, but Sarah was right that this didn't quite answer your question. It's difficult to pin down what counts as 'generating ideas' vs rediscovering old ones; many new philosophies/movements can generate ideas, but they're often bad ones. And again, EA is a decentral-ish movement, so it's hard to get centralised/consensus statements on it.
With enough caveats out of the way, and very much from my biased PoV:
"Longtermism" is dead - I'm not sure if someone has gone 'on record' on this, but I think longtermism, especially strong longtermism, as a driving idea for effective altruism is dead. Indeed, the extent to which AI x-risk and longtermism went hand-in-hand is gone, because AI x-risk proponents increasingly view it as a risk that will play out over years and decades, not centuries and millennia. I don't expect future EA work to be justified under a longtermist framing, and I think this reasonably counts as the movement 'acknowledging it was wrong' in some collective-intelligence sort of way.
The case for Animal Welfare is growing - In the last 2 years, I think the intellectual case for Animal Welfare as a leading, and perhaps the, EA cause has actually strengthened quite a bit. Rethink published their Moral Weight Sequence, which has influenced much subsequent work; see Ariel's excellent pitch for Animal Welfare to dominate neartermist spending. On radical new ideas to implement, Matthias' pitch for screwworm eradication sounded great to me, let's get it happening! Overall, Animal Welfare is good, and EA continues to be directionally ahead on it, and the source of both interesting ideas and funding in this space, in my non-expert opinion.
Thorstad's Criticism of Astronomical Value - I'm specifically referring to David's 'Existential Risk Pessimism' sequence, which I think is broadly part of the EA-idea ecosystem, even if written from a critical perspective. The first few pieces, which argue that longtermists should actually have low x-risk probabilities, and vice versa, were really novel and interesting to me (and I wish more people had responded to them). I think openly criticising x-risk arguments and deferring less is hopefully becoming more accepted, though it may still be a minority view amongst leadership.
Effective Giving is Back - My sense is that, over the last few years, and probably spurred by the FTX collapse and fallout, Effective Giving is back on the menu. I'm not particularly sure why it left, or to what extent it did, but there are a number of posts (e.g. see here, here, and here) that indicate it's becoming a lot more of a thing. This is sort of a corollary of 'longtermism is dead': people realised that earning-to-give, or even just giving, is still valuable and can be a unifying thing in the EA movement.
There are other things I could mention, but I ran out of time to cover them fully. I think there's a sense that there are not as many new, radical ideas as there were in the opening days of EA - but in some sense that's an inevitable part of how social movements and ideas grow and change.
This looks pretty much right, as a description of how EA has responded tactically to important events and vibe shifts. Nevertheless it doesn't answer OP's questions, which I'll repeat:
Your reply is not about new ideas, or the movement acknowledging it was wrong (except about Bankman-Fried personally, which doesn't seem like what OP is asking about), or new organizations.
It seems important, to me, that EA's history over the last two years is instead mainly the story of changes in funding, in popular discourse, and in the social strategy of preexisting institutions. e.g. the FLI pause letter was the start of a significant PR campaign, but all the *ideas* in it would've been perfectly familiar to an EA in 2014 (except for "Should we let machines flood our information channels with propaganda and untruth?", which is a consequence of then-unexpected developments in AI technology rather than of intellectual work by EAs).
I'm not sure I understand what these questions are looking for well enough to answer them.
Firstly, I don't think "the movement" is centralized enough to explicitly acknowledge things as a whole - that may be a bad expectation. Some individual people and organizations have done reflection (see here and here for prominent examples), though I would agree that there should likely be more.
Secondly, it seems very wrong to me to say that EA has had no new ideas in the past two years. Back in 2022 the main answer to "how do we reduce AI risk?" was "I don't know, I guess we should urgently figure that out", and now there's been an explosion of analysis, threat modeling, and policy ideas - for example, Luke's 12 tentative ideas were basically all created within the past two years. On top of that, a lot of EAs were involved in the development of Responsible Scaling Policies, which is now the predominant risk management framework for AI. And there's much more besides.
Unfortunately I can mainly only speak to AI, as it is my current area of expertise, but there have been updates in other areas as well. For example, at just Rethink Priorities: welfare ranges, CRAFT...
Nit - I'm pretty sure you mean 'overrate'.
So you'd say the major shift is:
Also this seems notable: