Thanks for taking the time to run and analyze this survey.
Are there plans to include questions about income and/or financial stability in next year’s survey?
Rationale: I believe this data would be valuable in giving individuals a clearer understanding of the financial security of others in the EA community, and it could help newcomers assess whether the advice they receive is relevant to their own financial situation. Many recommendations and norms within EA - such as unconventional career choices, significant donation pledges, or risk-taking in pursuit of impact - can have vastly different implications depending on who's making the recommendation or reinforcing the norms.
If a significant portion of the community has financial security, it’s possible that commonly shared advice assumes a level of stability that not all newcomers have. Understanding the financial realities of EA members could help provide more contextually appropriate guidance and ensure that discussions around impact, risk, and career planning are inclusive of people from diverse economic backgrounds.
Would love to hear your thoughts on this!
TLDR:
CAPM-certified project manager, software developer, and process improvement engineer obsessed with building systems, processes, and tools to help teams and organizations thrive. Keen on landing a role in a mid-size startup (100-1000 people) where I can serve as a bridge between technical and non-technical stakeholders within a high-performing, high-integrity team.
Skills & background:
As a technical generalist who excels in dynamic and innovative environments, I love blending information technology, human-centered design, systems thinking, and compassionate leadership to craft and lead change initiatives that produce lasting positive outcomes. My career journey includes a rich array of roles spanning mid-size startups, non-profits, and small companies, where I've:
In the EA sphere, I have served as a community organizer of EA Philadelphia for the past two years and briefly worked as a contractor for Impactful Animal Advocacy in a product management / tech administration capacity.
Location/remote:
Philadelphia, Pennsylvania, USA (also open to remote)
Availability & type of work:
Full-time, contract. Available to start immediately.
Resume/CV/LinkedIn:
https://quinnmchugh.net/resume
https://linkedin.com/in/quinnpmchugh/
Email/contact:
qpmchugh@gmail.com
https://cal.com/quinnm
Other notes:
Cause agnostic, but especially interested in institutional decision-making, animal advocacy, and meta-EA.
My work is characterized by the following 3 pillars:
Great list, Kyle! Thanks for sharing. :)
I wasn't aware of The Life You Can Save's Helping Women & Girls Fund until I read your post. It's wonderful to know something like this exists.
Hi Rakafet,
Welcome to the EA Forum!
I never knew the Abstinence Violation Effect had a name - I think that's something I'll have to add to my lexicon. :)
While reading through your post, I was having a bit of trouble understanding your arguments and the evidence behind why you think this intervention is particularly important and neglected.
If I'm understanding correctly, your argument is:
More funding should be directed towards providing vegan food to soldiers, who experience difficulties maintaining a vegan diet. By providing this support, we could reduce the likelihood of soldiers falling victim to the Abstinence Violation Effect and abandoning their vegan diet altogether, which could affect ~4000 animals over the span of a given soldier's life.
Would you say that's accurate?
I recently came across this great introductory talk from the Center for Humane Technology, discussing the less catastrophic, but still significant risks of generative large language models (LLMs). This might be a valuable resource to share with those unfamiliar with the staggering pace of AI capabilities research.
A key insight for me: Generative LLMs have the capacity to interpret an astonishing variety of languages. Whether those languages are traditional (e.g. written or spoken English) or abstract (e.g. images, electrical signals in the brain, Wi-Fi traffic, etc.) doesn't necessarily matter. What matters is that the events in that language can be quantified and measured.
While this opens up the door to numerous fascinating applications (e.g. translating animal vocalizations to human language, enabling blind individuals to see), it also raises some serious concerns regarding privacy of thought, mass surveillance, and further erosion of truth, among others.
Hi EAlly,
It seems like there are numerous questions to unpack here. If I'm understanding you correctly, you're generally curious about how others have sought to increase their impact through an EA lens, given a background in IT. Is that right?
If so, I think your questions might be better answered by searching for, reaching out to, and scheduling informational interviews with people working at the intersection of EA and IT. I previously came across a helpful framework for doing this sort of thing here: [Webinar] The 2-Hour Job Search - YouTube
From one generalist IT person to another, would it be helpful to hop on a call to discuss your uncertainties? https://calend.ly/quinnpmchugh/meet
While I may not have a lot to offer in terms of career guidance, I can certainly relate to your position. My background is in mechanical engineering, but I currently do a mix of IT, operations, project management, and software engineering work. Professionally, I am interested in moving into project management full-time, but am also very interested in leveraging my IT skills to improve the movement's overall coordination and intellectual diversity through projects like EA Explorer.