I'm the Founder and Co-director of The Unjournal. We organize and fund public, journal-independent feedback, rating, and evaluation of hosted papers and dynamically presented research projects. We will focus on work that is highly relevant to global priorities (especially in economics, social science, and impact evaluation), and we aim to encourage better research by making it easier for researchers to get feedback and credible ratings on their work.
Previously I was a Senior Economist at Rethink Priorities, and before that an Economics lecturer/professor for 15 years.
I'm working to impact EA fundraising and marketing; see https://bit.ly/eamtt
And on projects bridging EA, academia, and open science; see bit.ly/eaprojects
My previous and ongoing research focuses on the determinants and motivators of charitable giving (propensity, amounts, and 'to which cause?'), drivers of and barriers to effective giving, and the impact of pro-social behavior and social preferences in market contexts.
Podcasts: "Found in the Struce" https://anchor.fm/david-reinstein
and the EA Forum podcast: https://anchor.fm/ea-forum-podcast (co-founder, regular reader)
Twitter: @givingtools
I'm trying to understand... what does "exempt" mean in the phrase "exempt, salaried employee"?
Do you mean that your salary is part of the expenses of a tax-exempt nonprofit, so people who donate to PauseAI (partly to pay your salary) can deduct this from their taxes if they itemize their returns? I'm also trying to understand the connection between this and the idea of claiming pro-bono hours. Thanks!
I just wanted to make sure The Unjournal was eligible, @Toby Tremlett🔹. We made this post and tagged it, but you state, "only projects that post or answer + message me are eligible for next week's Donation Election". I hadn't seen that earlier, so I'm messaging you now. (Maybe other orgs also overlooked that?)
The Unjournal (unjournal.org) commissioned the evaluation of one of the biosecurity-relevant papers you mention (Barberio et al., 2023). See our evaluation package here, with links to each evaluation within.
The evaluators generally agree about the importance and promise of this work, but also express substantial doubts about its credibility and usefulness. (They also make specific suggestions for improvements and extensions, as well as requests for clarification.) The evaluation manager echoes this, noting that the “limitations of the paper as it stands make it far less valuable than it could be.”
Note that The Unjournal has commissioned evaluations of Naidu et al (2023). See the summary and ratings here, and the linked evaluation reports. The second report offered substantial critiques and questions about the methods, interpretations, and context.
NB: The Unjournal commissioned two expert evaluations of this work: see the full reports and ratings here.
From the first evaluation abstract (anonymous):
Pro: Raises important points and brings them to wider attention in simple language. Useful for considering individual RCTs.
Con: Not clear enough about intended use cases and framing. ... Guidelines need more clarity and precision before they can be genuinely used. I think best to reframe this as a research note, rather than a ready-to-use ‘guideline’. Unclear whether this is applicable to considering multiple studies and doing meta-analysis.
From the second evaluation abstract (by Max Maier):
The proposal makes an important practical contribution to the question of how to evaluate effect size estimates in RCTs. I also think overall the evaluation steps are plausible and well justified and will lead to a big improvement in comparison to using an unadjusted effect size. However, I am unsure whether they will lead to an improvement over simpler adjustment rules (e.g., dividing the effect size by 2) and see serious potential problems when applying this process in practice, especially related to the treatment of uncertainty.
I'd love to know if this work is being used or followed up on.
We're considering pushing this further and investing in a more bespoke, shareable, and automated platform:
Improving Technology and User Experience: We want to build better tools for scholars, policymakers, journalists, and philanthropists to engage with impactful research and The Unjournal’s work. This includes developing interactive LLM tools that allow users to ask questions about the research and evaluation packages, creating a more interactive and accessible experience. We also want to extend this to a larger database of impactful research, providing tools to aggregate and share users’ insights and discussion questions with researchers and beyond.
This may take the form of research/evaluation conversational notebooks, inspired by tools like NotebookLM and Perplexity. These notebooks would be automatically generated from our content (e.g., at unjournal.pubpub.org) and continuously updated. We envision:
- Publicly shareable notebooks, also enabling users to share their notebook outputs
- One notebook for each evaluation package, as well as notebooks covering everything in a particular area or related to an identified pivotal question.
- A semantic search tool for users to query "what has The Unjournal evaluated?" across our entire database (see the illustrative sketch at the end of this section)
- Embedded explanations of the evaluation context, including The Unjournal’s goals and approach
- Clear sourcing and transparent display of sources within both our content and basic web data, with academic citation and linking support
- Fine-tuning and query engineering to align the explanations with our communication style (and, in particular, to clarify which points were raised by Unjournal managers versus independent evaluators, versus authors)
Beyond this, we aim to:
- Incorporate a wider set of research (e.g., all the work in our prioritized database)
- Leverage users’ queries and conversations (with their permission) to provide feedback to researchers and practitioners on
- Frequently-asked-questions and requests for clarification, especially those that were not fully resolved
- Queries and comments suggesting doubts and scope for improvement
- Ways users are approaching and incorporating the research in their own practice
- … and similarly, to provide feedback to evaluators, and feedback that informs our own (Unjournal) approaches
Ultimately, this tool could become a "killer app" for conveying questions and feedback to researchers to help them improve and extend their work. In the long term, we believe these efforts could contribute to building future automated literature review and evaluation tools, related to the work of Elicit.org, Scite, and Research Rabbit.
We will support open-source, adaptable software. We expect these tools to be useful to other aligned orgs (e.g., to support ‘living literature reviews’).
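As a rough illustration of the semantic search component mentioned above, here is a minimal sketch in Python. It assumes a small in-memory corpus of evaluation-package summaries (in practice these would be drawn from unjournal.pubpub.org) and uses the open-source sentence-transformers library; the model name and the package entries are placeholders, not part of our current stack.

```python
# Minimal sketch of a semantic "what has The Unjournal evaluated?" search.
# The corpus entries below are hypothetical stand-ins; a real version would
# pull evaluation-package summaries from unjournal.pubpub.org.
import numpy as np
from sentence_transformers import SentenceTransformer

packages = [
    {"title": "Example package A", "summary": "Evaluation of an RCT on cash transfers ..."},
    {"title": "Example package B", "summary": "Evaluation of a biosecurity cost-benefit model ..."},
]

# Model choice is illustrative only.
model = SentenceTransformer("all-MiniLM-L6-v2")
corpus_embeddings = model.encode([p["summary"] for p in packages], normalize_embeddings=True)

def search(query: str, top_k: int = 3):
    """Return the packages whose summaries are most similar to the query."""
    query_embedding = model.encode([query], normalize_embeddings=True)[0]
    scores = corpus_embeddings @ query_embedding  # cosine similarity (vectors are normalized)
    ranked = np.argsort(-scores)[:top_k]
    return [(packages[i]["title"], float(scores[i])) for i in ranked]

print(search("Has The Unjournal evaluated anything on cash transfers?"))
```

A production version would add persistent vector storage and link results back to the full evaluation packages, ratings, and evaluator reports.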
Project Idea: 'Cost to save a life' interactive calculator promotion
What about making and promoting a ‘how much does it cost to save a life’ quiz and calculator?
This could be adjustable/customizable (in my country, around the world, for an infant/child/adult, counting ‘value-added life years’, etc.) … and we could try to make it go viral (or at least bacterial), as with the ‘How Rich Am I?’ calculator.
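To make the core arithmetic concrete, here is a minimal sketch in Python. The cost-per-life figure is a placeholder for illustration only (not a GiveWell estimate); a real calculator would draw vetted, intervention- and country-specific cost-effectiveness figures from a maintained source and expose the customization options above.

```python
# Minimal sketch of the calculator's core arithmetic, assuming a single
# cost-per-life-saved parameter. The figure below is a placeholder, NOT an
# actual GiveWell estimate.
PLACEHOLDER_COST_PER_LIFE_USD = 5000  # illustrative only

def lives_saved(donation_usd: float, cost_per_life_usd: float = PLACEHOLDER_COST_PER_LIFE_USD) -> float:
    """Estimated lives saved for a given donation, under the chosen cost-per-life figure."""
    return donation_usd / cost_per_life_usd

def cost_to_save_n_lives(n: float, cost_per_life_usd: float = PLACEHOLDER_COST_PER_LIFE_USD) -> float:
    """How much it would cost to save n lives, under the same assumption."""
    return n * cost_per_life_usd

if __name__ == "__main__":
    print(f"$1,000 -> ~{lives_saved(1000):.2f} lives (placeholder figure)")
    print(f"10 lives -> ~${cost_to_save_n_lives(10):,.0f} (placeholder figure)")
```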
The case
GiveWell has a page with a lot of technical detail, but it’s not compelling or interactive in the way I suggest above, and I doubt they market it heavily.
GWWC probably doesn't have the design/engineering time for this (not to mention refining it for accuracy and communication). But if someone else (UX design, research support, IT) could do the legwork, I think they might be very happy to host it.
It could also mesh well with academic-linked research, so I may have some ‘Meta academic support ads’ funds that could work with this.
Tags/backlinks (~testing out this new feature)
@GiveWell @Giving What We Can
Projects I'd like to see
EA Projects I'd Like to See
Idea: Curated database of quick-win tangible, attributable projects