
The Unjournal prioritizes quantitative social science research with the potential for global impact. We commission experts to publicly evaluate and rate this work, synthesizing the results. We aim to make rigorous research more impactful and impactful research more rigorous.

You can read more about us on our website. For a more complete description of our processes and theory of change, please visit our Gitbook knowledge base.[1]  

What we’ve achieved so far

We have commissioned the evaluation of 30 papers[2] and built a public database of research that shows potential for impact.[3] See our main output on our PubPub community at unjournal.pubpub.org.[4][5]

We have divided our effort between “scoring goals” and building systems, tools, and teams that allow us to scale our evaluation output while maintaining our quality, credibility, and impact-focus.

This includes a transparent procedure for prioritizing research, managing its evaluation, and rewarding strong work. We’ve configured an evaluation management and publication platform for our output, as well as data dashboards. We have a team of field specialists across a range of cause areas (GH&D, animal welfare, GCR, etc.), an advisory board and management team, and a pool of several hundred qualified evaluators.[6] 

Our public output – and our engagement with hundreds of academics and applied researchers – has helped establish the feasibility of our model and built the credibility of our work. We believe our evaluation packages represent the strongest public, open-access evaluations (~peer-reviews) available anywhere for economics and adjacent fields.[7] These assessments may have influenced the considerations of organizations and funders such as GiveWell, ALLFED, and Open Philanthropy.

We have awarded prizes for the strongest research and the most helpful evaluation work, hosted and participated in several online events and outreach workshops (working with the Center for Open Science (COS), Effective Thesis, BITSS, and others), and shared our model and approach across a range of media.

Pilot and future projects

Targeting “Pivotal Questions”

The Unjournal is now piloting the Pivotal Questions project (read more here), aiming for more direct and observable impact. While our usual approach prioritizes research papers that show potential for impact, here we start by identifying the questions that are most pivotal to key funding and policy decisions, and then sourcing and evaluating research that informs these questions. We are contacting impact- and evidence-driven organizations to identify their highest-value questions and claims, helping them refine and operationalize these, and eliciting their beliefs about them. We will source research that informs these questions, commission targeted expert evaluations of this research, and synthesize the implications of the research and evaluations (also leveraging prediction markets where feasible). We will follow up with the relevant organizations, tracking our impact by considering their responses and measuring their belief updates and potential policy changes.

Independent, collaborative, live evaluations: engaging researchers and academia, promoting robust open science

  • Encouraging Independent Evaluation work (see our pilot initiative here)
  • Live events fostering collaborative evaluation, as well as pivotal question and belief elicitation
    • Working with i4replication.org on this and other initiatives, potentially partnering with their ‘replication games’
  • Engaging with PhD/faculty reading groups[8]

  • Building and curating open tools and guides for evaluation and methodological rigor

Communication, feedback, and interactive (LLM) tools

  • Building platforms to chat with our research and evaluations (we discuss an early model/preview here)
  • Feeding back to researchers
  • These tools can be reused in other contexts (e.g., Open Phil-sponsored living literature reviews)
  • See this brief summary of the proposed project

Funding priorities

Baseline costs and operations (evaluating impactful research)

Our current runway is 12 months, including funding to evaluate 40 more research papers/projects over this period. We are likely to use the first ~$150,000 in further funding to extend our runway by another six months, producing an additional ~20 evaluation packages, as well as enabling some exploration and progress on the initiatives described above.[9]

This funding will support our current organizational size of 1.65 FTE, our technical architecture, compensation for our management team and field specialists, payments to technical, communications, legal, and administrative contractors, team and crowdsourced incentives, compensation and incentives for evaluators, prizes for the most impactful research, and some travel and conference expenses.

We are currently spending 63% of our budget on salaries, 1.4% on management team compensation, 20% on evaluation expenses, 4.4% on prioritization, 7.8% on contractor-related expenses, 0.3% on tech, and 1.4% on conference expenses.

It costs $16,840 to extend our basic operations by a month, and $285,780 to extend our operations by another year. This includes 1.65 FTE, as well as contractor-related and tech expenses. Running the project under austerity, covering only the core activities and expenses needed to support ~40 evaluation packages per year at our current standards, would cost $7,514 per month.

As noted, this includes evaluating an additional 40 papers per year.[10] The incremental evaluation cost is approximately $1,560 per paper, comprising evaluation management ($400), payments to two evaluators ($400 x 2), additional incentives ($150 x 2), and clerical support ($60).

Using further funding (an additional ~$250k over one year)[11]

Additional funding (above $150,000) would enable the following projects and extensions.[12] The projects below are of similar priority; many will have diminishing returns and could be funded partially. Marginal funds are likely to be divided across these projects.

  1. Building interactive LLM tools: Develop tools to help readers ask questions of the research and evaluation packages. Extend this to a larger ‘impactful research’ database, offering tools to aggregate anonymous queries and feedback to be shared with the researchers and enhance their work. We will share these tools with other aligned orgs (e.g., to support ‘living literature reviews’). Estimated cost: $10k-$50k to develop, under $1k per year to maintain
  2. Enhance and extend the Pivotal Questions project: Help organizations operationalize action-relevant target questions, elicit beliefs and set up prediction markets, source and commission the evaluation of relevant research, and synthesize the results. Estimated cost: $5,025 per pivotal question, including the costs of evaluating the underlying papers. We expect to be able to cover up to about 10 pivotal questions over a 12-month period, for roughly $50k in total.
  3. Hire a part-time communications director: Help us communicate our outputs and our approach to academics, EAs, partner organizations, research users, and stakeholders. Produce explainer videos and other media.[13] Estimated cost: ~$30k-$60k per year for 0.5 FTE

  4. Hire one or more part-time research fellows: Coordinate research prioritization, evaluation, and communication for cause areas/fields such as GCR/AI governance.[14][15] Estimated cost: $10k-$20k per 0.15 FTE role. See the previous job description.

  5. Expand our covered fields/causes: Hire additional research fellows to cover:
    • Quantitative political science (voting, lobbying, attitudes), political economy
    • Impactful legal research (animal welfare, AI liability, etc.)
    • AI governance and safety (going beyond quant. social science)
  6. Increase compensation for evaluators: Attract greater expertise and care from mid-career academics. We currently pay ~$400 on average; we could increase this to a range of $400-$1,500, on a sliding scale depending on expertise, demonstrated previous evaluation work, the importance of the evaluation, etc. Estimated cost: ~$1,000 more per evaluation package on average, ~$40k per year overall
  7. Fund an academic co-manager/research lead. Estimated cost: ~$20k-$50k+ for ~10 hours/week[16]
  8. Support live events and workshops: Foster collaborative evaluation, build networks and visibility. Estimated cost: ~$5k-$10k per event (guesstimate)[17]
  9. Further clerical, ops, and light communication support: Free up David Reinstein’s time to focus more on research-linked work, such as curating evaluation and research methods guides. Estimated cost: ~$10k-$80k

You can see our worksheet listing our menu of potential projects (a work in progress).

Why might you want to support our work?

You may have doubts about the credibility of EA-aligned research and the research driving EA prioritization. Or perhaps you think this research is high-value and being neglected by academics, governments, and journalists. Maybe you want to see this research get more external scrutiny to help it become more rigorous and respected.

Maybe your organization produces this research. You want to get feedback and know if it’s credible, but submitting to traditional journals is slow and frustrating. And the journals that the relevant experts review for may "desk-reject" your research as “too niche and distant from other research in the field.”

Perhaps you see substantial potential in non-EA academic research, if you could only trust it, and if the authors clearly reported the outcomes you care about. Or you think that if it were focused in a slightly different direction it could be much more impactful. And it takes too long to learn “what journal it gets published in,” and even then you don’t know whether that means the results are deemed credible, or whether the work is just seen as highly innovative and clever.

You like the idea of rigorous research and peer review, but you think it’s a frustrating and outdated process. You are in academia or you might want to pursue it, but the way ‘publish or perish’ career incentives work just doesn't seem aligned with doing highly credible, impactful research.

 

You may want to fund (or otherwise support) The Unjournal…

To help improve impactful research

  • Providing credible low-cost external expert feedback, including for EA-driven work, especially in our applied stream
  • Bringing technical rigor and quality control, engaging with specific academic expertise  
  • Raising awareness and the credibility of this work, especially in nascent fields (e.g., the economics of animal welfare)

To help impact-focused organizations and funders

  • Better access academic and technical research, and understand its relevance to their pivotal questions
  • More quickly and fully understand how credible and useful this research is

To improve academic research and make it more useful

  • Encouraging academic researchers to consider measurable impact
  • Providing tools (e.g., reporting costs per impact) to make their work practically useful to EA and impact-focused organizations and funders
  • Making the research publication and evaluation process better, more open, and more aligned with “doing better and more useful research”

To recap, The Unjournal has the potential to improve the robustness of impactful research across the quantitative social sciences, orient mainstream research towards impact, and improve the research evaluation process. The Unjournal’s open evaluations can help improve research that informs global priorities, help communicate this work better, and shape high-impact funding and policy decisions.

How to donate

Please reach out to contact@unjournal.org and we will be happy to accept your donation.

Earmarking your donation

If you wish, we are generally happy to accept donations earmarked for a particular part of our agenda, activities, or projects. We can discuss how to do this in a meaningful way, so that your donation has an actual effect on our activities relative to the counterfactual.

If you wish to donate to fund a particular research or cause area, we can generally arrange this, as long as it is broadly in line with our mission and our competencies. In such a case, we will be careful to make clear in our output that ~"this research was prioritized because of a specific grant from [an anonymous or named donor]". If the funder is itself involved in the research, we will acknowledge this as well, and may add a caveat about it to the evaluation and ratings output.

 

Other ways to support The Unjournal's work

The Unjournal (see our In a nutshell) wants your involvement, help, and feedback. We offer rewards and strive to compensate people for their time and effort.

  1. Join our team: Complete this form (about 3–5 min) to apply for our...
    1. Evaluator pool: to be eligible to be commissioned and paid to evaluate and rate research, mainly in quantitative social science and policy
    2. Field specialist teams: help identify, prioritize, and manage research evaluation in a particular field or cause area  
    3. Management team or advisory board, to be part of our decision-making
  2. Suggest research for us to assess using this form. We offer bounty rewards. Submit your own research here or by contacting contact@unjournal.org.
  3. Do an Independent Evaluation to build your portfolio, receive guidance, and be eligible for promotion and prizes. See the details at Independent evaluations (trial).
  4. Suggest "Pivotal questions" for us to focus on
  5. Give us feedback: Is anything unclear? What could be improved? Email contact@unjournal.org. We will offer rewards for the most useful suggestions.
  1. ^

    This knowledge base, hosted in Gitbook, also includes a chatbot, allowing you to ask a range of questions and get sourced and linked answers.

  2. ^

     This includes 55 evaluations accompanied by structured ratings, 11 author responses (including responses from prominent scholars and teams involving Nobel laureates), and, in many cases, detailed syntheses and discussions from the evaluation managers.

  3. ^

    This currently includes about 90 papers. It is updated regularly and automatically, helping our users keep track of our pipeline and inform their own research and policy plans.

  4. ^

    We also integrate our output into the bibliometric ecosystem (DOIs, REPEC, Google Scholar, etc.) and actively promote it on social media. We want to maximize awareness, engagement, and our potential to change entrenched peer review systems.

  5. ^
  6. ^

    Over 100 people in the pool have substantial relevant research experience. Most have PhD degrees or are pursuing PhDs (although this is not a strict requirement). Fill out this form if you wish to apply to join the pool, or join our team and efforts in other ways.

  7. ^

    Open peer review has barely taken hold in economics, and where it has been introduced (e.g., at Economics E-Journal) it has not attained high status, and the public reviews have often been cursory. The Unjournal requests and incentivizes a high standard of care and expertise, makes the evaluations more prominent, improves their formatting and communication, and elicits meaningful and comparable ratings and metrics.

  8. ^

     Research programs (in economics, psychology, and more) at MIT, Oxford, etc. organize and run field-specific reading groups involving PhD students and faculty (e.g., see this one on economic growth and misallocation at Stanford). They read and share critiques of recent work, sometimes engaging with the authors. We hope to work with these groups to leverage their expertise and make the insights public, while keeping them anonymous. We have made some initial outreach.

  9. ^

     Perhaps three “pivotal questions”, outreach to 1-2 PhD reading groups, and $5,000 set aside towards building LLM tools. This can be done by reallocating some of our time and tech contracting funds that were previously used towards building our existing platforms.

  10. ^

     We suspect we could extend this to 60 per year at an additional cost of about $31,500 for 20 additional papers/packages.

  11. ^

     Taking the rough midpoint of each of the costs specified below yields roughly $250k over a single year, on top of the $150k mentioned above.

  12. ^

    We are applying for at least one grant of about this size; e.g., one of these largely focuses on Pivotal Questions but would also fund some of our base operations. If our application is successful, further funds/donations are likely to go to the activities below.

  13. ^

     See the previous job description for context.

  14. ^

     This role would involve:

    • Identifying and characterizing specific research (and research themes and pivotal questions) in the area of focus, helping The Unjournal prioritize work for evaluation
    • Summarizing the importance of this work, its relevance to global priorities and connections to other research, its potential limitations, and specific issues meriting evaluation
    • Organizing and engaging the relevant field specialist team, guiding discussion, meetings, and prioritization voting
    • Helping build and organize the pool of evaluators in this area, offering them guidance and methodological resources
    • Serving as an evaluation manager, or assisting others in this role
    • Synthesizing and summarizing the research, evaluations, and practical implications

  15. ^

     Catastrophic risks, economics of AI governance and safety, social impact of AI/technology, GH&D, Animal Welfare

  16. ^

     We generally target $57.50/hour compensation for people involved in the research parts of our work. However, getting a more substantial commitment from an academic may require a ‘teaching buyout’, which can be very costly (see one example here).

  17. ^

     Simple inferences from grants made to i4replication.org suggest their events may cost $20k to $40k each. However, these grants likely cover substantial overhead and fixed organization costs, suggesting that the incremental event costs should be lower. We suspect The Unjournal’s evaluation event costs will be on the lower end (perhaps $5k or less), because we do not need to engineer data availability, we can avoid university overheads, and we can pool some expenses with i4replication for joint events.

Comments

Executive summary: The Unjournal is seeking funding to continue and expand its work of commissioning expert evaluations of impactful quantitative social science research, having already evaluated 30 papers and built systems for scaling while maintaining quality.

Key points:

  1. Current achievements: 30 papers evaluated, public database created, evaluation systems built, and network of hundreds of qualified evaluators established
  2. Core funding need: $285,780/year for basic operations to evaluate 40 papers annually ($1,560 per paper)
  3. New "Pivotal Questions" project aims to directly impact funding/policy by evaluating research that informs key organizational decisions
  4. Additional funding priorities ($250k/year) include: building LLM tools, expanding Pivotal Questions, hiring communications director and research fellows
  5. Value proposition: improves credibility of impact-focused research, provides faster/better feedback than traditional peer review, helps funders assess research quality
  6. Offers earmarked donations and various ways for people to contribute beyond funding (evaluators, specialists, advisors)

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

A good summary. Note that the marginal per-paper cost does not include our overhead, communications, building our network and tools, etc.
