Joseph Richardson

Economics PhD student at Lancaster University and analyst at SoGive.


Comments
Hello Charlie. This looks like an interesting research question. However, I do have a few comments on the interpretation and statistical modelling. Both statistical comments are on subtle issues, which I would not expect an undergraduate to be aware of. Many PhD students won't be aware of them either!

On interpretation with respect to the Easterlin paradox: your working model, as far as I can tell, assumes that quit rates decrease in a person's latent happiness, but not in their reported happiness. However, if shifts in reporting are caused by social comparisons (i.e., all the other jobs or relationships you see around you improving), then rescaling from that direction no longer implies a flatter relationship, as the quality of the jobs or relationships available upon quitting has also increased. That said, your results are indicative that other forms of rescaling, e.g., changes in culture, are not occurring. I think this distinction is important for interpretation.

The first of the statistical comments is that these changes in the probability of making a change could also be explained by a general increase in the ease of, or tendency towards, getting a hospital appointment/new job. This stems from the non-linearity of the logistic function. Logistic regression models a latent variable that determines someone's tendency to quit and converts this into a probability. At low probabilities, an increase in this latent variable has little effect on the probability of separation because the curve mapping the latent variable into probabilities is flat in that region. However, the same increase at values of the latent variable that give probabilities closer to 0.5 will have a big effect on the probability of separation, as the curve is steep there. As your research question is conceptual (you're interested in whether life satisfaction scales map to an individual's underlying willingness to quit in the same way over time), rather than about predicting the probability of separations, the regression coefficients on time interacted with life satisfaction should be the parameter of interest rather than the probabilities. These effects can often go in different directions. A better explanation of this issue, with examples, is available here: https://datacolada.org/57. I also don't know whether results will be sensitive to assuming a logit functional form relative to other reasonable distributions, such as a probit.
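
As a minimal sketch of this point (entirely simulated data with made-up parameter values, not your specification):

```python
# Simulated example: the latent-scale effect of satisfaction on quitting is
# held IDENTICAL across two periods; only the baseline quit rate rises.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200_000
satisfaction = rng.normal(size=n)   # standardised life satisfaction
late = rng.integers(0, 2, size=n)   # 0 = early period, 1 = late period

# Latent index: same satisfaction coefficient (-0.5) in both periods; the
# intercept shift (+2.0) means quitting simply became easier/more common.
index = -3.0 + 2.0 * late - 0.5 * satisfaction
quit = rng.binomial(1, 1 / (1 + np.exp(-index)))

X = sm.add_constant(np.column_stack([late, satisfaction, late * satisfaction]))
fit = sm.Logit(quit, X).fit(disp=False)
print(fit.params)  # interaction coefficient ~ 0: no change on the latent scale

# Average marginal effect of satisfaction on Pr(quit), by period:
for t in (0, 1):
    p = 1 / (1 + np.exp(-(-3.0 + 2.0 * t - 0.5 * satisfaction)))
    print(t, (-0.5 * p * (1 - p)).mean())  # logit AME: beta * p * (1 - p)
# The probability-scale "effect" is several times larger in the late period,
# even though the latent relationship is unchanged.
```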

Another, more minor, comment is that you need to be careful when adding individual fixed effects to models like this, which you mentioned you did as a robustness check. In non-linear models such as a logit, doing this often creates an incidental parameters problem that makes your estimates inconsistent. In this case, you would also be dealing with the issue that it is impossible to separately identify age, time, and cohort effects: holding the individual constant, the coefficient on the change in time interacted with life satisfaction would include both the time effect you are interested in and an effect of ageing that you are not.
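
To illustrate the incidental parameters problem, here is a simulation under assumed values: a naive dummy-variable "fixed effects" logit with few observations per person overstates the slope (a conditional logit would avoid this, but the dummy approach does not).

```python
# Hypothetical simulation (not the author's data) of the incidental
# parameters problem in a short-panel logit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_people, n_periods, beta = 500, 4, 1.0

alpha = rng.normal(size=n_people)                  # individual effects
x = rng.normal(size=(n_people, n_periods))
p = 1 / (1 + np.exp(-(alpha[:, None] + beta * x)))
y = rng.binomial(1, p)

# Keep only people whose outcome varies (others' dummies are not estimable).
keep = (y.sum(axis=1) > 0) & (y.sum(axis=1) < n_periods)
x, y = x[keep], y[keep]

# Unconditional "fixed effects" logit: one dummy per remaining person.
n_kept = x.shape[0]
dummies = np.kron(np.eye(n_kept), np.ones((n_periods, 1)))
X = np.column_stack([x.ravel(), dummies])
fit = sm.Logit(y.ravel(), X).fit(disp=False, method="lbfgs", maxiter=1000)
print(fit.params[0])  # noticeably above the true beta = 1 when T is this small
```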

I'd be happy to discuss any of these issues with you in more detail.

I'd add that your Veganuary participation numbers are likely far too generous. They come from polling the general public on participation and then scaling those numbers up to a broader population. However, opinion polling of this type famously overestimates the share of the population with rare characteristics due to mistakes, trolling, or, in this case, potentially social desirability bias.

Indeed, I have the impression that surveys estimating the share of vegans in the US population often get figures of about 5%, which is around the lizardman's constant. Objective purchase data suggest that the number is more like 1%.
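
Toy arithmetic behind that comparison (the shares are my assumptions, just to show the mechanics):

```python
# If ~4% of respondents answer carelessly or troll (the "lizardman's
# constant") and true prevalence is ~1%, the survey figure lands near 5%.
true_vegans = 0.01   # share suggested by purchase data
lizardman = 0.04     # rough share of absurd/careless survey responses
print(true_vegans + lizardman)  # 0.05: roughly the typical survey estimate
```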

Thank you for the reply. Indeed, I was referring to the studies engaging with butter-margarine substitution. However, I think it needs emphasising just how weak those studies are and, thus, that they cannot be trusted to inform any policy decisions. Additionally, while relevant, I would not want to use Auer and Papies (2020) as a basis for policy decisions either, as it is quite unclear how comparable those estimates are, given that they pool across a wide range of markets potentially quite different from this one (is the degree of product differentiation the same? groceries could be different from pharmaceuticals or durables). Finally, there is also the issue that the papers they synthesise which do use instruments might not use good ones, but I have not had the time to check that.

Skimming the studies in the meta-analysis, I am rather sceptical that anything can be concluded from them. As far as I can tell, each study investigates how the quantity sold varies with the price, but cannot distinguish between demand and supply shocks. If any price changes are instead due to demand shocks, then the estimated coefficient could be severely biased and potentially even have the wrong sign. Therefore, without a valid instrumental variable that only affects prices through supply-side factors, these coefficients will not be informative about consumers' true substitution patterns.
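
A minimal simulation of the simultaneity problem (all numbers invented): OLS on equilibrium price-quantity data recovers neither curve, while a supply-side instrument such as an input-cost shifter recovers the demand slope.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
cost = rng.normal(size=n)          # supply shifter (a valid instrument)
demand_shock = rng.normal(size=n)  # demand shifter

# Linear demand q = -1.0*p + demand_shock and supply q = 1.0*p - cost
# imply the equilibrium below; the true demand slope is -1.0.
price = (demand_shock + cost) / 2.0
quantity = (demand_shock - cost) / 2.0

# OLS of quantity on price: badly biased (here it is roughly zero).
ols = np.cov(price, quantity)[0, 1] / np.var(price, ddof=1)
# IV using the cost shifter: recovers the demand slope.
iv = np.cov(cost, quantity)[0, 1] / np.cov(cost, price)[0, 1]
print(ols, iv)  # ols ~ 0.0, iv ~ -1.0
```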

Although I have no doubt that iron deficiency is a problem, I do not think the evidence linked here is particularly strong backing for it having massive effects. In particular, estimating the cognitive impacts of anything from one study that hits marginal statistical significance with a massive estimated effect size (0.5 standard deviations) seems likely to lead to a wildly inaccurate estimate of the true effect, because such a study possesses all the hallmarks of low statistical power interacting with publication bias.

Furthermore, given that the other studies appear to have small sample sizes (note: I am an economist, not a medic) and the p-values are not far off 0.05, I would be worried about publication bias exaggerating any effects there as well, especially as I suspect studies conducted fifteen to twenty years ago were unlikely to be pre-registered.
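
As an illustration of this winner's-curse mechanism (all numbers are hypothetical, not taken from these studies): if the true effect is small and the standard error large, the estimates that clear p < 0.05 are, on average, severe overestimates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_effect, se = 0.1, 0.2  # assumed true effect (SD units) and standard error

estimates = rng.normal(true_effect, se, size=100_000)
significant = np.abs(estimates / se) > stats.norm.ppf(0.975)

print(significant.mean())             # power is only ~8% here
print(estimates[significant].mean())  # ~0.4: about 4x the true effect
```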

To convince me of an effect size, I would want to see a study with p<<0.01 or a meta-analysis of RCTs that addresses the issue of publication bias.

It's good to see an animal welfare organisation using serious analysis to guide its interventions, although I'm not entirely clear on why the assessment is done on the basis of deaths rather than the integral of welfare over time.

Looking at the numbers, it appears that producing a kilogram of carp involves significantly more time in factory farms than producing a kilogram of salmon (both fish spend around 3 years on a farm, but carp weigh half as much and suffer more premature deaths). Additionally, given the far higher mortality rates, it seems likely that carp welfare is significantly worse than salmon welfare. If both of these factors hold, this intervention only backfires under specific (and potentially resolvable) assumptions about the badness of slaughtering wild fish, the welfare of wild fish, and the elasticity of wild fish populations with respect to farmed salmon demand.
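
A back-of-envelope sketch of the farm-time comparison, with illustrative weights and survival rates of my own choosing (only the rough magnitudes matter):

```python
YEARS_ON_FARM = 3.0                  # both species, per the numbers above
salmon_kg, carp_kg = 4.0, 2.0        # assumed slaughter weights; carp ~ half
salmon_survival, carp_survival = 0.8, 0.6  # assumed shares reaching slaughter

def farm_years_per_kg(weight_kg, survival):
    # Fish started per kg of output = 1 / (weight * survival); as a
    # simplification, every fish started is counted as spending the full
    # period on the farm (premature deaths exit earlier in reality).
    return YEARS_ON_FARM / (weight_kg * survival)

print(farm_years_per_kg(salmon_kg, salmon_survival))  # ~0.9 farm-years per kg
print(farm_years_per_kg(carp_kg, carp_survival))      # ~2.5 farm-years per kg
```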

Isn't that exactly what we'd expect when the marginal utility of consumption is diminishing? An additional pound in a developing country is more likely to be purchasing something important to a person's welfare than one in a developed country, e.g., food or basic shelter versus video games. Furthermore, some of these essentials could themselves be life-extending, which would bias the estimates. Finally, it's possible that life in poverty is bad enough that individuals are willing to forgo less to extend it (I put the least weight on this explanation, but it is plausible).

In each of these cases, the discrepancy in GDP-adjusted values of a statistical life would be completely rational, and the underlying poverty driving the differences would be what needs addressing.
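
As a toy illustration of the marginal-utility point above (log utility and the incomes are my assumptions):

```python
# Under u(c) = ln(c), marginal utility is 1/c, so an extra pound at a low
# income is worth many times more than the same pound at a high income.
low_income, high_income = 1_000, 30_000  # hypothetical annual incomes (GBP)
marginal_utility = lambda c: 1 / c
print(marginal_utility(low_income) / marginal_utility(high_income))  # 30.0
```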

Thanks. I think the issue is the use of the word 'effect', which in my field (economics) usually implies causality rather than mere association, when referring to the cross-sectional analysis, alongside the fact that context was lost when the piece was edited down for the forum.

I'd like to add that related interventions have been successful with policymakers in the developing world, with econometrics training increasing reliance on RCT evidence in policymaking and instruction in Effective Altruism increasing politicians' altruism. Indeed, influencing policymakers may be cost-effective in a wider range of scenarios, as it could be far cheaper and is unlikely to require as much highly visible political messaging.

I think this is interesting research, but I would quibble with your interpretation of the top part of Figure 4 as a causal effect. As far as I can tell, that part is a cross-sectional analysis which is only valid if individuals with greater knowledge of climate organisations are the same in all relevant ways as those with lower levels of knowledge who identify with Friends of the Earth to the same extent. This seems unlikely to be true, and indeed it does not have to hold for the fixed effects analyses that make up the majority of this piece to be unbiased. If I have not misinterpreted something here, I would recommend being much clearer in future about when you are switching between fixed and random effects models, as they estimate very different parameters, with fixed effects usually being much more reliable at retrieving causal effects.
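
A hypothetical simulation of why the two designs can give very different answers: a time-invariant individual trait confounds the pooled/cross-sectional slope, while the within (fixed effects) estimator removes it.

```python
import numpy as np

rng = np.random.default_rng(4)
n_people, n_periods, beta = 5_000, 4, 0.5

confounder = rng.normal(size=n_people)   # time-invariant individual trait
x = confounder[:, None] + rng.normal(size=(n_people, n_periods))
y = (beta * x + 2.0 * confounder[:, None]
     + rng.normal(size=(n_people, n_periods)))

# Pooled OLS (what a cross-sectional analysis approximates): biased upwards.
pooled = np.cov(x.ravel(), y.ravel())[0, 1] / np.var(x.ravel(), ddof=1)

# Within estimator: demean by person, which removes the confounder.
xw = x - x.mean(axis=1, keepdims=True)
yw = y - y.mean(axis=1, keepdims=True)
within = (xw * yw).sum() / (xw ** 2).sum()

print(pooled, within)  # pooled ~ 1.5, within ~ 0.5 (the true beta)
```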
