I want to focus on the following because it seems to be a problematic misunderstanding:
"1. Temporal position should not impact ethics (hence longtermism)"
This genuinely does seem to be a common view in EA, namely, that when someone exists doesn't (in itself) matter, and that, given impartiality with respect to time, longtermism follows. Longtermism is the view that we should be particularly concerned with ensuring that long-run outcomes go well.
The reason this understanding is problematic is that probably the two strongest objections to longtermism (in the sense that, if these objections hold, they rob longtermism of its practical force) have nothing to do with temporal position in itself. I won't say whether these objections are, all things considered, plausible; I'll merely set out what they are.
First, there is the epistemic objection to longtermism (sometimes called the 'tractability', 'washing-out', or 'cluelessness' objection): in short, that we can't be confident enough about the impact our actions will have on the long-run future to make it the practical priority. See this for recent discussion and references: https://forum.effectivealtruism.org/posts/z2DkdXgPitqf98AvY/formalising-the-washing-out-hypothesis#comments. Note that this has nothing to do with people mattering differently because of their position in time.
Second, there is the ethical objection that appeals to person-affecting views in population ethics and has the implication that creating (happy) lives is neutral.* What's the justification for this implication? One justification could be 'presentism', the view that only presently existing people matter. This is a justification based on temporal position per se, but it is (I think) highly implausible.
An alternative justification, which does not rely on temporal position in itself, is 'necessitarianism', the view that the only people who matter are those who exist necessarily (i.e. in all outcomes under consideration). The motivation for this is that (1) outcomes can only be better or worse if they are better or worse for someone (the 'person-affecting restriction') and (2) existence is not comparable to non-existence for someone ('non-comparativism'). In short, it isn't better to create lives, because it's not better for the people who get created. (I am quite sympathetic to this view and think too many EAs dismiss it too quickly, often without understanding it.)
The further thought is that our actions change which specific individuals get created (e.g. consider whether any particular individual alive today would exist if Napoleon had won at Waterloo). The result is that our actions, which aim to benefit (far) future people, cause different people to exist. This isn't better either for the people who would have existed or for the people who will actually exist. This is known as the 'non-identity problem'. Necessitarians might explain that, although we really want to help (far) future people, we simply can't: there is nothing, in practice, we can do to make their lives better. (Rough analogy: there is nothing, in practice, we can do to make trees' lives go better, since only sentient entities can have well-being.)
Note, crucially, that this has nothing to do with temporal position in itself either. It's the combination of only necessary lives mattering and our actions changing which people will exist. Temporal position is ethically relevant (i.e. instrumentally important), but not ethically significant (i.e. it doesn't matter in itself).
*You can have symmetric person-affecting views (creating lives is neutral). You can also have asymmetric person-affecting views (creating happy lives is neutral, but creating unhappy lives is bad). Asymmetric PAVs may or may not imply concern for the long term, depending on what the future looks like and whether they hold that adding happy lives can compensate for adding unhappy lives. I don't want to get into this here, as this is already long enough.
I agree with this answer. Also, lots of people do think that temporal position (or something similar, like already having been born) should affect ethics.
But yes OP, accepting time neutrality and being completely indifferent about creating happy lives does seem to me to imply the counterintuitive conclusion you state. You might be interested in this excellent emotive piece or section 4.2.1 of this philosophy thesis. They both argue that creating happy lives is a good thing.