Highlighting this post from LessWrong. I’m surprised this is even controversial.
The mistake MacAskill, Ord, and others seem to be making is confusing “what is right” with “what is the best course of action.”
Just as a $10 payout with a 1% chance of success (an expected $0.10) is a worse bet than a guaranteed $1, you have to weight outcomes by the probability that your actions actually bring them about. For the future, that probability is hard to estimate, and the farther out you go, the harder estimating it becomes.
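To make the arithmetic concrete, here is a minimal sketch of how probability weighting punishes plans that depend on long chains of uncertain steps. The `expected_value` helper, the independence assumption, and all the numbers are my own illustrations, not anything from the post.

```python
# Minimal sketch (illustrative assumptions throughout): a payoff that
# depends on a chain of steps, each succeeding independently with the
# same per-step probability.

def expected_value(payout: float, p_per_step: float, steps: int) -> float:
    """Payout weighted by the chance that every step in the chain holds."""
    return payout * (p_per_step ** steps)

# A $10 payout behind a single 1%-likely step loses to a sure $1:
print(expected_value(10.0, 0.01, 1))  # 0.1
print(expected_value(1.0, 1.00, 1))   # 1.0

# Even near-certain steps compound away over long horizons:
for steps in (1, 10, 100, 1000):
    print(steps, expected_value(1_000_000.0, 0.99, steps))
# 1 -> 990000.0, 10 -> ~904382, 100 -> ~366032, 1000 -> ~43
```

Even granting 99% confidence per step, a forecast resting on a thousand such steps retains almost none of its nominal value, which is the longtermist situation in miniature.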
The same confusion appears in the Trolley Problem: it assumes we have perfect knowledge of the system, or at least better knowledge than we actually do. In practice, moral truths like “people have the same value everywhere at every time” are inputs to the decision procedure, not decisive in themselves.
In the Trolley Problem, we have to assume we know better than the people operating the trolley how to avoid an accident, that the “victims” are helpless, etc. In extreme longtermist thinking, we are assuming many, many things we can’t possibly know: the course of human history going forward, the desires of future people, the nature of life elsewhere in the galaxy, etc., etc., etc. … where the etceteras go on for billions of pages.
Note, this doesn’t mean we shouldn’t think about the long term, and we certainly should think about the mid-term (the next century). It just means these extreme long-term forecasts are essentially worthless (except maybe as sci-fi plots).