Keven Bisson, "Longtermism's Techno-Narrative and its Criticisms"
Work in Progress Seminar Series | Winter 2023
"Longtermism's Techno-Narrative and its Criticisms"
Keven Bisson
Friday, March 3, 2023
3:30-5:30 PM
Leacock Building, Room 927
Abstract:
According to Greaves and MacAskill in The Case for Strong Longtermism, we morally ought to prioritize the far future. The main way to do so is by mitigating existential risks: risks that threaten humanity's long-term potential. The main reason longtermists put forward is utilitarian in nature: if we weight future people and present people equally, expected-value analyses show that interventions focusing on the far future (such as pandemic prevention, averting an AI takeover, and asteroid deflection) can do much more good than the most effective short-term intervention (the distribution of anti-malaria bednets).
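To illustrate the structure of this argument (the figures below are placeholders of my own, not Greaves and MacAskill's estimates), the expected value of a long-term intervention can be sketched as

\[
\mathrm{EV}(\text{long-term}) = \Delta p \times N,
\]

where $\Delta p$ is the reduction in the probability of extinction achieved by the intervention and $N$ is the number of future lives at stake. Even a minuscule $\Delta p$, say $10^{-8}$, multiplied by a vast $N$, say $10^{14}$ lives, yields an expected $10^{6}$ lives; on an equal weighting of present and future people, this dwarfs what the most effective short-term interventions can achieve.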
I raise a problem for the metric used in their expected-value analyses: it does not account for the moral distinction between saving existing lives and increasing the number of lives. These two types of consequence are treated as equal and conflated in a single metric. I argue, however, that they are incommensurable and should be kept separate. My thesis is that, once this distinction is taken into account, we ought not to prioritize the far future over the present.
Treating the two types of consequence as on a par leads to the problematic view that we ought to be indifferent between saving an existing person's life and ensuring that an additional person is born. I argue that we ought to favour saving the person, but that no exchange ratio between the two can restore a single metric. To avoid these problems, longtermists ought to compare short-term and long-term interventions on two metrics, as sketched below.
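Schematically (the notation is mine, not the authors'), this proposal replaces a single scalar value with an ordered pair:

\[
V(\text{intervention}) = \langle S, A \rangle,
\]

where $S$ counts existing lives saved and $A$ counts additional lives brought into existence. Because $S$ and $A$ are incommensurable, no ratio $r$ licenses collapsing the pair into the scalar $S + rA$; interventions must instead be compared component by component.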
On the one hand, reducing existential risk is expected to save the lives of the people who would have died in the catastrophe; moreover, by averting human extinction, it is expected to bring into existence trillions of additional lives that would not have existed without the risk-mitigation intervention. On the other hand, distributing bednets saves lives efficiently; moreover, even if counting the descendants of a person saved as part of the effect of saving that person is ordinarily considered bad practice, it is acceptable within the total utilitarian framework of longtermism.
Once the two morally incommensurable metrics are separated, the comparison between short-term and long-term interventions becomes less clear-cut. Interventions focusing on the far future are moderately less effective at saving lives than bednet distribution, but moderately more effective at increasing the number of people. On this analysis, we ought not to prioritize the long term over the short term, but should treat them separately and as roughly equal in importance.