Longtermism: The New Moral Mathematics

In his new book, philosopher William MacAskill implies that humanity’s long-term survival matters more than preventing short-term suffering and death. His arguments are shaky.

Kieran Setiya in the Boston Review: What we do now affects future people in dramatic ways: whether they will exist at all and in what numbers; what values they embrace; what sort of planet they inherit; what sorts of lives they lead. It’s as if we’re trapped on a tiny island while our actions determine the habitability of a vast continent and the life prospects of the many who may, or may not, inhabit it. What an awful responsibility.

This is the perspective of the “longtermist,” for whom the history of human life so far stands to the future of humanity as a trip to the chemist’s stands to a mission to Mars.

Oxford philosophers William MacAskill and Toby Ord, both affiliated with the university’s Future of Humanity Institute, coined the word “longtermism” five years ago. Their outlook draws on utilitarian thinking about morality.

According to utilitarianism, a moral theory developed by Jeremy Bentham and John Stuart Mill in the nineteenth century, we are morally required to maximize expected aggregate well-being: adding points for every moment of happiness, subtracting points for suffering, and discounting for probability. When you do this, you find that even tiny changes in the probability of extinction swamp the moral mathematics. If you could save a million lives today or shave 0.0001 percent off the probability of premature human extinction (a one in a million chance of saving at least 8 trillion lives), you should do the latter, allowing a million people to die.
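
To spell out the expected-value arithmetic behind that comparison (a rough back-of-the-envelope sketch, using only the figures the example itself stipulates): a 0.0001 percent reduction in extinction risk is a probability change of one in a million, and the example assumes at least 8 trillion future lives are at stake, so

\[
\underbrace{\tfrac{0.0001}{100}}_{=\,10^{-6}}
\times
\underbrace{8 \times 10^{12}}_{\text{lives at stake}}
= 8 \times 10^{6}\ \text{expected lives}
\;>\;
10^{6}\ \text{lives saved with certainty}.
\]

On this ledger, eight million expected lives outweigh one million certain ones, which is why the tiny probability shift wins.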

Now, as many have noted since its inception, utilitarianism is a radically counterintuitive moral view. It tells us that we cannot give more weight to our own interests, or to the interests of those we love, than to the interests of perfect strangers. We must sacrifice everything for the greater good. Worse, it tells us that we should do so by any effective means: if we could shave 0.0001 percent off the probability of human extinction by killing a million people, we should, so long as there were no other adverse effects. More here.
