As a climate scientist/economist, I think about the future every day. How much warming are we headed for in 2100? In 2200? How much will this warming damage the economy? And what can we do about it?
In both asking these questions and dedicating my immediate future to researching their answers, I’m placing implicit value on the future; I value future human wellbeing in addition to present-day wellbeing.
In What We Owe the Future, philosopher William MacAskill makes a powerful case for why you, too, should care deeply about the future. The twist is that by “future,” Mr. MacAskill doesn’t just mean the next 10–20 years — that would be far too short a timeline in his framework — we’re talking the next thousands, if not millions, of years. He calls his framework longtermism.
The key assumption of longtermism is the following: we, as a species, are very young. To use Mr. MacAskill’s words, we are “imprudent teenagers” in terms of the expected lifetime of our species, which, assuming we don’t wipe ourselves out (more on this later), can be conservatively put at about 1 million years.
To put this in perspective, Homo sapiens have been around for about 300,000 years, so we’ve got roughly 700,000 years to go. If we maintain current population levels, the people who will exist in the future outnumber those alive today by roughly ten thousand to one. And that’s not accounting for the possibility that we outlive the usual 1-million-year species lifespan thanks to technological advancement or spreading out among the stars. If that happens, with an expected lifespan of perhaps many millions of years, present-day people make up an even smaller fraction of everyone who will ever live.
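For the numerically inclined, here’s a rough back-of-envelope sketch of where that ten-thousand-to-one ratio comes from. The specific inputs (a steady population of about 8 billion and a roughly 70-year turnover time) are my own illustrative assumptions rather than figures from the book:

```python
# Back-of-envelope: how many future people are there for every person alive today?
# Illustrative assumptions (not MacAskill's exact inputs):
#   - population holds steady at ~8 billion
#   - the population "turns over" roughly once every 70 years
#   - ~700,000 years remain of a conservative 1-million-year species lifespan

current_population = 8e9        # people alive today
years_remaining = 700_000       # conservative remaining lifespan of the species
years_per_generation = 70       # rough population turnover time

future_generations = years_remaining / years_per_generation   # ~10,000
future_people = current_population * future_generations       # ~8e13

print(f"Future generations remaining: {future_generations:,.0f}")
print(f"Future people (very rough):   {future_people:,.0f}")
print(f"Future-to-present ratio:      {future_people / current_population:,.0f} to 1")
```

The exact inputs don’t matter much; any plausible choice leaves future people outnumbering us by orders of magnitude.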
This simple population-level analysis has serious ethical implications. As Mr. MacAskill lays out, these future people — the grandchildren of your grandchildren of your grandchildren’s grandchildren — are horribly disenfranchised. Their wellbeing obviously matters, and yet they have no say in present-day decisions, despite the present inexorably shaping their (upcoming) lives.
Take climate change. If our actions today lead to a planet that is 5 °C warmer (which is unlikely, but could happen if certain countries backtrack on their emission-reduction commitments or we hit a few climate tipping points), then people in the future would inherit a world where their wellbeing is practically predetermined to be lower than that of generations past. And they would have only the present to blame.
In view of this injustice, Mr. MacAskill argues that much more attention and money need to be allocated to quantifying long-term, potentially species-threatening risks and to formulating corresponding mitigation strategies. Among his chief concerns are existential risks, such as nuclear conflict; value lock-in, or the notion that a governing body could freeze moral progress in a less-than-desirable state; climate change (duh); stagnation, or a protracted halt of economic growth; and the variety of risks associated with artificial general intelligence.
A few of these risks should be self-evident. Nuclear war has had the capacity to end humankind as we know it since the United States bombed Hiroshima and Nagasaki (unless you hide under a school desk, in which case, you’ll be safe). Russia’s invasion of Ukraine and its resumption of nuclear brinkmanship have only reminded us of this. Major climate action is already under way in the United States and much of the world in response to climate risk. Prolonged stagnation would result in a modern-day Dark Age. Suffice it to say, we should do all we can to prevent these risks from materializing.
The risks posed by artificial general intelligence (AGI) and its potential for value lock-in, however, surprised me. Admittedly, I’m a novice when it comes to AI; before What We Owe the Future, AI was just something that gave me scarily good ad recommendations while I played chess online. But now I see why there is so much discussion around the topic, especially AGI. On the one hand, AGI could spur unprecedented economic growth; on the other, it could lock in our values and, erm, kind of take over the world. Risky business indeed.
So what can we do about all of this? Mr. MacAskill offers four suggestions for what individuals can do:
- Donate to effective charities and non-profits;
- Engage in political activism;
- Spread good ideas;
- Have children.
On point one, Mr. MacAskill lays out a variety of non-profits that are incredibly cost-effective vehicles for doing good. I found this quick calculation compelling: Suppose you are a median-income American. If you are a vegetarian, you prevent about 0.8 tons of CO2 per year from ever being emitted into the atmosphere. Good work. Now, if you were to donate 10% of your income (about $3,000) to the Clean Air Task Force, you can reasonably expect to abate about 3,000 tons of CO2 per year. Perhaps one really can have their steak and eat it too. (More information on “effective” charities can be found at givingwhatwecan.org.)
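If you like seeing the arithmetic spelled out, here’s a quick sketch of that comparison. The implied cost-effectiveness of roughly $1 per ton of CO2 for the Clean Air Task Force is simply what the figures above work out to, not an official estimate:

```python
# Comparing two personal climate actions, using the figures quoted above.
# Assumption: a $3,000 donation abating ~3,000 tons implies roughly $1/ton,
# which is a back-of-envelope figure, not a guaranteed one.

vegetarian_abatement = 0.8    # tons of CO2 avoided per year by going vegetarian
donation = 3_000              # ~10% of a median US income, in dollars
cost_per_ton = 1.0            # implied dollars per ton of CO2 abated

donation_abatement = donation / cost_per_ton   # ~3,000 tons per year

print(f"Vegetarian diet: {vegetarian_abatement:.1f} tons CO2/year")
print(f"$3,000 donation: {donation_abatement:,.0f} tons CO2/year")
print(f"Donation is ~{donation_abatement / vegetarian_abatement:,.0f}x the abatement")
```

On these (admittedly rough) numbers, the donation delivers several thousand times more abatement than the diet change.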
Points two and three are self-explanatory. Go vote. Don’t be a conspiracy theorist. Repeat.
The final point I found shocking. Doesn’t having kids increase energy, water, and food demand, and thus drive up carbon emissions? Thus causing more climate change? Wouldn’t this be bad for the future?
To understand why having kids is a moral good for the future, two things should be considered. The first is that having kids lowers risk along dimensions unrelated to climate change; for example, having children directly lowers the risk of stagnation, since a stagnating population can lead to protracted, slow economic growth.
But the second reason why having children is a moral good, according to Mr. MacAskill, requires cutting through a common ethical fallacy: the intuition of neutrality. In slogan form, the intuition of neutrality goes like this:
We’re morally positive about making people happy, but morally neutral about making happy people.
- Jan Narveson
To make this point compelling beyond the arguments of population ethics (read the book for more), you’d have to show, beyond reasonable doubt, that your potential child would be happy. So Mr. MacAskill commissioned a team of psychologists to carry out a survey, and found that the vast majority of people, considering their lives in sum, would prefer to have lived than never to have lived at all. This is especially true if you live in a rich country, as self-reported life satisfaction climbs steadily with GDP per capita.
Take-home point: people prefer living to not living, and are generally satisfied with their lives, especially if they have a sufficient level of wealth. Thus, if you have the option to bring another human into existence, you’re contributing to future wellbeing in just about as direct a way as you can. The benefits outweigh the costs.
This is especially true if you think about your children having children, and so on. Having a child is, essentially, starting a chain reaction of happiness generation. And that doesn’t even count their knock-on impact on others, which, Mr. MacAskill argues, is undeniably going to be a net positive.
What We Owe the Future is a valuable contribution to the ethics literature, and it is written to be broadly accessible. I personally found it incredibly enlightening, as it details the risks that face us — as well as future us — fluently and skillfully. There are even two chapters dedicated to hardcore population ethics buried in here for the philosophically inclined (hand up). I’d highly recommend this book to anyone concerned with how we can make the world, as well as the future world, a better place.
Rating: Equivalent to the wellbeing per capita of the world where we actually limit warming to 1.5 °C. (Can we please?)
Pokémon card for this book: I was quite excited to read this book; I had heard Mr. MacAskill on numerous podcasts and am a fan of his thinking. Thus, it earned the holographic card from my last pack: a sick-ass Banette card with its gen 3 Elite Four trainer, Phoebe, in the background.