The person-affecting value of existential risk reduction

Introduction

The standard motivation for the far future cause area in general, and existential risk reduction in particular, is to point to the vast future that is possible provided we do not go extinct (see Astronomical Waste). One crucial assumption is a ‘total’ or ‘no-difference’ view of population ethics: in sketch, it is just as good to bring a new person into existence with a happy life of fifty years as it is to add fifty years of happy life to someone who already exists. Thus the 10^lots of potential people give profound moral weight to the cause of x-risk reduction.

Population ethics is infamously recondite, and so disagreement with this assumption is commonplace; many find at least some form of person-affecting or asymmetrical view plausible: that the value of ‘making happy people’ is either zero, or at least much lower than the value of making people happy. Such a view would remove a lot of the upside of x-risk reduction, as most of its value (by the lights of the total view) lies in ensuring a great host of happy potential people exist.

Yet even if we discount the (forgive me) person-effecting benefit, extinction would still entail vast person-affecting harm. There are 7.6 billion people alive today, and 7.6 billion premature deaths would be deemed a considerable harm by most. Even fairly small (albeit non-pascalian) reductions in the likelihood of extinction could prove highly cost-effective.

To my knowledge, no one has ‘crunched the numbers’ on the expected value of x-risk reduction by the lights of person-affecting views. So I’ve thrown together a Guesstimate model as a first-pass estimate.

An estimate

The (forward) model goes like this:

  1. There are currently 7.6 billion people alive on earth. The worldwide mean age is 38, and worldwide life expectancy is 70.5.
  2. Thus, very naively, if ‘everyone died tomorrow’, the average number of life years lost per person would be 32.5, and the total loss 247 billion life years.
  3. Assume the extinction risk is 1% over this century, uniform by year (i.e. the risk this year is 0.0001, ditto the next one, and so on.)
  4. Also assume the tractability of x-risk reduction is something like this (borrowing from Millett and Snyder-Beattie): ‘there is a project X that is expected to cost 1 billion dollars each year, and would reduce the risk (proportionately) by 1% (i.e. if we spent a billion each year this century, x-risk over this century declines from 1% to 0.99%)’.
  5. This gives a risk reduction per year of around 1.3 × 10^-6, and so an expected value of around 330,000 years of life saved.

Given all these things, the model spits out a ‘cost per life year’ of $1500-$26000 (mean $9200).
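For concreteness, here is a minimal Monte Carlo sketch of the forward model in Python. The lognormal spreads and their widths are my own illustrative assumptions rather than the distributions in the underlying Guesstimate, so the output should land in the same ballpark as the figures above rather than reproduce them exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo samples

# Life years at stake: 7.6 billion people, each losing ~32.5 years on average
# (life expectancy of 70.5 minus a mean age of 38). Treated here as a point estimate.
life_years_at_stake = 7.6e9 * 32.5  # ~2.47e11

# Extinction risk this year: ~1% per century, uniform by year, so ~1e-4.
# The lognormal spread is an assumption for illustration.
yearly_risk = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=N)

# Proportional reduction in that risk bought by spending $1B this year: ~1%.
proportional_reduction = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=N)

cost = 1e9  # dollars spent this year

absolute_reduction = yearly_risk * proportional_reduction     # ~1e-6 per year
life_years_saved = absolute_reduction * life_years_at_stake   # expected years saved
cost_per_life_year = cost / life_years_saved

print(f"Mean yearly risk reduction: {absolute_reduction.mean():.1e}")
print(f"Mean expected life years saved: {life_years_saved.mean():,.0f}")
print(f"Cost per life year: mean ${cost_per_life_year.mean():,.0f}, "
      f"90% interval ${np.percentile(cost_per_life_year, 5):,.0f}"
      f"-${np.percentile(cost_per_life_year, 95):,.0f}")
```

Note that the headline ‘cost per life year’ is the mean of a ratio, not the ratio of the means: with right-skewed inputs these can differ substantially, which helps explain why the mean cost per life year sits well above $1B divided by 330,000.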

Caveats and elaborations

The limitations of this model are nigh-innumerable, but I list a few of the most important below, in approximately ascending order of importance.

Zeroth: The model has a wide range of uncertainty, and reasonable sensitivity to distributional assumptions: the mean estimate and range can shift by a factor of 2 or so depending on whether the distributions used are Beta or log-normal, and on how their variances are set.
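As a quick illustration of that sensitivity, swapping a Beta distribution with roughly the same mean for the lognormal on the yearly risk in the sketch above can move the headline mean by around a factor of two. The parameters below are again illustrative assumptions, not those of the original model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
cost = 1e9
life_years_at_stake = 7.6e9 * 32.5

# Same ~1% proportional risk reduction as in the earlier sketch.
proportional_reduction = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=N)

# Yearly extinction risk as a Beta distribution with mean ~1e-4
# (alpha = 2 chosen arbitrarily), instead of the lognormal used before.
alpha = 2.0
beta = alpha / 1e-4 - alpha  # gives mean alpha / (alpha + beta) = 1e-4
yearly_risk = rng.beta(alpha, beta, size=N)

cost_per_life_year = cost / (yearly_risk * proportional_reduction * life_years_at_stake)
print(f"Beta-risk variant: mean cost per life year ${cost_per_life_year.mean():,.0f}")
```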

First: Adjusting to a ‘cost per DALY/QALY’ would make the figure somewhat less favourable, although not dramatically so (a factor of 2 would imply everyone who continues to live does so with a disability weight of 0.5, in the same ballpark as the weights used for major depression or blindness).

Second: Trends may have a large impact, although their importance is modulated by which person-affecting view is assumed. I deliberately set up the estimate as a ‘one-shot’, single-year case (i.e. the figure applies to a ‘spend $1B to reduce extinction risk in 2018 from 0.0001 to 0.000099’ scenario).

By the lights of a person-affecting view which considers only people who exist now, making the same investment ten years from now (i.e. spending $1B to reduce extinction risk in 2028 from 0.0001 to 0.000099) is less attractive, as some of these people will have died, and the new people who replace them have little moral relevance. These views thus imply a fairly short time horizon, and are particularly sensitive to x-risk in the near future. Given the ‘1% per century’ is probably not uniform by year, and plausibly lower now but higher later, this implies a further penalty to cost-effectiveness.

Other person-affecting views consider people who will necessarily exist (however that is cashed out), rather than only those who happen to exist now (planting a bomb with a 1000-year timer still accrues person-affecting harm). In an ‘extinction in 100 years’ scenario, such a view would still count the harm to everyone alive then who dies, although it would still discount the foregone benefit to the people who ‘could have been’ born subsequently.

Thus trends in the factual basis become more salient. One example is the ongoing demographic transition: a consequently older population yields smaller values for the life years saved by protection from extinction further in the future. This would probably make the expected cost-effectiveness somewhat (but not dramatically) worse.

A lot turns on the estimate of marginal ‘x-risk reduction’. I think the numbers offered for the base rate, and for how much it can be reduced for how much money, lean on the conservative side of the consensus of far-future EAs. Believing the (implied) scale or tractability to be an order of magnitude ‘worse’ would worsen the cost-effectiveness estimate commensurately. Yet in such circumstances the bulk of the disagreement would be explained by differing empirical beliefs rather than a different take on the population ethics.

Finally, this only accounts for something like the (welfare) ‘face value’ of existential risk reduction. There would be some further benefits by the lights of the person-affecting view itself, or of ethical views to which those holding a person-affecting view are likely sympathetic: extinction might impose other harms beyond years of life lost; there could be person-affecting benefits if some of those who survive go on to enjoy extremely long and happy lives; and there could be non-welfare goods on an objective list which rely on non-extinction (among others). On the other side, those with non-deprivationist accounts of the badness of death may still discount the proposed benefits.

Conclusion

Notwithstanding these challenges, I think the model, and its result that the ‘face value’ cost-effectiveness of x-risk reduction is still pretty good, are instructive.

First, there is a common pattern of thought along the lines of, “X-risk reduction only matters if the total view is true; if one holds a different view, one should basically discount it”. Although rough, this cost-effectiveness guesstimate suggests that thought is mistaken. Although it seems unlikely that x-risk reduction is the best buy by the lights of a person-affecting view (we should be suspicious if it were), and ~$10000 per life year compares unfavourably to the best global health interventions, it is still a good buy: it compares favourably to the marginal cost-effectiveness of rich-country healthcare spending, for example.

Second, although it seems unlikely that x-risk reduction would be the best buy by the lights of a person-affecting view, this would not be wildly outlandish. Those with a person-affecting view who think x-risk is particularly likely, or that the cause area has easier wins available than the model implies, might find the best opportunities to make a difference lie here. This may therefore give those with such views reason to investigate the factual matters in greater depth, rather than ruling the cause out on their moral commitments alone.

Finally, most of us should be morally uncertain about matters as recondite as population ethics. Unfortunately, how to address moral uncertainty is similarly recondite. But if x-risk reduction is ‘good but not the best’ rather than ‘worthless’ by the lights of person-affecting views, it likely looks more valuable whatever the size of the ‘person-affecting party’ in one’s moral parliament.

Continue reading “The person-affecting value of existential risk reduction”

How fragile was history?

Elsewhere (and better): 1, 2.

If one could go back in time and make a small difference in the past, would one expect it to effect dramatic changes to the future? Questions like these are fertile soil for fiction writers (generally writing under speculative or alternative history), but receive less attention in the historical academy, which tends to focus on explaining what in fact happened rather than what could have been. Yet general questions of historical fragility (e.g. are events in human history ‘generally’ fragile? In what areas is history particularly fragile? Are things getting more or less fragile over time?) are of particular interest to those hoping to alter the course of the long-run future by the differences they make today. Continue reading “How fragile was history?”

In defence of epistemic modesty

This piece defends a strong form of epistemic modesty: that, in most cases, one should pay scarcely any attention to what one finds the most persuasive view on an issue, hewing instead to an idealised consensus of experts. I start by better pinning down exactly what is meant by ‘epistemic modesty’, go on to offer a variety of reasons that motivate it, and reply to some common objections. Along the way, I show common traps that people being inappropriately modest fall into. I conclude that modesty is a superior epistemic strategy, and ought to be more widely used, particularly in the EA/rationalist communities.

[gdoc]

Provocation

I argue for this:

In virtually all cases, the credence you hold for any given belief should be dominated by the balance of credences held by your epistemic peers and superiors. One’s own convictions should weigh no more heavily in the balance than those of one other epistemic peer.

Continue reading “In defence of epistemic modesty”

In defence of democracy

Last week the UK held a referendum on whether it should remain a member of the European Union. ‘Leave’ won by a narrow (52%) majority. The aftermath so far involves the resignation of the Prime Minister and consequent leadership election; a leadership challenge within the opposition; an increasingly restive Scotland looking for independence; and large slides of UK stocks and currency.

Contra the balance of the chattering classes (and the great majority of those in my social media bubble), I think the results of the referendum should be respected, and therefore that the UK should leave the EU.[1] Continue reading “In defence of democracy”

Are History’s “Greatest Philosophers” All That Great?

Introduction

In the canon of western philosophy, those generally regarded as the ‘greatest’ philosophers tend to have lived far in the past. Consider this example from an informal poll:

  1. Plato (428-348 BCE)
  2. Aristotle (384-322 BCE)
  3. Kant (1724-1804)
  4. Hume (1711-1776)
  5. Descartes (1596-1650)
  6. Socrates (469-399 BCE)
  7. Wittgenstein (1889-1951)
  8. Locke (1632-1704)
  9. Frege (1848-1925)
  10. Aquinas (1225-1274)

(source: LeiterReports)

I take this as fairly representative of consensus opinion—one might argue about some figures versus those left out, or the precise ordering, but most would think (e.g.) Plato and Aristotle should be there, and near the top. All are dead, and only two were alive during the 20th century.

But now consider this graph of human population over time (US Census Bureau, via Wikipedia):

[Figure: world population curve over time]

The world population at 500 BCE is estimated to have been 100 million; in the year 2000, it was 6.1 billion, over sixty times greater. Thus if we randomly selected people from among those born since the ‘start’ of western philosophy, they would generally have been born close to the present day. Yet the ‘greatest philosophers’ were generally born much further in the past than one would expect by chance. Continue reading “Are History’s “Greatest Philosophers” All That Great?”

Free will, without God?

Part 8 in series: 20 Atheist answers to questions they supposedly can’t

11. How is free will possible in a material universe?

Short answer: Depends what you mean by ‘free will’…

Long answer: What exactly do we need in order to ‘count as’ having free will (and does our situation satisfy it)? In particular, if we live in a world that is apparently determined by laws of nature, then surely our brains (and perhaps therefore our minds) are included in this inviolable causal chain. So, if our thoughts are determined, what then for our intuition that we have free will? Continue reading “Free will, without God?”