About

Hi, I’m Greg

I work on Global Catastrophic Biological Risks (GCBRs): very large biological disasters that could cripple or extinguish human civilisation. The worry is that these risks may be increasing in step with the march of biotechnological progress. Before this I was training in Public Health, and before that I worked as a junior doctor. My CV, such as it is, is here.

I’m heavily involved in Effective Altruism: the idea of trying to find out what does the most good (and then trying to do it). This has led me to change my mind about a few things:

Making the far future go well is the most important thing: See Bostrom’s “Astronomical Waste”, Sandberg’s “Desperately seeking eternity”, and Beckstead’s “On the Overwhelming Importance of Shaping the Far Future”. Parfit summarises it well:

What now matters most is how we respond to various risks to the survival of humanity. We are creating some of these risks, and we are discovering how we could respond to these and other risks. If we reduce these risks, and humanity survives the next few centuries, our descendants or successors could end these risks by spreading through this galaxy.

Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea.

If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would have given us all, including those who suffered most, reasons to be glad that the Universe exists.

Biological risks could be really important for how the future goes: The reasons are perhaps best articulated by Zabel here. My impression is that biological risks aren’t the most important risks to work on in general (I think that is probably AI), but they are probably the best thing for me to work on given my background and comparative advantage.

Medicine probably isn’t the best thing to focus on: Medical careers aren’t a great match for these topics. They also aren’t a great match for more commonsense views on what matters most (e.g. if the well-being of current people is what matters most, ‘earning to give’ as a banker likely does more good than ‘saving lives’ as a doctor). I wrote the fairly downbeat career guide on medicine for 80,000 Hours, and I have given a few talks on the same topic.

(As ‘the doctor who doesn’t think medicine is that good’, I end up over-exposed as a neat framing device/cautionary tale: New Scientist, ‘Doing Good Better‘, Daily Telegraph, Washington Post.)

One of the ways I take refuge from my day job is maintaining a long and ignominious track record of arguing on the internet, particularly dabbling in philosophy and statistics. I write a lot on the Effective Altruism Forum, LessWrong, and very occasionally my writing ends up elsewhere. This blog collates my writing across these venues, and some other stuff besides.
