00:00:10.000  Great. So I'm excited we finally get to do this.
 
      
      
      
      
      00:00:23.160  >> I know. I should have like practiced the last name,
 
      
      
      
      00:00:32.760  agree on some stuff, disagree on some other stuff.
 
      
      
      00:00:38.000  >> Yeah, I think mostly it's because you haven't talked about long-termism.
 
      
      00:00:44.600  So you guys have raised some worries about this thing called long-termism.
 
      00:00:51.040  You wrote some things of your own and you talked about it on the podcast.
 
      00:00:55.040  Luca and I happen to be interested in these questions as well.
 
      00:00:58.560  I think we have some interesting differences to dig into.
 
      
      
      00:01:07.920  >> You're here to prove us wrong in a couple sentences in other words.
 
      
      
      
      00:01:15.760  >> We should say you two are the charming hosts of the Hear This Idea podcast,
 
      00:01:19.920  which is a far more successful podcast than ours.
 
      00:01:22.880  So you guys are really lowering your standards, but we appreciate you guys coming on.
 
      
      
      
      00:01:33.920  >> We're hoping we don't damage your reputation by having you on.
 
      
      
      
      00:01:44.600  >> So I guess we thought we would begin by all just maybe going around and
 
      00:01:48.600  talking about what the hell we're talking about because I think it's going to be
 
      
      00:01:54.680  First by just discussing what we all think about when we hear the term long
 
      00:01:58.240  termism, I think it means different things to each one of us.
 
      00:02:01.120  And so perhaps that's a good place to start and then hopefully the conversation
 
      00:02:04.240  will just explode into unstructured anarchy after that.
 
      
      
      
      00:02:17.960  You see it come up in some corners of finance and business and
 
      
      00:02:26.080  But the kind of long termism we're interested in is more specific than that.
 
      00:02:31.160  And it's a kind of, so I would describe it as a family of moral views
 
      00:02:37.520  that have emerged in their kind of more precise forms in the last few years.
 
      
      00:02:49.560  Well, I think the comparison is to other exciting, interesting moral isms.
 
      
      00:02:59.000  Where it's just a bit silly to try to pin them down to anything specific.
 
      00:03:04.160  Because those isms are just broad enough to encompass lots of different views.
 
      00:03:09.600  And sometimes those views actually just internally disagree.
 
      00:03:13.520  Because I think any interesting moral view isn't going to be the kind of thing
 
      00:03:18.680  you can pin down in a formal if-and-only-if definition.
 
      
      00:03:25.600  there are specific jargon filled definitions you can give of different kinds of long termism.
 
      
      00:03:35.400  You can be a very specific kind of socialist or feminist.
 
      00:03:39.040  Especially if you're writing for lots of papers.
 
      00:03:41.280  It's really important that you kind of get clear about what you're talking about.
 
      
      00:03:46.200  But I think in general, long termism is something like it's a view that is especially concerned
 
      
      00:03:56.920  And by very long run, people tend to mean something like centuries or millennia.
 
      00:04:02.080  So we're not just talking like the next two or three political cycles.
 
      00:04:08.000  And attached to that, there's also a kind of empirical claim that society doesn't value
 
      
      00:04:18.080  And there are things we can do now to improve how the long run future goes.
 
      00:04:22.360  And then the kind of natural conclusion there is that we should do those things.
 
      
      00:04:27.560  So that's the kind of the most general definition I would like want to give.
 
      00:04:33.040  And that's kind of what I have in mind when I'm going to be talking about long termism.
 
      00:04:37.760  If I can add to that, from where I'm coming from, I think what Finn hit on there

      00:04:42.360  is really important: that there's not like a clear definition of what long-termism is.
 
      00:04:46.640  And within like isms, I guess it's still like something that's very new and something
 
      00:04:51.320  that within EA circles, you know, we're still kind of trying to figure out what exactly
 
      
      00:04:56.080  But the way that I kind of like to think about it is out of this kind of EA perspective is
 
      00:05:00.840  we're interested in doing the most good possible.
 
      00:05:03.240  And one really good thing that we've kind of found out in this mission of trying to
 
      00:05:07.000  do this is that it's worth really paying a lot of attention to the kind of stakeholders
 
      
      00:05:15.040  And the first kind of these stakeholders we found are like the poorest people in developing
 
      00:05:19.840  countries because they just don't have a voice, right, when it comes to this global system.
 
      00:05:23.720  So helping them is like a really effective thing to do because they're already so neglected

      00:05:27.680  and already at such a bad point that it's a really effective way to do good with your money.
 
      00:05:34.320  The next kind of stakeholder group we found is animals, who don't have a voice for obvious
 
      00:05:39.080  reasons and are therefore suffering way more than they should.
 
      00:05:43.200  And now this kind of new idea that I think long-termism brings in is that future generations
 
      00:05:47.360  also don't have a voice and they're really being neglected at the moment.
 
      00:05:50.360  And you can see this through things like climate change, I think most most obviously, but even
 
      00:05:54.320  outside of this, there's loads of things like extinction risks or AI safety or other things
 
      00:05:59.000  I think we're going to get into that long termism really points to and says this is actually
 
      00:06:04.440  really bad that we're ignoring this and we're living in a really impatient society that
 
      
      
      00:06:12.640  And it can actually be the case that this is one of the most effective ways to help and
 
      00:06:16.680  to do good in the world, even if it's less easy to see than some of these other things.
 
      00:06:21.600  I think yeah, when I'm going to refer to long termism, that's kind of the viewpoint where
 
      
      00:06:27.000  So this is sort of the moral circle expansion viewpoint where you've taken like moral progress
 
      00:06:33.480  that we've seen over time and sort of expanding spheres of concern that we've first given
 
      00:06:40.760  to yeah, the poorest people and then to animals.
 
      00:06:42.880  And this is just sort of arguing that a natural extension of this would be towards people
 
      00:06:46.560  in time, which is captured by the, what did you call it, Finn, in your piece, the moral
 
      
      
      
      
      
      
      
      00:07:01.560  Well, let's talk about the moral circle, which Ben mentioned, which is this image of the scope
 
      
      00:07:13.520  This is originally Peter Singer talking about it.
 
      00:07:16.520  His idea is that that circle, very roughly speaking, has expanded over time.
 
      00:07:22.400  And there's also a kind of normative claim there, which is that it should expand.
 
      00:07:26.040  That's a good thing to take seriously the interest of more and more moral stakeholders.
 
      00:07:33.880  And so, yeah, one 'in' to long-termism is to think that long-termism is just kind
 
      
      00:07:44.760  But a kind of, like, silly but I guess fun way of doing it is to think about the circle
 
      
      
      00:07:53.840  And then you think about that again, and you think, well, you know, one thing that not
 
      00:07:57.600  just long term is like to emphasize, but one thing that just seems pretty clear is that
 
      00:08:01.520  the future could be more valuable than the present in terms of just its size and also
 
      
      00:08:09.240  So would it be a cylinder or could it be like a cone?
 
      00:08:11.880  Yeah, I don't think it's like a very, very serious analogy, but it's kind of fun to think
 
      
      00:08:18.840  And so perhaps I can explain why I'm so worried about this as a way to start kindling some
 
      
      00:08:28.480  So, so yeah, like I think every time I've talked about this on the podcast, without fail, I
 
      00:08:33.680  always start by saying that like I love EA, and I'm not going to break tradition because
 
      00:08:38.400  I love EA a lot, particularly because of like all of the super important work that it's
 
      00:08:46.320  doing in, say, like fighting global poverty, like you had mentioned, the Against Malaria
 
      00:08:51.400  Foundation or deworming charities or like unconditional cash transfers and animal welfare.
 
      00:08:56.880  Like these are super important issues, which EA has not only talked about, but actually
 
      
      
      00:09:08.760  But like to any listeners out there, if they care about these issues, then I think that
 
      00:09:13.360  they should be incredibly worried about long-termism because I think long-termism swallows all of
 
      
      00:09:18.720  It's just, it's this argument that demolishes every other form of charitable giving and kind
 
      
      00:09:28.000  So the, all the arguments from within the space of long-termism roughly take this form.
 
      00:09:33.360  First you start with some dystopian nightmarish sci-fi scenario.
 
      00:09:38.480  So for example, like Black Mirror, the one where Jon Hamm imprisons people in like an
 
      
      
      00:09:49.880  And then you say, listen, I know that this is unrealistic and I know that there's like
 
      00:09:53.520  an infinitesimally small probability of this scenario happening.
 
      00:09:57.400  But if it does, if it did, the expected amount of suffering it would cause would be infinite.
 
      00:10:04.280  And so you start with some like tiny probability of some crazy scenario happening, like ten
 
      
      00:10:09.360  And then you multiply it by the utility of it not happening or the disutility of it happening.
 
      
      00:10:18.440  And then you say, oh my God, in expectation, the most important thing we could be doing
 
      00:10:25.320  right now is preventing Jon Hamm from imprisoning us in an Alexa.
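[The arithmetic behind the argument form described here can be made concrete with a toy calculation. Every number below is an invented placeholder for illustration, not a figure anyone in the conversation endorses.]

```python
# Toy illustration of the expected-value argument form described above.
# All numbers are made up purely for illustration.

p_scenario = 1e-15   # "infinitesimally small" probability of the sci-fi scenario
disutility = 1e30    # astronomically large suffering if it were to happen

ev_sci_fi = p_scenario * disutility   # 1e15 in expectation

# A well-evidenced near-term intervention, for comparison:
p_success = 0.9      # chance the intervention works
benefit = 1e6        # benefit if it works (same made-up units)

ev_near_term = p_success * benefit    # 9e5 in expectation

# The tiny-probability, huge-stakes option dominates by about nine orders of
# magnitude, and one can always posit an even larger hypothetical disutility.
print(ev_sci_fi, ev_near_term, ev_sci_fi > ev_near_term)
```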
 
      00:10:30.360  And like, lest our listeners think I'm making this up.
 
      
      00:10:39.120  So I would recommend anyone who thinks I'm like being a little bit silly here to go.
 
      
      
      
      
      
      
      
      
      
      
      00:10:54.320  And if you go to the May 2020 article titled our current list of especially pressing world
 
      
      00:11:01.560  And the top of the list are things like coming up with governance strategies for outer space,
 
      00:11:08.040  whole brain emulation and this thing called S risk.
 
      00:11:10.440  And if you click on S risk, the see more link takes you to a post which starts with an illustrative
 
      
      00:11:21.160  And then it says this actually isn't an S risk because S risks are worse.
 
      
      00:11:28.240  And then if you go further down on the link, like I wish I was making this up.
 
      00:11:36.920  Lowest priorities according to 80,000 hours are things like mental health research, biomedical
 
      00:11:41.920  research and other basic science and increasing access to pain relief in developing countries.
 
      00:11:47.600  So what is happening is that sci-fi scenarios are taking precedence over literally reducing
 
      00:11:56.520  the amount of pain and suffering for people alive right now in Africa.
 
      
      00:12:05.320  So when I say I'm worried about long termism, specifically what I'm referring to is arguments
 
      
      00:12:10.880  Dystopian sci-fi scenario multiplied by tiny probability of it happening multiplied by
 
      00:12:15.920  a large expected value if it did happen because this style of argumentation can swallow everything
 
      00:12:23.320  that you both listed as being important and the stuff that people talk about on your
 
      
      00:12:28.040  So if you care about animal welfare, if you care about the poor as I do, then you should
 
      00:12:33.240  be very worried about the popularity of this kind of argumentation.
 
      
      00:12:40.080  And this is why I'm really excited to talk with both of you because it is starting to
 
      00:12:46.800  devour 80,000 hours and it's going to continue to move charity by charity, just striking
 
      00:12:53.920  them off the list until we are all just thinking about Black Mirror.
 
      
      00:13:00.520  I definitely agree, actually, with a lot of what you say.
 
      00:13:03.520  And I think one important thing to note as well is that I by no means have a firm opinion.

      00:13:09.200  I think it's a super important thing that you two brought up, and congrats as well

      00:13:13.320  on engaging in that conversation and replying to the tons of comments you guys got.
 
      
      00:13:20.680  But yeah, I think it's a valuable point you raise.
 
      00:13:23.960  There are presumably some people who are really bought into long-termism and have decided
 
      00:13:28.040  that the most impactful thing to do right now is to go through and downvote every single
 
      
      
      00:13:37.760  So whoever's sticking around and studying this conversation, I agree, it's a worthwhile one
 
      
      
      00:13:47.280  One thing I want to ask then on that point you bring about there, and this is something
 
      00:13:50.360  I'm not sure about myself though, is let's say that for whatever reason, we take this
 
      00:13:55.920  we need to stop the Jon Hamm scenario as like the most important priority thing.
 
      00:14:00.280  Like what do you see the actions to actually stopping that being and like how would that
 
      
      00:14:06.160  I think one of the issues that I have is, and I'm not sure about this, but even if you
 
      00:14:10.300  think about these kind of long-term goals that might feel really abstract and wrong,
 
      00:14:14.920  what tangible actions would that cause that you would feel are, yeah, hurtful to the world?
 
      00:14:22.240  Or is it mostly like an opportunity cost kind of thing for you?
 
      
      00:14:25.840  So you can imagine, just look at the 80,000 hours list of priorities and then flash that
 
      00:14:31.360  against, say, the list of priorities on GiveWell or on Open Philanthropy and just look

      00:14:37.040  at what they prioritize, which is, say, poverty alleviation and fighting disease and suffering
 
      
      00:14:46.320  And just imagine every single one of those gets replaced with Black Mirror Episode
 
      00:14:50.480  1, Black Mirror Episode 2, Black Mirror Episode 3.
 
      00:14:53.320  And we're talking what hundreds of millions of dollars that flow through EA, and the concrete
 
      00:14:59.280  harm is simply that, this, like, I like the comparison to feminism that, Finn, you raised
 
      
      00:15:12.360  And second wave EA is- What are you saying about second wave?
 
      00:15:15.720  No, I'm a huge fan of second wave feminism of course.
 
      00:15:19.320  But this idea of movements coming in waves is the thing which I'm highlighting.
 
      00:15:26.320  And a huge fan of second wave feminism obviously, less of a fan of third wave feminism.
 
      00:15:30.920  And you can see the clash between a lot of the views held by third wave feminists as
 
      00:15:37.400  trying to undo a lot of the progress made by second wave feminists.
 
      00:15:41.000  Not to get into that conversation too much, but that is-
 
      
      00:15:47.160  I think that is kind of what's happening right here with long termism.
 
      00:15:50.320  There's this new wave which is undoing everything which MacAskill had built in the first wave.
 
      00:16:00.520  Because like you said, if we expand our moral circle, now we are weighing the well-being
 
      00:16:06.440  of people who are alive and suffering right now against the well-being of an infinite
 
      
      00:16:13.680  And because this style of argumentation is taken so seriously within this community, nothing
 
      
      
      00:16:25.360  And so the worry that I have is precisely the opposite love that I had for EA in the
 
      
      00:16:34.400  I'm worried about losing all the things that EA has built.
 
      00:16:37.240  Can I understand your question as asking something slightly more subtle which is even
 
      00:16:45.280  if the goals that long termism aspires to or like the objectives of reducing some crazy
 
      
      00:16:57.760  It could be the case that in trying to meet this objective, the actual day-to-day operations
 
      00:17:04.440  that you would do so would be the same as some other objective, like just trying to
 
      00:17:09.080  make the world happier and healthier for example.
 
      
      00:17:14.240  I think a good way to kind of distinguish from this is I think it's kind of something you
 
      
      00:17:22.400  There is like a scenario where you take strong long termism so seriously that you're just
 
      00:17:27.120  willing to cause any amount of harm for the next thousand years in the hope that that
 
      00:17:32.280  will create some even larger astronomical value in the future.
 
      00:17:36.480  And then because time goes on in a thousand years time you're willing to make the exact
 
      00:17:39.800  same trade off and that just causes us in a really bad spiral thing.
 
      00:17:43.640  And I think that's one thing that's interesting but I feel that's very different to the argument
 
      00:17:48.280  of long termism is just useless and these things are never going to materialize and it's just
 
      00:17:52.360  funding research that no one's going to read and it's kind of like getting rid of that
 
      
      00:17:57.000  It's kind of, you might as well have burnt it and stuff, and that just feels like two qualitatively

      00:18:02.880  different arguments that we can both have, right, but that was just what I was kind
 
      
      00:18:09.640  If you see this, like, long-termism in and of itself, right, ironically enough, as an S risk

      00:18:14.040  where we become so obsessed with this thing that we kind of bring dystopia upon ourselves
 
      
      00:18:20.800  Yeah, I was just kind of curious where you stand on that.
 
      
      
      00:18:26.520  I think that it is not just silly academics writing publications that no one's going to
 
      
      00:18:32.240  I think it is a philosophy that's turning into an ideology frankly and I think that if it
 
      00:18:38.520  was relegated to obscure academic journals then I wouldn't really care.
 
      00:18:43.760  But the fact that it is starting to take over things that I do care about is why I think
 
      00:18:50.040  that yes it is like an S risk in and of itself.
 
      
      00:18:54.240  So I think it would be useful here just to draw a distinction to make sure we're not
 
      
      00:19:00.600  Yes, Vaden, I take it you are reacting in everything you've said so far to what gets
 
      00:19:07.800  called strong long termism which does what it says on the tin.
 
      00:19:12.520  It's the most kind of radical working out of this kind of view.
 
      
      00:19:20.400  Well I think there is a more expansive and more accommodating kind of long termism which
 
      00:19:28.160  doesn't make these kind of absolute claims which you take it to be making.
 
      00:19:32.400  It's not saying let's go and steal money in like significant amounts from these other
 
      
      00:19:37.720  What it is saying is look at the margin we found this thing that seems to matter a great
 
      00:19:43.360  deal and crucially almost no one is making a targeted effort in this area.
 
      00:19:50.600  So that gives the case for like moving money right now into kind of exploring what we can
 
      
      00:19:59.440  But there's no claim there about like the absolute amounts.
 
      
      00:20:08.640  And what I want to kind of highlight is this worry that you respond to the most extreme
 
      00:20:15.360  things people have said and then take that to stand for what everyone is saying under
 
      
      00:20:22.800  I'll just read out like the strong long termism definition from the paper.
 
      
      
      00:20:32.480  We have an axiological and a deontic version which I think is jargony and not hugely relevant.
 
      00:20:41.680  So the axiological version is just a claim about what is best to do.
 
      00:20:48.480  Axiology is just like talking about what things are good or bad.
 
      00:20:53.720  And here it is, they say: in a wide class of decision situations, the option that is ex ante

      00:20:58.600  best is contained in a fairly small subset of options whose ex ante effects on the very
 
      
      00:21:07.000  They're actually impressively vague even for like their attempt at making the most
 
      
      
      00:21:15.240  But what it does say is that what the example they give is like someone is thinking where
 
      00:21:20.440  to donate like a bunch of money and they want to do the most good possible.
 
      00:21:25.320  And they are saying right there the best thing to do will be to give it to some long termist
 
      00:21:30.400  cause which is going to look very different from the kind of traditional cause areas.
 
      
      00:21:36.880  And yeah, just to underline what I'm trying to, like, highlight here is that you don't have to
 
      00:21:41.160  buy into that strong claim in order to be a long termist.
 
      00:21:44.440  You might just think, hey look we've found this thing called long termism.
 
      
      00:21:49.440  So well done us let's also care about this in addition to all these other things and
 
      00:21:55.400  let's not worry about the absolute kind of proportion of efforts right now because frankly
 
      
      00:22:01.200  May I ask, do you buy into the strong long-termism claim, and why or why not?
 
      00:22:07.280  I'm agnostic, because, it's a bit of a cop-out, I guess.
 
      00:22:14.520  Let's see, yeah, can I give a more interesting answer than that?
 
      00:22:18.760  My hunch is that you find something deeply wrong with that claim and hence your agnosticism.
 
      00:22:25.480  And I'm trying to tease it out because I think we'll have common ground when you explain
 
      
      00:22:31.720  Here's what I find most worrying and what I think you are both absolutely right to
 
      
      00:22:37.520  This is pretty much the first like serious shot at writing a piece of academic philosophy
 
      00:22:45.240  that is grappling with long termism directly or at least is one of the very first few.
 
      00:22:53.240  Now one of the broader aims for people who are brought into long termism is to make this
 
      00:22:57.320  more than just a kind of niche wonky philosophical position.
 
      00:23:02.320  The goal is to make it you know like a vibrant movement that encompasses like non academics
 
      00:23:08.160  and you know the comparison goes back to other isms like feminism where you don't have
 
      00:23:13.160  to read like analytic philosophy about feminism in order to call yourself a feminist.
 
      00:23:18.680  Now if that's the goal what is the best way of doing that probably not writing this like
 
      00:23:25.800  really controversial really easy to misunderstand piece as like the first kind of public facing
 
      
      00:23:36.280  I don't want to like guess the reasons why Will and Hilary wrote it but I don't think
 
      00:23:43.360  it's too unclear like that's the game you play in academia and certainly in philosophy
 
      00:23:50.040  is you try writing about the most kind of extreme claims or the most out there claims
 
      00:23:58.400  that you can defend in good faith and that's how you generate you know criticism you get
 
      00:24:02.760  buzz and you get citations and when you're not like potentially at the start of a significant
 
      00:24:10.000  moral movement then that's like fair game but where you are it's just like such an
 
      00:24:15.600  unforced error so that's a worry but that doesn't really answer your question because
 
      00:24:20.000  the question was is it actually true that's going to be you know like part of the conversation
 
      
      00:24:27.960  I think one thing to say is that it is it's appropriate and actually probably probably
 
      00:24:33.160  right to be uncertain about these things, to be like morally uncertain in general, that's
 
      
      00:24:45.040  No place to say anything especially kind of authoritative right but it seems like it could
 
      
      00:24:55.680  The way that I kind of like look at it and this kind of like flips the table to you guys
 
      00:24:59.800  but like when you hear about like the like very simple argument that future generations
 
      00:25:05.560  matter at least to some degree and it's probably likely that at the moment we're taking that
 
      00:25:10.960  far less seriously than we should be and that probably has some conclusions for what EA
 
      00:25:16.460  should be doing and what we should be prioritizing that feels really strong and I'd be interested
 
      00:25:21.240  to hear what you guys think about that or like how deeply you disagree with that statement
 
      00:25:26.680  and I definitely don't have like any background in philosophy so I'm completely clueless as
 
      00:25:31.600  to that but that makes like a lot of intuitive sense to me and when I think about it like
 
      00:25:36.240  on the margin I don't think it's bad having people talking and exploring this idea even
 
      00:25:41.840  if it's kind of just nudging the needle to something that's more sensible.
 
      00:25:44.720  I think it is important at some points to just think that EA is just in of itself still
 
      00:25:49.080  a very small community you know when you look at the whole globe and things like GiveWell

      00:25:53.600  have really taken off like I know lots of people who have heard about GiveWell and have

      00:25:57.160  never heard about effective altruism and haven't heard about long-termism, thank you
 
      
      00:26:01.280  Yeah yeah right and like I think it is kind of amazing if you you know give EA some credit
 
      00:26:06.960  as well for that in like promoting these things I think that's really awesome but then that
 
      00:26:10.720  kind of makes you think okay well what is EA's job then I don't think after you know
 
      00:26:14.360  they reach some critical mass I think EA's job is to look at the next thing to kind of
 
      00:26:18.120  explore and to promote and I think long termism just isn't really taken seriously at all just
 
      00:26:22.600  because it is such a new idea that I don't think it's bad having people explore that
 
      00:26:26.520  even if we're not quite at something that has like a great conclusion yet or has something

      00:26:31.920  like super reliable, so like what kind of actions we would take from it.
 
      
      00:26:37.600  Nice yeah so I think I agree with the point you're just making but part of exploring a
 
      00:26:42.400  new idea is taking it to its extremes and then getting pushback right which is exactly
 
      00:26:47.040  what we're trying to do is like criticize the idea.
 
      00:26:49.520  I'd also just like to make a point with the strong long termism because like my reading
 
      00:26:53.840  of it is slightly different my reading of the strong long termism paper is that it's
 
      00:26:57.400  an inevitable conclusion from taking the ideas of long termism seriously. So Vaden
 
      00:27:03.520  made I think what is an apt analogy with religion in the last episode which I'm just gonna
 
      00:27:07.960  ruthlessly paraphrase, which is, you know, say you have this idea that there, like,
 
      00:27:14.000  might be an omniscient god in the sky then if you actually follow the logic of that you
 
      00:27:20.880  get somewhere like fundamentalism so you once you adopt certain premises you end up
 
      00:27:26.000  at certain conclusions and I think that's exactly what's going on in the case of long
 
      00:27:29.600  termism so as soon as you allow sort of this expected value reasoning to like sneak in
 
      00:27:34.560  the door every conversation all your reasoning is now going to be swamped by the potential
 
      00:27:40.920  vastness of the future and you have to end up at something like long termism because
 
      00:27:45.880  once you, like, adopt this kind of reasoning, now you're weighing a finite well-being in
 
      00:27:50.680  the present against you know potentially infinite or near infinite well being in the future
 
      00:27:55.680  and so, by near infinite I mean exceptionally, exceptionally large, a googol, yeah, although that's

      00:28:02.040  an important difference. If I can just, like, briefly react to that, I think it's worth pointing
 
      00:28:07.640  out that neither Will nor Hilary ever claim that they're like fully sold, they're kind
 
      00:28:14.120  of raising this as an interesting and plausible idea and the reason they don't do that is because
 
      00:28:19.080  it's possible to be uncertain about something even when the argument looks plausible on the
 
      00:28:24.600  face of it. Where does uncertainty come in? Well, it can come in at the kind of normative

      00:28:29.160  level, where, like, you have this problem with using the expected value framework in certain

      00:28:35.440  contexts, you think it breaks down, and that feels like something worth worrying about. And the
 
      00:28:40.480  uncertainty can also enter in in the empirical questions because the strong long termist
 
      00:28:47.440  conclusion only pops out once you actually consider how the world is you need to be sure
 
      00:28:52.880  that there are things we can actually do to reliably influence how the long run future
 
      00:28:57.440  goes and that's another place where you might just think there are no ways we can improve
 
      00:29:00.640  the long run future and I think the state of play now it's fair to say is that maybe
 
      00:29:05.360  with the exception of certain kinds of existential risk how we might possibly like reliably
 
      00:29:12.640  influence how things go like a thousand years from now who knows so it's yeah I think it's
 
      00:29:20.400  appropriate to be uncertain even if you are like half sympathetic. >> So, to respond to something
 
      00:29:26.800  that Luca had asked what what do we think about the argument that people in the long-term future
 
      00:29:34.160  their voices aren't represented and we need to do something now to help right I think that's
 
      00:29:38.800  that's a totally reasonable argument and I could imagine a say a variation of long-termism which
 
      00:29:44.080  says we care about the long-term future but we also recognize we cannot make these sci-fi
 
      00:29:48.480  arguments because they're destructive you can have both of these thoughts in your head
 
      00:29:53.280  simultaneously and that could lead you to certain conclusions like for example we need to work on
 
      00:29:59.200  say reducing the likelihood of nuclear war but we're not going to take this out of our budget that
 
      00:30:05.200  helps poverty alleviation because one thing that we know is that the only reason that we are here
 
      00:30:12.080  doing as well as we're doing in the 21st century is because previous generations had worked quite
 
      00:30:18.240  hard to improve the world in an incremental way to pass it on to their the following generation and
 
      00:30:24.000  so most of the arguments that start the conversation around long-termism about wanting to ensure the
 
      00:30:31.200  long-run future as well I'm totally in favor of it it's just that they funnel in to use your cone
 
      00:30:36.160  analogy, they funnel into this 'we need to work on S-Risks', and so that's the only thing
 
      00:30:41.520  that I'm saying should not be done. So, on the infinite or very large point, the reason
 
      00:30:49.040  why the word infinite is useful is because for any big number that you give I can give a bigger
 
      00:30:54.960  number there's no end to this game there's no final number which we can all agree is the biggest
 
      00:31:01.600  and so whatever move you make to try to encourage me to work on X-Risks I can play the same move
 
      00:31:08.800  back and say well we should work on S-Risks which is exactly what what's happening so
 
      00:31:15.040  my main point of criticism is in this style of argument. >> And you think it's possible to
 
      00:31:21.680  excise that style of argument from the conversation of long-termism and not make it this zero sum
 
      00:31:26.400  either long-term future or short-term future and raise additional money to say prevent
 
      00:31:32.880  bioterrorism and nuclear weapons I'm totally in favor of all of that of course that's great and
 
      00:31:38.800  and AI safety as well of course but we shouldn't take it out of the budget to help the poor
 
      00:31:45.440  or help people who are alive right now and that's the thing that I'm highlighting
 
      00:31:49.440  this feels like progress to me so something Luca mentioned earlier is that buying into the
 
      00:31:55.920  whole expected value Bayesian framework is neither necessary actually nor sufficient
 
      00:32:02.000  to get long-termism and there is a version of long-termism which just seems uncontroversially
 
      00:32:09.280  good and that's the version you just talked about like you don't need lots of massive numbers
 
      00:32:14.880  and calculations you just need to appreciate that the future could potentially be very large and
 
      00:32:21.440  full of really great valuable things and you need to recognize that like the world just does
 
      00:32:27.360  nothing or like next to nothing you know you hear the statistics like the biological weapons
 
      00:32:31.360  convention which is this kind of UN body for enforcing prohibitions on certain kinds of
 
      00:32:39.840  experimentation with bioweapons it has the funding of like an average McDonald's so at the margin
 
      00:32:46.560  let's just like do a little bit more and that seems right that also seems like it deserves to
 
      00:32:54.640  be called long-termism and if that is true then am I hearing that you at least think that a kind
 
      00:33:00.480  of semi-skimmed version of long-termism is fine we've always said that I think we say this repeatedly
 
      00:33:06.960  that there's nothing wrong with wanting to protect the future we just have to realize that
 
      00:33:15.200  comparing the suffering of people who don't yet exist to people, say, in Nigeria or in
 
      00:33:22.960  Botswana is just it's an illegitimate move we can't do that but there's nothing wrong with
 
      00:33:29.840  with wanting to solve problems today precisely because we care about our descendants and I
 
      00:33:36.640  disagree slightly that the world doesn't care about the long-term future it does it just doesn't
 
      00:33:40.960  use this word what do you think like the founding fathers were thinking about when they set up the
 
      00:33:46.240  United States Constitution they're considering how do we best ensure that a whole bunch of people
 
      00:33:51.040  can live harmoniously together in the long, the long term. Yeah, I phrased that, I phrased that

      00:33:55.840  pretty badly. I'd say on that, like, Vaden, I agree with what you're saying, and I do think people care
 
      00:34:01.200  about the future but people also care right about like people in Africa as well right we send
 
      00:34:06.080  0.8% of GDP like as foreign aid but that doesn't mean that's enough right I think the point of
 
      00:34:12.080  long-termism is is that we're not we might care some amount about future generations but we're
 
      00:34:17.120  not caring enough and that means more resources need to go to those cause areas and I think it is
 
      00:34:21.760  like it's something we definitely need to be aware of is that those resources are going to come from
 
      00:34:26.640  somewhere right and it is a very tough question to ask and I think it's also something that EA
 
      00:34:31.760  generally is yeah I mean this is like a generalization but like I think it is something that that
 
      00:34:36.960  people in EA should think about more is that when you are spending money on anything that is also
 
      00:34:41.120  money that's coming from somewhere and yeah but like even so I think that the point of long-termism
 
      00:34:47.680  is that it's not enough that we should be spending more resources and those are going to come from
 
      00:34:51.600  somewhere and then I guess the question that you're worried about is that it comes at the harm of
 
      00:34:56.320  really important short-term interventions yeah what what what would enough look like and how would
 
      00:35:01.920  you know when you're there yeah I've got no clue because it will always be more right that's my
 
      00:35:08.640  main point it'll always just be more and more and more because there is no way to know what is
 
      00:35:14.960  a billion years from now is going to look like. There's a, there's a wall of, not uncertainty, I
 
      00:35:20.320  hate the use of this word uncertainty because uncertainty and knowledge are different things
 
      00:35:25.120  but there's a wall of unknowability that we cannot puncture through right and so there's a huge
 
      00:35:33.440  asymmetry in what we can know about, say, reducing the incidence of malaria via bed net distribution

      00:35:43.840  and reducing the incidence of being locked in Jon Hamm's Alexa via what, like, if this wasn't such a
 
      00:35:51.280  like a real example it would be ludicrous but it's prioritized more highly than increasing access to
 
      00:35:58.560  pain relief in developing countries on 80,000 hours and so on the strong long-termism point like I
 
      00:36:04.720  just want to read a quote from 80,000 hours website which so they give a list of six things
 
      00:36:13.360  which is at the very tail end of their list of priorities so the six things are the following
 
      00:36:18.320  mental health research biomedical research and other basic science increasing access to pain
 
      00:36:22.320  relief in developing countries other risks from climate change so climate change is now slipped
 
      00:36:26.240  away from long-termist cause to something that they don't prioritize as much reducing smoking
 
      00:36:31.760  in developing countries these are five things which they say we don't prioritize as highly
 
      00:36:37.680  because they seem less likely to substantially impact the very long-run future
 
      00:36:42.560  so I'm not the one making this equivalence this is what is already happening and it's
 
      00:36:48.480  happening at the expense of human beings lives and it's because there is always going to be a call
 
      00:36:55.520  for more more more more because of the infinite value of the future which one can always fall
 
      00:37:03.040  back on so if we can excise this style of argumentation from the long-termist discussion and prevent
 
      00:37:10.000  important causes like preventing pain in developing countries for Christ's sake if we can if we can
 
      00:37:17.760  prevent important causes from being just devoured by this style then I'm completely fine with long
 
      00:37:24.000  termism but from what I can tell long-termism is based on arguments from Nick Bostrom that are
 
      00:37:29.120  just all about this stuff. And if we can come to a place where long-termism doesn't represent this
 
      00:37:35.280  kind of argument I'm happy I'm happy and we can think and talk about the future for as long as we
 
      00:37:40.640  want we can talk about the resources that should be allocated there as long as we just disallow this
 
      00:37:45.760  this kind of reasoning. Yeah, okay, if I can like try and pick up on two points there,
 
      00:37:51.280  one is I agree that I'm super skeptical that we have any idea what's going to happen like even in
 
      00:37:56.000  like a hundred years I'm personally like super skeptical of that and super skeptical of any
 
      00:38:00.560  claims that rely on that I think if you just look at like what the world was like a hundred or two
 
      00:38:04.880  hundred years ago and you put yourself in those shoes and you try to predict in any way what's
 
      00:38:08.800  going to happen I think that should just be really humbling just to say like I'm with you on that
 
      00:38:13.200  and for me most long-termist causes that I believe in don't rely on that argument right you can make
 
      00:38:19.520  a case for building rigorous institutional design or fighting against corruption and those other
 
      00:38:24.640  like more broad terms that are justified by long-termism because they might come useful at some point
 
      00:38:30.720  even without being specific right of what happens in a hundred or two hundred years time
 
      00:38:34.240  the other point and I think this is like an important thing to kind of get into and might be like a
 
      00:38:39.200  bit more where we like tangibly disagree on things is that I don't think equating future utility
 
      00:38:46.880  and current utility with zero discount means that there's like an infinite amount of value
 
      00:38:52.480  in the future so the way that I kind of come from this is like with a little bit of like the economics
 
      00:38:58.080  kind of take on this, and there you have something called social time preferences, which is kind of
 
      00:39:03.840  composed of three variables so the first one is just the pure time preference so to speak
 
      00:39:08.720  which is what we're kind of talking about with okay you know in the very abstract sense
 
      00:39:13.360  is one life in the future worth one life today but then there's also other things you need to
 
      00:39:18.720  consider right which is that generally we've seen the future getting richer and that should count
 
      00:39:22.800  for something right if we like to hope that things are going to get better then surely there is a
 
      00:39:27.280  case to say that okay people are going to be much poorer today than the future so isn't there a case
 
      00:39:31.760  to help people more today and then the other thing is like a more wonky thing called like inequality
 
      00:39:37.760  aversion but like basically what you get with this kind of growth rate to think about and that's
 
      00:39:42.800  just like one thing that economists kind of like to focus on, you can bring in other things like uncertainty
 
      00:39:46.240  and stuff is that you can still make the case that future lives are worth just as much but because
 
      00:39:50.640  of some external factors or assumptions you can still get you know your practical discount rate
 
      00:39:56.880  to decrease in a way that it doesn't go to infinity right and governments have to make these
 
      00:40:01.200  decisions all the time every time the government thinks about whether they should build a nuclear
 
      00:40:04.480  power plant which will have effects for a hundred years or when they're debating about climate change
 
      00:40:08.640  and the social cost of carbon they're already like making these assumptions and they're talking
 
      00:40:12.160  about these things and you know you don't get this like infinity problem right governments are still
 
      00:40:16.720  able to come up with some decisions, you might criticize those decisions and you might
 
      00:40:19.840  disagree with what things they kind of use but you can still have a zero time preference and not
 
      00:40:25.280  assume there's like this infinity of value in the future that you're willing to make any
 
      00:40:29.040  sacrifices for. And I think that in some way makes me like a bit more confident about taking a

      00:40:34.480  more extreme long-termist position and not have to worry about this thing you kind of

      00:40:38.800  described.
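[The social discount rate Luca describes can be written down concretely as the standard Ramsey-style formula from economics. The sketch below is illustrative only; the parameter values are assumptions, not figures quoted in the conversation.]

```python
# Ramsey-style social discount rate: r = rho + eta * g
#   rho: pure time preference, eta: inequality aversion, g: consumption growth.
# The values below are illustrative assumptions, not anyone's official estimates.

def social_discount_rate(rho, eta, g):
    return rho + eta * g

rho = 0.0   # zero pure time preference: a future life counts as much as a present one
eta = 1.5   # inequality aversion: richer future people get less marginal weight
g = 0.02    # assumed long-run growth in consumption

r = social_discount_rate(rho, eta, g)   # 0.03, i.e. 3% per year

# Even with rho = 0, the discounted value of a constant yearly benefit stays
# finite: the sum of (1 + r)**-t for t = 1, 2, ... converges to 1 / r.
total = sum((1 + r) ** -t for t in range(1, 10_000))
print(round(r, 3), round(total, 1))   # 0.03, ~33.3
```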
 
      00:40:44.320  >> Before jumping into the time preference issue, can I just pick up on your first point, Luca, which is, you said that long-termism enables us to justify certain interventions like more
 
      00:40:51.680  robust institutional design and work on democracy or something but I'm wondering if
 
      00:40:58.000  either of you have an example of a problem that is worked on only under the guise can only be
 
      00:41:07.120  worked on and justified by long-termist reasoning that can't be justified by other sorts of moral
 
      00:41:12.880  reasoning, that is not sort of a sci-fi scenario. So, Finn, I completely agree with you that like
 
      00:41:17.920  long-termism is this family of views right but the way I'm viewing it right now is like it's a family
 
      00:41:23.920  of views all based on certain styles of reasoning and the conclusions the valid conclusions that
 
      00:41:30.320  you get from this like improving institutional decision-making or decreasing a nuclear armament
 
      00:41:37.040  these are all problems that can be worked on under the guise of like basically any other
 
      00:41:42.560  problem-solving mentality ever and then it's only sort of the ludicrous scenarios that you get
 
      00:41:49.200  because of the long-termism, and I think these are actually, once you start taking the reasoning
 
      00:41:52.640  very seriously like I said before sort of inevitable and so that's the problem I have with it not
 
      00:41:56.880  necessarily that exactly what it's justifying right now but you know the kinds of reasonings
 
      00:42:01.360  that can undergird a lot of these. The answer is changing the QWERTY keyboard layout so

      00:42:07.280  it's more sensible, so that our descendants can enjoy it. And the serious answer is, name an
 
      00:42:13.360  intervention that can only be justified with effective altruist reasoning in general and that
 
      00:42:19.760  isn't just obviously good, because clearly unconditional cash transfers to the extreme
 
      00:42:25.520  poor or handing out insecticide treated bed nets is like an obviously good thing so I would struggle
 
      00:42:31.600  to think of something to answer that question even in this kind of general EA case I wouldn't struggle
 
      00:42:35.760  at all. Deworming, like, this is how classic EA cut its teeth, right, is that you can't just give
 
      00:42:44.000  to charities that make you feel good you have to look at the data so this is like to your point of
 
      00:42:50.560  like what could you justify with EA reasoning that you couldn't justify with other reasoning
 
      00:42:55.200  everything that EA does is all about the data and this is what like why they were so powerful and
 
      00:43:00.560  important I think effective altruism is also a family of views and it is home to an awful lot
 
      00:43:07.440  of ideas, so it's difficult to say anything too sweeping about it. But one of the interesting

      00:43:12.560  things it does is tell you which interventions and cause areas to prioritize and what to do about
 
      00:43:19.360  that rather than coming up with entirely new interventions although it does do that occasionally
 
      00:43:25.760  and I think something similar is going on in the case of long-termism so the criticism that you
 
      00:43:30.960  race is that look it's either trivial in that we already knew these things matter like clearly we
 
      00:43:36.480  don't want nuclear war so it would be good to reduce that risk or it's just false or like dangerous
 
      00:43:42.560  if you're using the kind of expected value framework and you get these infinities and you start worrying
 
      00:43:47.600  about Jon Hamm taking over the world and I think there is some course to be steered between
 
      00:43:54.960  triviality and ridiculous falsehood where long-termism is saying something substantial and interesting
 
      00:44:03.520  not in terms of coming up with entirely new interventions although maybe there are some
 
      00:44:08.880  but in terms of getting a better grip on what kind of things we should be prioritizing now
 
      00:44:15.520  and look maybe one of these examples is like funding that biological weapons convention as a
 
      00:44:20.960  step. Yeah, I think Finn hit the nail on the head there, where, like, deworming happened right before the EA
 
      00:44:26.880  community existed it's not something new and I think there's like a bit of conflation going on
 
      00:44:30.880  between like causes and choices right everybody can care about of course you can you know as we
 
      00:44:35.760  said before everybody cares about the future to some degree everybody cares about you know people
 
      00:44:40.720  in in developing countries to some degree and animals to some degree but the question is like
 
      00:44:44.240  how much weight do you put on these things and how does that affect your choices and in the same way
 
      00:44:48.880  that the choice that EA highlighted when it came to deworming is do you spend on a very inefficient
 
      00:44:54.800  charity at home or do you spend on a very efficient evidence-based charity abroad and there's a much
 
      00:45:00.080  stronger case we're making there through that EA reasoning. I think what long-termism is, is it
 
      00:45:03.520  takes a different choice which is the question of do we spend money today or do we invest in the
 
      00:45:09.120  future and there the case then is there are like very strong reasons to believe that you should
 
      00:45:14.240  be investing here but it's that choice right that's that's the difference yeah one thing actually one
 
      00:45:18.320  thing to say here is that one can be critical of long-termism but still support some of the
 
      00:45:26.320  interventions that it brings about so again like I think a good parallel here is religiosity
 
      00:45:31.200  because one can be supportive of, for example, the idea of, like, tithing

      00:45:37.600  in religious circles, and so, like, giving 10 percent of your income to charity for

      00:45:43.840  example. So one can think that that's a really good idea but think that the reasons that the

      00:45:48.480  religious cite to do so are really bad ideas. So, like, this idea that there's like an almighty
 
      00:45:53.040  creator in the sky who's going to punish you if you don't tithe is a bad reason to tithe even
 
      00:45:57.760  though tithing itself might have positive consequences and be a good idea and so it's just worth separating
 
      00:46:03.360  like the criticism of the style of reasoning, yeah, that long-termists usually use from what they're
 
      00:46:08.880  actually doing day to day that that's a really good example so just like a really
 
      00:46:14.560  brief thing to say is like you get a similar thing right with long-termism as well where you
 
      00:46:18.000  have this like idea of keeping the world or I think it's called like the seven generations of
 
      00:46:22.400  sustainability, I think that's like an idea from a Native American tribe, and you know,

      00:46:28.640  I doubt they were like thinking about things in expected utility or this kind of utilitarian
 
      00:46:32.880  framework but they still kind of reach that same conclusion of of taking future generations
 
      00:46:37.200  much more seriously yeah I think that's a really good example Ben I was just going to
 
      00:46:41.840  highlight a difference between the kinds of arguments that MacAskill and EA were making when

      00:46:47.280  arguing for, say, deworming programs over PlayPumps. That style of reasoning is
 
      00:46:55.200  scientific it's one that says we have our assumptions about the world and then we have data and we're
 
      00:47:02.400  going to collect data to challenge our assumptions because we could be wrong and that was why it was
 
      00:47:07.040  such a powerful argument in favor of reallocation, right. But there is no such
 
      00:47:15.360  argument that it's possible to be made about the future one billion years from now there's no
 
      00:47:20.160  real way to get any sort of data whatsoever and I'm okay with some like I think at one
 
      00:47:28.560  point in the EA forum I highlighted just that there is a methodological error being made
 
      00:47:34.400  equating expected values of the future and expected values of malaria bed nets because in one
 
      00:47:40.000  scenario you have data and another scenario you don't and I would be entirely okay with say
 
      00:47:46.480  people arguing about fund reallocation within the realm of long-termism so do we want to put more
 
      00:47:53.440  money towards S-risk or towards preventing a global totalitarian government regime from enslaving us
 
      00:48:00.560  all or to AI fine we can say in this domain we do not have data and so we can talk about
 
      00:48:07.680  portfolio reallocation within that domain but we cannot cross compare because we're not comparing
 
      00:48:13.840  apples to apples and so for short-term interventions where we have data we can talk about reallocation
 
      00:48:19.200  in that domain as well but it's this cross-contamination of expected value reasoning
 
      00:48:24.880  that allows people to say it's much more important to work to prevent Jon Hamm than it is to work to

      00:48:31.360  prevent people dying from malaria today, and this is because of faulty reasoning, and that's why
 
      00:48:39.360  both Ben and I are highlighting this is a problem but if you if people just return to the way like
 
      00:48:46.400  what we all learned from MacAskill in the first place, reading his brilliant book about the importance
 
      00:48:50.880  of data if we return to that and then add to it some concern for the long-term future fully
 
      00:48:57.600  recognizing the limitations of our tools here then I'm okay with it of course we can talk about
 
      00:49:02.640  a lot of the stuff but we just have to be super careful not to let it distract from poverty
 
      00:49:09.200  alleviation and helping people who are suffering right now yeah let's let's plant the flag here
 
      00:49:14.560  right so it sounds like we've reached a kind of agreement that the things long-termists
 
      00:49:21.520  care about, the kind of sensible things like reducing the risk of all-out nuclear war or
 
      00:49:28.080  bioweapons or something like this these are sensible and to the extent that long-termism is going to
 
      00:49:35.120  make those changes more likely to happen then more power to it what you guys are worried about
 
      00:49:43.040  then, Vaden, is the kind of reasoning or the most popular kind of reasoning that leads to those
 
      00:49:50.160  conclusions and especially when you get so hung up on this kind of inappropriate use of what is
 
      00:49:58.320  just like a formal tool that you start actually taking seriously the kind of ridiculous extreme
 
      00:50:07.200  things it says when you like really push that style of reasoning and that's what you're worried
 
      00:50:11.920  about especially when it becomes that style of reasoning becomes parasitic on the part of EA that
 
      00:50:19.280  was so important during its kind of earlier period which is this emphasis on actual empirical
 
      00:50:28.240  data, because you can't reason in the absence of it. Was that a beautiful synthesis of our views, or at least my

      00:50:35.680  view? Excellent job, that was great. I agreed with an awful lot of that. Beautifully said, yes,
 
      
      00:50:43.200  and here's why you're wrong. Yeah, can I maybe pick up on one thing, just because it links to, like,
 
      00:50:52.320  Vaden's example of deworming which i think might be like a nice segue-ish into this debate about
 
      00:50:57.680  using evidence and stuff and a lot of what you said i'm very sympathetic with i think
 
      00:51:02.320  one important thing to note is that when we're talking about evidence and certainty and stuff
 
      00:51:06.800  and a lot of the criticisms, right, the criticism I think you raised about expected utility and
 
      00:51:12.560  like low probability and stuff, is a more general critique that's not just relevant to

      00:51:17.600  long-termism. So you gave the example of deworming, right? Deworming is very famous in development
 
      00:51:22.560  economics for actually being a very hotly debated study, I think. And GiveWell,
 
      00:51:27.680  their methodology, right, they discount the effect of deworming by 98 to like 99 percent just because
 
      00:51:33.840  of uncertainty about how relevant the results are. So I guess, more particularly, there was
 
      00:51:38.160  basically a study that showed that deworming was really good in this like one very specific
 
      00:51:43.200  circumstances, but a lot of very clever people have raised a lot of concerns about whether that
 
      00:51:48.480  like is actually externally valid to other situations as well which had to do with
 
      00:51:52.320  flooding and some other things, where basically people earned a lot more money but it wasn't
 
      00:51:57.920  really clear why they did. And there is, yeah, a whole tangent to go down
 
      00:52:03.120  too but like basically the point being that even with these short-term interventions you can be
 
      00:52:06.640  really uncertain about whether they actually work. But GiveWell also uses expected value calculus, right,
 
      00:52:13.040  to reach these decisions that have very low probabilities i mean deworming is like a very
 
      00:52:16.960  novel example here but i think is a point to raise that this is like a more more general
 
      00:52:21.360  critique, I think, of using evidence and reasoning, than just long-termism, even
 
      00:52:26.960  though long-termism might be the most extreme example.
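For concreteness, here is a tiny sketch of the kind of adjustment Luca is describing, with placeholder numbers rather than GiveWell's actual figures:

```python
# A rough sketch only: take an RCT effect size and discount it heavily for doubts
# about replicability / external validity. The numbers are invented for illustration.

raw_effect_income_gain = 100.0   # hypothetical income gain per treated child, from the trial
replicability_adjustment = 0.02  # i.e. a 98% discount for doubts the result generalizes

adjusted_effect = raw_effect_income_gain * replicability_adjustment
print(adjusted_effect)  # 2.0 -> the much smaller number that enters the cost-effectiveness model
```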
 
      00:52:31.920  But notice the ability, in that very example, for GiveWell to be refuted, right? So there's a process by which people can go collect
 
      00:52:38.720  more data and refute the reasoning GiveWell uses, right? And this is how academia works: we
 
      00:52:43.680  publish papers and individual papers not to be taken as god's word uh on the on the topic right
 
      00:52:50.080  and so more academics go out and do more studies and over time we start getting a clearer picture
 
      00:52:55.280  and if there was one study that was off, then we course correct, and we're continually guessing at what
 
      00:53:01.200  the world looks like and continually refuting the the worst ideas and getting closer and closer to
 
      00:53:06.960  the truth right and so and the mechanism to do this in the give well case is continually going back
 
      00:53:12.080  to the data looking at it uh more scrupulously going out and getting more data asking more questions
 
      00:53:17.760  criticizing the framework the assumptions etc etc the ability to do this in the long-term
 
      00:53:22.880  ist case is very, very limited, right? So you can only criticize, basically, the use of expected
 
      00:53:29.680  value calculus, but there are very, very few ways in which you can... once you adopt that style of
 
      00:53:34.560  reasoning it's very difficult to start refuting some of the conclusions because like what do you
 
      00:53:39.680  what do you we're just gonna wait a hundred thousand years and see how many future humans there were
 
      00:53:44.880  and then say oh shit we were wrong about that one you know um and so so i just want to highlight
 
      00:53:49.840  the the differences in the ability to actually correct our errors which is i think crucial right
 
      00:53:55.520  yeah that's that's a very valid point there's a few things to say here right one is in life
 
      00:54:00.080  occasionally you are forced to make decisions where there is no possibility of gathering evidence
 
      00:54:05.360  before the fact there's no possibility of you know correcting course and falsifying your guesses
 
      00:54:11.360  but not making a call and not doing anything about whatever situation you face
 
      00:54:16.160  constitutes a decision as well. And I'm not suggesting you're saying this, but a claim to the effect that
 
      00:54:22.400  we should only ever do things when it's possible to gather like good empirical evidence about
 
      00:54:29.360  whether this is a good thing to do, that's not going to work, and sometimes you have to rely on
 
      00:54:35.040  other things like like arguments and reasons as well as data so maybe that's the first thing to say
 
      00:54:40.560  that this is the reason why i i tried to frame it as a methodological error that could be rectified
 
      00:54:48.000  it's about this cross comparison thing i recognize that say working to um start thinking about
 
      00:54:56.960  what governance structures will have to be on Mars when we populate Mars is a useful conversation
 
      00:55:04.240  i'm in favor of of that for sure i'm not saying that one can only think about that which we have
 
      00:55:09.680  immediate data that would be self-refuting because i'm super interested in philosophy and
 
      00:55:14.160  philosophy you typically don't have data to to adjudicate so totally um again it's not about
 
      00:55:20.880  saying we cannot think about the long-term future i'm all in favor of science fiction just as a
 
      00:55:27.280  form of expression and of idea generation it's just we can't then compare to situations where we
 
      00:55:34.080  have data because these are very different phenomena um and especially we can't compare using the
 
      00:55:39.520  same word like probability because the word probability means very different things to different people
 
      00:55:44.720  but they don't realize that. They don't realize that when you talk about Toby Ord's probability, he's
 
      00:55:48.560  just talking about a made-up number and when you talk about the probability associated with bed nets
 
      00:55:52.320  or take maybe a clear example the probability associated with AI takeover compared to the
 
      00:55:57.520  probability associated with a volcanic eruption one comes from frequencies which you can count
 
      00:56:02.880  and the other comes from just belief states, which we all know are completely subject to
 
      00:56:08.000  bias of various different forms so um agree that we can't just exclude all forms of reasoning without
 
      00:56:15.360  data, it's just we have to be careful about this cross comparison.
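To make the contrast concrete, here is a small Python sketch with made-up numbers: a frequency-derived probability comes with a recipe for shrinking its uncertainty (collect more observations), while a bare credence does not.

```python
# Sketch of the distinction being drawn; all numbers are invented for illustration.
import math

def frequency_estimate(successes: int, trials: int):
    """Point estimate plus a rough 95% interval from counted data (normal approximation)."""
    p = successes / trials
    half_width = 1.96 * math.sqrt(p * (1 - p) / trials)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

# A probability estimated from (hypothetical) counts, e.g. malaria cases without bed nets:
p_data, interval = frequency_estimate(successes=120, trials=1000)
print(p_data, interval)   # collecting more data narrows the interval

# A "probability" of AI takeover this century, asserted as a degree of belief:
p_belief = 0.3            # no counting procedure behind it, and no interval that more data can shrink
```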
 
      00:56:20.320  Yeah, no, I agree with that. I think one thing to maybe add there as well is that, right, when we're talking about, I guess, like, what
 
      00:56:26.080  we have evidence and stuff on um we also need to be aware even in like those interventions like
 
      00:56:31.200  deworming or other things about like some of the long-term effects that it might have right because
 
      00:56:35.040  at the moment we're kind of very implicitly assuming that the RCT result is what we get and like nothing
 
      00:56:40.480  happens thereafter right but there are also you know you you can imagine a scenario right where like in
 
      00:56:45.040  in 50 or 60 years that kind of has knock-on effects. Like, I don't know, it might have positive
 
      00:56:51.200  long-term effects that we're not aware of and we're not factoring in and actually giving malaria nets
 
      00:56:54.960  and deworming is even more valuable because it boosts GDP in some way. Or, you know, it lets, um,
 
      00:57:00.160  uh i forget like the the name for it but like this Einstein example right where you just have to
 
      00:57:04.560  think about like all these smart people um you know who have ever lived in the world who just
 
      00:57:08.560  died of malaria and they were never the given the chance right to contribute to i think you know
 
      00:57:13.120  these are also things that just feel really hard to to quantify and like have evidence for in any
 
      00:57:18.000  way but still feel like they should inform our decision-making right to to some degree was was
 
      00:57:23.760  was that a long-termist case for short-term intervention yeah i i don't know i'm kind of
 
      00:57:27.360  exploring it right but yeah i think there is like a case to uh to be made that's just like really
 
      00:57:32.560  consider the long-term effects as well. Yeah, I just want to take a second to echo Vaden's point,
 
      00:57:37.280  which is a point that I think I regret not making in our last two episodes about long-termism,
 
      00:57:43.680  um which i think it's so i think the position i'm espousing is easily conflated with empiricism
 
      00:57:50.080  which I think, Fin, you were rightly pointing out, which is to say we can't only act when we have data,
 
      00:57:56.000  and i completely agree so um and this is where like the role of a good theory and a good explanation
 
      00:58:01.680  comes in in science right so we have to act uh in accordance with like the best theory of something
 
      00:58:07.760  so um we had good reason to adopt like uh relativity for example which predicted black holes and
 
      00:58:13.680  therefore I think we should act in accordance with the idea that the universe has black holes before we'd
 
      00:58:18.400  like gather data about it and this has been super prevalent actually in the covid case which someone
 
      00:58:24.160  like Rob Wiblin has been very good at pointing out, which is like the inability of a lot of people to
 
      00:58:31.280  uh to like adopt certain uh vaccines and approve certain vaccines under the guise of like we don't
 
      00:58:37.040  have enough data um but this is like a misunderstanding of how the scientific process actually works so
 
      00:58:41.360  like, data is used to discriminate between different theories, and viewing us as
 
      00:58:46.800  only able to act when we have good enough data is uh like a misunderstanding of what a theory does in
 
      00:58:51.920  the first place um and so i i do want to separate uh so i'm not making the empiricist claim that we
 
      00:58:57.120  can only act when we have data but what i am saying is that when your reasoning uh lies on probabilities
 
      00:59:04.000  there are different sorts of rules at play, right? So if you're going to, if you're going to
 
      00:59:11.680  start comparing, like Vaden said, the results of RCTs, and saying in expectation we expect this
 
      00:59:18.640  many lives to be saved based on the data and the assumptions we've made this is very different than
 
      00:59:24.080  saying the probability of ai takeover is 30 percent in the next decade or something because
 
      00:59:29.440  there's just completely different processes at work for how we get these numbers um i think these
 
      00:59:33.680  numbers honestly i think we should start writing these numbers differently like one of them we
 
      00:59:36.880  should write with like a little tilde over them or something, because it's very confusing
 
      00:59:40.720  that we're using the same symbols on paper but these are like completely different concepts and
 
      00:59:45.680  so one of them we should just write in red pen or something. And so maybe that might have been a little
 
      00:59:52.080  hard to follow or garbled on my part but i so i'm not saying that we can only act when we have data
 
      00:59:56.240  but i am saying in the in the particular case of probability we have to be very clear that a lot
 
      01:00:00.000  of these numbers have been generated by completely different processes one of them being entirely
 
      01:00:03.520  subjective um and one of them being actually objective and coming from data and therefore
 
      01:00:07.840  able to be corrected that's so that's true and i feel like this is just worth digging into um
 
      01:00:12.960  a lot further, because it feels like a crux. So what do we say here? One point to raise:
 
      01:00:20.320  insofar as there is an important difference between subjective probabilities and more objective
 
      01:00:27.360  or so-called frequentist probabilities, it's not obviously the case that we should take one more
 
      01:00:33.360  seriously than the other or we should act only on the basis of one um sometimes subjective
 
      01:00:38.960  probabilities matter a great deal for instance we need to have some guess about how a global pandemic
 
      01:00:45.840  will pan out but we haven't experienced anything like that before so we don't have any kind of
 
      01:00:49.920  frequencies to base our guesses on, but you need to make a call either way, and you know you
 
      01:00:54.800  face lots of choices like that so it's not like a it's not obviously the case that we should ignore
 
      01:01:01.280  or take less seriously um probabilities which aren't strictly derived from really good evidence
 
      01:01:09.360  that's a great example by the way when you're done we should we should just zone in on the on
 
      01:01:14.560  the pandemic example if there's a lot going on there yeah so that'd be great right right
 
      01:01:19.680  so yeah let's just like spell out what we're disagreeing about which is um there are different
 
      01:01:25.600  views of what probabilities are, and some people think you should just treat all probabilities the
 
      01:01:30.400  same, and some people think there are different kinds. So on one hand there is a view of probability
 
      01:01:35.920  which is you have a number between zero and one and where it's appropriate to use it's telling you
 
      01:01:42.960  about how the world is in a certain way if i said that um this coin i have i'm going to flip it and
 
      01:01:50.400  the chance that it'll land heads is 0.5 you could translate what i'm saying there as being something
 
      01:01:56.240  like in the limiting case if i flipped it you know gazillion times 50 percent of those times
 
      01:02:02.000  it's gonna land heads right so it's the kind of fact about its propensity or the kind of frequency
 
      01:02:07.920  that it comes up either it has come up in the past or will come up in the future there are like
 
      01:02:12.480  different ways to get it out but in any case it's like a fact about the world and then you have
 
      01:02:17.280  another camp of interpretations about what probabilities say and the idea there is that
 
      01:02:24.640  probabilities don't immediately tell you about the world probabilities describe instead a fact about
 
      01:02:33.600  the mind of the person using them so that's something like a degree of belief or strength of
 
      01:02:40.960  belief so that's like a subjective interpretation of probability and when you combine it with ideas
 
      01:02:47.840  about how to update those probabilities on new evidence then you can call yourself a
 
      01:02:54.560  Bayesian, this is like a Bayesian interpretation of probability. So those are, like, kind
 
      01:02:59.600  of two rough camps. And, not to put too fine a point on it, but I think you hate Bayesians, from
 
      01:03:08.240  the impression I get. So, yeah, I don't feel qualified to say anything kind of technically
 
      01:03:17.840  accurate, but I also feel tempted to stick up for the Bayesian side of this argument, and maybe the
 
      01:03:24.800  first thing to say, and I'll shut up soon, but one thing to say: if you're a Bayesian you are allowed
 
      01:03:30.560  to have subjective probabilities about anything including those things which also obviously kind
 
      01:03:40.400  of fit the objective story so if i flip my coin a hundred times and it comes up heads 50 times
 
      01:03:47.600  i'm allowed to have a subjective credence that the next flip will come up heads with a 50% chance
 
      01:03:56.560  so in a sense Bayesian epistemology starts with all the obvious stuff that an objective
 
      01:04:05.040  interpretation of probability will be first to point out and extends it to places where
 
      01:04:11.920  someone who's more sympathetic to that objective interpretation will be scared to go it's worth
 
      01:04:17.680  thinking of it as a kind of yeah a way of extending notions of probability beyond the most obvious
 
      01:04:25.040  applications so there's a continuity there rather than a disjoint between
 
      01:04:30.480  you know, RCT probabilities and then, like, sci-fi probabilities. So yeah, that's the first point from me.
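As a rough illustration of that continuity claim, here is a minimal Python sketch with toy numbers (not anyone's actual argument): a subjective prior over a coin's bias, updated on flip data, converges toward the observed frequency; the contested move is applying the same machinery where no flips can ever be observed.

```python
# Toy Beta-Binomial updating: a credence that tracks frequencies when frequencies exist.
import random

random.seed(0)
alpha, beta = 1.0, 1.0          # a flat prior over the coin's bias
true_bias = 0.5

for _ in range(1000):
    heads = random.random() < true_bias
    alpha += heads               # count heads...
    beta += not heads            # ...and tails

posterior_mean = alpha / (alpha + beta)
print(posterior_mean)            # close to 0.5: the credence ends up matching the observed frequency
```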
 
      01:04:37.360  Excellent. So on the question of do I hate Bayesians, do I hate Bayes: no, I am a Bayesian. My

      01:04:43.920  entire research career is basically in Bayesian methods, and my sole publication

      01:04:50.080  output when it comes to machine learning is publishing Bayesian methods. And so no, I'm a hundred

      01:04:54.400  percent a Bayesian. I'm not a Bayesian epistemologist, though; that's where the big distinction lies,
 
      01:04:59.120  because I fully recognize the power of Bayesian methods in statistical reasoning. (Oh yeah, no one's

      01:05:05.440  denying Bayes' rule, right, just to get that out of the way.) Yeah, yeah, which I was accused of somehow

      01:05:11.040  on the EA forum. But it's also a statistical methodology which is incredibly powerful and
 
      01:05:19.040  makes use of subjective knowledge so i'm not even a pro i don't have a problem with
 
      01:05:25.040  infusing subjective priors into our data analysis that's also completely great
 
      01:05:31.360  it's just, what happens when you take out the data entirely? You're left with Bayesian epistemology,

      01:05:37.920  as contrasted against Bayesian statistics. So, when it comes to when you said it's not necessarily

      01:05:46.480  clear which one is better between subjective estimates of probability or objective
 
      01:05:52.000  estimates of probability um here's a tiny little thought experiment imagine you go to the doctor
 
      01:05:58.000  and the doctor tells you you have uh brain cancer and they say the probability of you living past
 
      01:06:05.920  your thirties is 90 percent then he says that comes from looking at a hundred thousand people who
 
      01:06:13.680  have brain cancers exactly like yours um you're gonna feel one way but imagine he says that's just
 
      01:06:19.360  what i believe that's just a belief in mind i haven't looked at the data i just this is a hunch
 
      01:06:23.520  it's a gut feeling that with your style of cancer yeah i believe you're gonna live past 30
 
      01:06:30.000  which one would you prefer i would argue you'd prefer the one that's based on data right um it's
 
      01:06:35.120  not necessarily about whether the data... I don't really understand what you're trying to get at there.
 
      01:06:38.400  uh my what i'm trying to get at is that um probabilities derived from data are superior to
 
      01:06:43.600  probabilities derived purely from belief of course they are but who who's saying that did you not
 
      01:06:49.120  say earlier we i think it was that it's not necessarily clear which one is better um is i think
 
      01:06:54.320  the phrase that you used uh when it comes to subjective probability or objective probability
 
      01:06:59.120  yeah and so so i think that's on me for not being clear enough about the two views we're comparing
 
      01:07:05.680  so i'll try again an objective interpretation of probability just takes those numbers between
 
      01:07:13.680  zero and one to be saying something about the world um an objective fact about frequencies or
 
      01:07:22.880  propensities right these subjective family of interpretations of probability take those numbers
 
      01:07:29.920  to be saying something about something inside my head right about a strength of a belief but
 
      01:07:38.240  clearly you're allowed to base your beliefs on evidence and in cases where you can you obviously
 
      01:07:47.040  should in fact you should you know use all the kind of available evidence and data that you can so
 
      01:07:55.600  it's a boring philosophical distinction which doesn't matter most of the time rather than a
 
      01:08:01.840  distinction about where you should draw your evidence from or what sources of data are best
 
      01:08:06.640  um something like that i would say it matters a huge amount when you're given a probability and
 
      01:08:11.840  you don't know from which philosophical school it was derived. So in Ord's book he gives
 
      01:08:18.880  probabilities associated with volcanic eruption probability associated with nukes uh probabilities
 
      01:08:23.040  associated with um asteroid collisions these are again like going to your doctor who's looked at
 
      01:08:29.520  100 000 different examples of asteroid collisions um and then he gives a probability associated with
 
      01:08:35.760  ai takeover um and now he switched right he switched to subjective belief probability uh there's no
 
      01:08:42.640  data you can't possibly calculate um the probability of ai takeover in this fashion um and that's where
 
      01:08:48.080  it matters it matters a lot because this distinction is um subsumed in the word uh subjective and so
 
      01:08:54.960  Ord is very up front, he's like, I'm using subjective estimates here, but I would claim that the
 
      01:09:00.160  readers don't realize what exactly is happening uh it's just what that means when you just add the
 
      01:09:05.360  adjective subjective in front of it uh things change entirely um and just think about what
 
      01:09:12.400  uh this would be like in the domain of health and medicine if doctors just switched in a cavalier
 
      01:09:18.480  way between talking about um incidents of breast cancer what derived from data and just beliefs
 
      01:09:25.200  and gut feelings also using the same word probability and then they just start ranking things based on
 
      01:09:30.320  this this it matters a lot it's not just a philosophical empty discussion because it is the fuel which
 
      01:09:36.080  allows people to um mix uh probabilities associated with data with probabilities associated with
 
      01:09:44.080  nothing just belief states yeah that makes sense so where there is a practical difference it's that
 
      01:09:48.720  fans of this objective class of interpretations of probability are going to balk at putting numbers
 
      01:09:58.880  on things where there isn't a strong enough body of evidence whereas fans of a more kind of
 
      01:10:06.000  Bayesian approach are going to just come up with a guess, drawing on similar examples from the
 
      01:10:12.480  past and mushing together uh more various sources and reasons um and you're saying that when that's
 
      01:10:23.440  not made clear that you're drawing your number from uh second rate sources that can be misleading
 
      01:10:32.480  I think it's unfair in the Toby Ord example. I think he's fairly upfront that these are,
 
      01:10:40.320  I mean, if you know what subjective means, right. And he also says, look, these could be an order of
 
      01:10:46.000  three wrong either way but it would be um unfair to the reader or maybe kind of like patronizing to
 
      01:10:54.560  the reader not to just like say my guesses about how likely these things are because i've just written
 
      01:11:00.960  an entire book and you're probably wondering what they are so you can you know you should be trusted
 
      01:11:05.440  to to like understand where where these things are coming from maybe people don't understand them but
 
      01:11:12.320  I would expect most people do. Just one tiny point, and then a super small
 
      01:11:20.480  point which is that there's a third option available right um and the third option is the one which
 
      01:11:24.880  Vaclav Smil took in his book Global Catastrophes and Trends, which is to just not use subjective
 
      01:11:32.720  probabilities uh at all um just rely on arguments in that case and use probabilities to summarize
 
      01:11:40.240  data but it's not a choice of either be honest about subjective probabilities or hide them the
 
      01:11:47.040  third option is: don't use them. Vaden, everything you just said there is really great, and I think I

      01:11:53.360  might actually be with you there more than with Fin. Well, we're looking for a third

      01:11:59.520  host. Right, yeah, then Hear This Idea has one host. Well, one thing I was gonna bring up, and I'm again, like, I
 
      01:12:07.280  think it maybe comes to like a broader point of like how you message things and how especially
 
      01:12:11.760  right, you communicate science and statistics with the general public. Fin, where you said,
 
      01:12:17.040  like, you shouldn't patronize the reader, I think probably in the Toby Ord book that
 
      01:12:21.520  does make sense but he does also like add the numbers together right to give like one really
 
      01:12:26.160  nice headline figure, which is like one in six, right, in the century, and that really literally just
 
      01:12:30.960  adds together the more objective probabilities, or like frequentist stuff, of asteroids with
 
      01:12:36.640  the subjective kind of AI stuff to give one really nice figure, which, you know, is the first thing
 
      01:12:40.320  that comes up. If you're not going to read the book, right, you're still going to see
 
      01:12:43.200  the one in six number. So that's where I think I really agree with Vaden's point, that I think
 
      01:12:47.440  that is actually something to consider more seriously especially when these things build up on each
 
      01:12:51.360  other right and you then cite this in your next literature review and so i can definitely see
 
      01:12:55.040  how that kind of becomes a bit of a Ponzi scheme, right. (Beautiful words.) It's all kind of building on to each other.
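To make Luca's point concrete, here is a rough sketch with placeholder numbers (not the book's actual per-risk estimates) of how per-risk figures of very different provenance get rolled into one headline number:

```python
# Illustrative only: the values are invented stand-ins, labelled by how such numbers
# tend to be derived, to show that the combined figure hides its mixed origins.

risks = {
    "asteroid (frequency-based, from impact records)": 1e-6,
    "supervolcano (partly frequency-based)":           1e-4,
    "engineered pandemic (largely subjective)":        3e-2,
    "unaligned AI (subjective credence)":              1e-1,
}

# Combined chance that at least one occurs, assuming independence:
p_none = 1.0
for p in risks.values():
    p_none *= (1.0 - p)
p_total = 1.0 - p_none

print(round(p_total, 3))  # one headline number pops out; the reader can't see which inputs had data behind them
```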
 
      01:12:59.040  Also important to note that neither Ezra Klein nor Sam Harris picked up on this,
 
      01:13:04.720  yeah subjective versus objective difference right he was on both of their podcasts and both of them
 
      01:13:08.640  were just using this number one out of six as if the the different sources being fed into this
 
      01:13:14.240  function that gave the output were the same. And so, you know, these are two very smart people who both
 
      01:13:19.040  didn't mention it and maybe they realized there was a difference but they didn't say it and so
 
      01:13:23.200  people who now haven't read the book who are listening to their podcasts which are in the millions
 
      01:13:27.760  presumably are walking around with this number, one out of six, in their head, which is just,
 
      01:13:31.680  like Toby Ord says, completely subjective, but we have read that. That's a really good point, I think,
 
      01:13:37.440  yeah that is actually a fair point maybe there's a worry about messaging the first thing to say
 
      01:13:41.280  there is that that is separable from the more fundamental question of whether subjective
 
      01:13:46.480  probabilities are ever reasonable to talk about and come up with in the first place
 
      01:13:51.280  um another point there is in that specific example maybe it's not such a bad thing that if
 
      01:13:58.800  people are going to take anything from that book then that kind of headline figure
 
      01:14:02.880  it gets people interested and maybe that's not such a bad thing and then they learn more if they
 
      01:14:08.640  are interested and find out that it's okay it's like less clear it could actually be more it could be
 
      01:14:13.520  less. But it's a hook. Choose your least favorite cause area and imagine they did it: if
 
      01:14:19.120  they did it honestly then i would appreciate their honest guess um if it's like i'm trying to sell
 
      01:14:24.400  this thing by coming up with the highest number that i can justify then that would be bad but it
 
      01:14:31.120  would be bad not because they are using a subjective uh interpretation of probability but because
 
      01:14:36.400  they're being dishonest which is just bad anyway it's just a super tiny point of clarification
 
      01:14:40.800  which I think is important, is that neither Ben nor myself are accusing Ord, or anyone, of being
 
      01:14:45.840  dishonest right now um it's just yeah yeah yeah yeah yeah it's just i i think there's um it is a
 
      01:14:52.720  Ponzi scheme thing; another philosopher referred to it as a scandal, and I think that is closer
 
      01:14:58.880  to the mark which is that everyone is using the word probability and not realizing that very
 
      01:15:03.360  different things are going on and so i think it's it's actually a public confusion which even people
 
      01:15:10.080  as brilliant as Ord and Sam Harris and Ezra Klein haven't recognized. So
 
      01:15:18.480  it's a giant mistake, not an intentional act of propaganda or what have
 
      01:15:26.160  you. So just to clarify that. Yeah, yeah. And it's a general science problem as well, right,
 
      01:15:31.840  i i can point to like other examples right where it just seems more more just bigger than that
 
      01:15:36.560  but one thing i wanted to ask vaden and ben then so um if you say that there's no there's this third
 
      01:15:44.960  way right of like not using um subjective probabilities at all um but you also earlier said right that
 
      01:15:51.120  you're like generally sympathetic to things like ai safety uh or you know extinction risks to to
 
      01:15:57.760  some degree, even if it's not these more sci-fi, S-risk kind of things. How do you justify that
 
      01:16:03.920  then is is this like a question of like going back to a theory or using general intuitions or so
 
      01:16:08.640  because you still have to make right like some kind of trade-off like what kind of alternate
 
      01:16:12.880  decision-making kind of process would you propose? Well, I'm actually not sympathetic to AI
 
      01:16:17.600  risk. Okay, right. But sorry, we can maybe argue about that later. I think it's actually
 
      01:16:24.800  useful to return to fin's thought experiment here about nuclear war so um the first thing i'll say
 
      01:16:29.840  is that i'm i'm actually not gonna say that people shouldn't use subjective probabilities like if
 
      01:16:35.920  they want to organize their own thoughts in specific ways and find it useful to generate arguments
 
      01:16:41.120  and sort out their own personal priorities by like assigning numbers to things i don't care
 
      01:16:45.200  and actually this is one of the biggest problems i have with like the less wrong style of Bayesianism
 
      01:16:50.160  is that it calls people irrational for not abiding by the subjective view of probability and like
 
      01:16:56.640  for me i i don't actually care how you come up with arguments how you start prioritizing things as
 
      01:17:01.600  long as you're open to like argumentation and criticism from other people so anyway i want to
 
      01:17:06.080  separate the the criticism of subjective probabilities and the comparison of subjective and objective
 
      01:17:11.680  probabilities to just like using them so if you want to use them to convince yourself of things or
 
      01:17:15.600  or argue that's fine um but we just have to recognize where they're coming from and like not
 
      01:17:19.920  compare them willy nilly with objective probabilities um but in the in the particular case of like
 
      01:17:24.800  nuclear war um so i think one thing you're smuggling in there is like why do you need beliefs that obey
 
      01:17:31.920  the rules of probability? Like, what's the axiom of human rationality that says we must have a belief
 
      01:17:38.240  about the probability of nuclear war in the future and that thing has to be a number between zero and
 
      01:17:43.840  one right so this is like a huge assumption being made in this world of Bayesian epistemology
 
      01:17:49.120  that is like nowhere argued for um but it's just like assumed like we you know we must um have
 
      01:17:54.400  beliefs that conform mathematically in these specific ways yeah i mean one point there is
 
      01:17:59.120  it's getting a little into the, like, prudential self-interest case: you can just show that if you
 
      01:18:06.080  violate like the rules of updating on probability according to Bayesianism then you just end up
 
      01:18:11.760  being Dutch booked. Yeah, you get kind of money pumped, whatever, you lose out over the
 
      01:18:15.120  long run yeah in a casino if you're a casino and you um uh don't uh use the rules of probability
 
      01:18:24.080  when designing your games, then you're going to consistently lose, you're going to lose
 
      01:18:28.560  money, right? So the amount that roulette pays out for certain spins of the wheel,
 
      01:18:37.680  that's not arbitrary and if you deviate from say half on red half on black and it's like three
 
      01:18:45.600  quarter uh payout three fourths on red and one quarter on black then you're going to lose money
 
      01:18:52.400  but um so then you could say that as a casino owner your subjective probability needs to match
 
      01:18:59.040  what the objective case is otherwise you're going to make decisions for your casino which will
 
      01:19:04.800  make it go under um but this is a highly contrived example and it's one that works only because we
 
      01:19:12.080  know how to set up um chaotic systems which actually produce objective randomness in the world um so
 
      01:19:21.760  in situations where we know there's a source of true randomness such as pachinko machines or
 
      01:19:29.760  roulette um in those limited cases it makes sense to align your subjective credence to the objective
 
      01:19:36.480  probability but this is the rare case um this is not the kind of reasoning that we can then
 
      01:19:43.440  adopt when reasoning about the future we actually know using explanatory theories
 
      01:19:50.720  what physical systems produce actual randomness and this is what computer programmers have to think
 
      01:19:55.920  about when they use random number generators because it's actually really hard to produce random
 
      01:19:59.600  number generators that actually produce statistically random events. So we know where
 
      01:20:04.800  randomness comes from in nature and these are the only places where you can talk sensibly about
 
      01:20:09.600  mapping subjective credences on to the objective world but these are chaotic and complex systems
 
      01:20:16.400  as well as, like, Brownian motion and stuff. But again, these are limited cases.
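As a toy version of the casino point, here is a short simulation (hypothetical stakes and spin counts): in a game with genuine, countable randomness, payouts that imply the wrong probabilities lose money over the long run.

```python
# Illustrative simulation of a casino whose payouts deviate from the wheel's objective frequencies.
import random

random.seed(1)
P_RED = 18 / 37          # European wheel: 18 red pockets out of 37

def casino_profit(payout_multiplier: float, spins: int = 100_000, stake: float = 1.0) -> float:
    """Casino's total profit when a bettor stakes 1 unit on red every spin."""
    profit = 0.0
    for _ in range(spins):
        if random.random() < P_RED:
            profit -= stake * payout_multiplier   # casino pays out when red comes up
        else:
            profit += stake                       # casino keeps the stake otherwise
    return profit

print(casino_profit(payout_multiplier=1.0))   # roughly fair odds: a small positive house edge
print(casino_profit(payout_multiplier=3.0))   # pricing red as if it were a 1-in-4 event: heavy losses
```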
 
      01:20:23.680  Maybe a point to raise here is the track record of betting markets and prediction platforms, where you are betting on
 
      01:20:34.400  all sorts of things which do not meet the Vaden criteria for randomness, like political
 
      01:20:39.040  elections, and how the pandemic is going to turn out, and the most, like, Vaden-
 
      01:20:44.960  triggering of them all are, like: will, uh... (Triggering, I love it.) Yeah, will the Riemann conjecture be
 
      01:20:51.840  proved either way like in the next ten years like this is not random um but people place numerical
 
      01:21:01.280  probabilities on them um what's interesting is that um once you make enough of these guesses you
 
      01:21:07.680  can see if you're well calibrated or not so where you've made a guess that something will turn out
 
      01:21:14.720  to happen with like a 20% probability did it end up happening 20% of the time once you've made
 
      01:21:21.200  enough of those bets um and once you rack up that kind of track record there it feels like
 
      01:21:27.840  you can be said to be either right or wrong well calibrated or not with respect to your guesses
 
      01:21:35.840  even though those guesses were about like all kinds of things which aren't anything like
 
      01:21:43.600  dice rolls or casino spins. Tiny thing: listeners can go on the Wikipedia page for randomness, and
 
      01:21:51.840  then there's like a subsection sources of randomness in nature and that's the criteria and that i'm
 
      01:21:56.000  using. It's not Vaden's criteria, it's where science, like, actually knows where randomness
 
      01:22:02.800  occurs. On the superforecaster thing, we talked about this a bunch with Mauricio, and
 
      01:22:10.560  one tiny point to make is that a well calibrated forecaster is allowed to say 0.5 when
 
      01:22:16.960  they don't know stuff, so you could have a forecaster... (But of course.) But when you say I have a
 
      01:22:23.200  subjective credence of 0.5, you are not saying that if we ran the world a thousand times
 
      01:22:29.520  i'm pretty confident it's going to happen 500 times you're saying i have no idea how many times
 
      01:22:34.080  it's going to happen, but my degree of belief that it will happen is about 0.5. No, it's...
 
      01:22:40.480  so my only point is that, quote, well calibrated forecasters, it's not like a fount of wisdom
 
      01:22:45.520  about the future. It's just, people... like, if you split the world into that which I'm super

      01:22:52.000  certain about, say that Joe Biden will still be president in two years (not four years), and

      01:22:58.000  things that I'm completely uncertain about, that I just don't know, like what is going to happen

      01:23:03.360  100 years from now, and you just ask me questions about that which I'm certain about or

      01:23:06.640  that which I'm uncertain about and know nothing about, then I could be a perfectly

      01:23:10.000  well calibrated forecaster, because we're not getting anything for free, and that's a
 
      01:23:15.040  crucial point. So yeah, it's GIGO, right, garbage in garbage out: you cannot
 
      01:23:21.120  learn anything which you didn't already know just by putting numbers on stuff.
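Here is a small sketch of that calibration point, with purely illustrative data: a forecaster who says 0.99 on questions they actually know and 0.5 on unknowable ones comes out well calibrated in both buckets, while the 0.5 answers convey nothing.

```python
# Toy calibration check; the "questions" and outcomes are simulated, not real forecasts.
import random

random.seed(2)

forecasts = []
# 100 easy questions the forecaster really knows:
for _ in range(100):
    outcome = random.random() < 0.99
    forecasts.append((0.99, outcome))
# 100 unknowable questions where they shrug and say 0.5:
for _ in range(100):
    outcome = random.random() < 0.5
    forecasts.append((0.5, outcome))

def calibration(forecasts, bucket):
    """Observed frequency of 'yes' among questions assigned probability `bucket`."""
    hits = [outcome for p, outcome in forecasts if p == bucket]
    return sum(hits) / len(hits)

print(calibration(forecasts, 0.99))  # ~0.99: well calibrated
print(calibration(forecasts, 0.5))   # ~0.5: also well calibrated, and completely uninformative
```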
 
      01:23:26.720  Great, oh that's good to hear, yeah, so glad to hear you say it. Yeah, yeah, exactly. Yeah. But there's something, there's
 
      01:23:32.640  something on the kind of grammatical surface of saying something is this value in expectation or
 
      01:23:38.960  like the probability is such and such where what you mean is my kind of degree of belief that it'll
 
      01:23:44.880  happen is this and that's true as far as it goes like it can be misleading maybe if you don't know
 
      01:23:49.680  all these people are talking about that they actually know something about how many times these
 
      01:23:54.160  things would turn out if you like ran the world a thousand times but that's only like a surface
 
      01:24:01.120  level mistake and no one's claiming to know anything more by putting these numbers and stuff
 
      01:24:08.960  than they knew before right well the surface level mistake goes pretty deep because people
 
      01:24:13.920  argue in favor of subjective views of probability by counting the frequency of super forecasters
 
      01:24:19.040  right and the frequency of their uh accurate or inaccurate predictions um so there's not a clear
 
      01:24:25.520  delineation of frequentist methods from subjective methods, it all gets conflated, and out the other
 
      01:24:30.320  end come just numbers which people don't know how they were derived. And so we should
 
      01:24:37.360  maybe get back to the long-termist subject but this like it's just it's an interesting irony that
 
      01:24:42.880  um the one of the best arguments for Bayesian credences is through frequencies I mean to tie that
 
      01:24:48.880  into the long-termism thing is like one of the biggest I think possibly the only argument for AGI
 
      01:24:54.320  being a big danger is just citing the credences quote unquote of experts right and saying like
 
      01:25:01.040  they believe that within 50 years there's going to be AGI that does so and so um and we need to
 
      01:25:07.200  take this shit seriously because look at the look at these numbers my god right so there's no like
 
      01:25:11.680  one layer deeper of like let's look at the arguments um and I just yeah I saw I have a thought
 
      01:25:15.600  experiment for you both like say you have two forecasters um for the audience forecasters just
 
      01:25:21.680  someone who's like predicting things about the future uh and one of them has a track record of
 
      01:25:26.880  like 90 percent correct, right, this guy is a fucking superforecaster, or she is a superforecaster like none
 
      01:25:33.440  other right and one of them sucks like point two like worse than chance it's just terrible um and
 
      01:25:39.280  they're both telling you their guess at like whether AGI is going to take over the world in the next
 
      01:25:44.320  hundred years um would you actually believe would you give more weight to the opinion of the super
 
      01:25:51.920  forecaster with like the not with their track record um or would you actually just forget the
 
      01:25:57.840  numbers and examine their arguments I mean this is like probably the silliest answer but like if
 
      01:26:03.040  the uh the second forecaster is so bad right that it like negatively correlates in some way
 
      01:26:08.400  then that could be like do the opposite of whatever they say yeah do the opposite yeah yeah I feel
 
      01:26:14.880  like you mentioned this on your podcast, but I remember you raised a really great point,
 
      01:26:18.240  which doesn't, as far as I can tell, undermine anything deep about, like, Bayesian epistemology,
 
      01:26:24.560  but it's a mistake people make which is if you just like hand out a survey to a hundred AI
 
      01:26:29.440  experts asking these, like, unknowable questions about when general artificial intelligence will
 
      01:26:35.200  arrive and like everyone who gets a survey they're like I don't have a fucking idea but I have put
 
      01:26:41.920  a number, because, you know, I'm being asked for a date or whatever, it's a survey, I'll just put, like,
 
      01:26:46.960  something off the top of my head. And then those answers come back and you see that one person thinks that AGI will
 
      01:26:55.760  arrive by 2050 and the next person thinks it'll arrive by 2045 and you think hey there's these
 
      01:27:01.680  all clustering around the same point now that a hundred people have said it'll arrive by this
 
      01:27:06.000  point we should be way more confident that it will then compared to just asking one person
 
      01:27:10.080  um because we can aggregate their beliefs and come up with this like even stronger
 
      01:27:14.640  more confident belief. (Look at the posterior!) Right, exactly. And that's often inappropriate, and
 
      01:27:25.120  it's going to give you a like way false sense of confidence where just everyone is um equally
 
      01:27:33.040  clueless and you can't turn collective cluelessness into a kind of combined confidence
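A toy illustration of that collective cluelessness point (invented numbers, not real survey data): average a hundred respondents who are each just guessing uniformly, and the aggregate looks spuriously precise.

```python
# Simulated "survey": everyone picks an AGI arrival year uniformly at random between 2030 and 2130.
import random
import statistics

random.seed(3)
guesses = [random.uniform(2030, 2130) for _ in range(100)]

mean = statistics.mean(guesses)
standard_error = statistics.stdev(guesses) / len(guesses) ** 0.5

print(round(mean, 1))            # ~2080, a confident-looking central estimate
print(round(standard_error, 1))  # ~3 years: tight, despite every individual being clueless
```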
 
      01:27:38.240  Killer. Literally, I think you've just undermined one paper that Bostrom and

      01:27:44.720  Ord co-wrote, because I think Nick Bostrom literally did that and published a paper on it with

      01:27:51.920  Toby Ord. And so I'm glad to hear you say that, because it is a ridiculous methodology, but
 
      01:27:57.280  it is one which is published by Ord and Bostrom. Yeah, yeah. I would say it depends on
 
      01:28:03.760  how independent you think those guesses are, right? So if you think they're all drawing on, like,
 
      01:28:07.600  totally independent reasons and sources of evidence, then you should think that combined they're something
 
      01:28:14.160  stronger. And maybe... I don't really know which paper you're talking about. But yeah, no, maybe
 
      01:28:18.240  that that is a mistake you can make and like a slightly more mundane example is um you get
 
      01:28:23.360  some howlers, especially in, like, pop science writing. (Is 'howlers' an English word, or what is

      01:28:28.640  it? Like an egregious error?) Yeah, yeah, some egregious errors, I see. Okay, so, like,

      01:28:35.200  I've never said it as well. So people will come up with, like, a pop-sci claim, like,
 
      01:28:43.120  eating ginger like makes you happier whatever and they'll look at the literature and they'll see
 
      01:28:50.880  ten published papers and they all have positive results and their p-values are all obviously
 
      01:29:00.240  nice and low, so you think, my god, ten papers, you know, that means the chance that eating
 
      01:29:06.960  ginger doesn't make you happy is just like rock bottom right because how could they all come to
 
      01:29:10.800  that conclusion. And that's the case, I think, where a Bayesian-style approach
 
      01:29:16.480  starts to look quite attractive because you can say things like how likely do i think this is
 
      01:29:22.800  before I looked at these papers? How many papers are going to find a null result and get rejected?
 
      01:29:29.120  and you start thinking about things like publication bias um which don't show up in just the p-values
 
      01:29:35.600  you get at the end of that process and that's a case where that's pretty sensible i'm not
 
      01:29:41.440  suggesting you think it's not but it is a nice example of a kind of subjective style approach
 
      01:29:47.440  working quite well no i just yeah to dive into that example i mean what does the end process of
 
      01:29:51.760  that reasoning look like so for me i'd say um while there's all these confirmatory papers it doesn't
 
      01:29:57.920  you know there's the file drawer effect the well-known file drawer effect in social sciences we don't
 
      01:30:01.760  know how many papers that you know tried to run the study and found negative results um should my
 
      01:30:06.640  conclusion here to be to like put a number on my prior belief update it with all the confirmations
 
      01:30:12.240  then make a guess at how many disconfirmatory cases are hiding in someone's file drawer, and then
 
      01:30:16.880  have like a number 0.16 at the end of the day or is it just to be like pretty skeptical of the claim
 
      01:30:22.640  and you know why do i have to put a number on it so there's this like extra step i find that's
 
      01:30:27.920  often taken in like the Bayesian world where yes you're right like the Bayesian style reasoning in
 
      01:30:33.600  terms of like um updating quote unquote whatever language you want to use with new evidence coming
 
      01:30:38.640  and of course we're in favor of that that's what criticism and evidence are for right to help us
 
      01:30:42.400  like uh shape how we think about the world's and slowly converge to truth but this extra step of
 
      01:30:47.440  going from uh well you know i think something's likely or unlikely or like i'm pretty skeptical
 
      01:30:52.320  of it or like you know i need more arguments before i make up my mind to like well i have this
 
      01:30:57.200  probability that's you know 0.33 um that's the extra step where i just like it seems so silly to
 
      01:31:03.440  you know try and get that um that and then but when it happens it's like once we've made that step
 
      01:31:08.720  then we have a number um and numbers are great because we can pair that number with other numbers
 
      01:31:13.040  um and then we get into this dangerous territory of like we started out our reasoning process saying
 
      01:31:18.400  yes of course all this is highly subjective da da da i'm just gonna put a number on it but
 
      01:31:22.720  then we forget we then we a year later we find this like paper and we have this number and we're
 
      01:31:27.040  like holy shit 33% that ginger increases life expectancy or whatever um you know i mean deworming
 
      01:31:33.680  only has like a 10% chance of working so we better just give everyone ginger you know and just forget
 
      01:31:38.960  that um forget that this number was like you know kind of conjured it's just the sources are not
 
      01:31:43.840  the same. And so, anyway... But there's nothing wrong with the underlying philosophical view that
 
      01:31:51.840  motivates putting numbers on things you're uncertain about that's a problem with people
 
      01:31:56.160  misunderstanding it which is separate right like yeah no i think i mean i agree like i said earlier
 
      01:32:01.040  I don't have a problem... if someone really wants to go through the trouble of putting a number on every
 
      01:32:04.800  one of their beliefs then that's fine i'm just as soon as we start comparing those numbers with like
 
      01:32:09.600  other numbers derived from like more um from better sources of information that's what i have a
 
      01:32:14.880  problem with and i think that that kind of really i think it's rare i guess in practice that the
 
      01:32:19.840  Bayesian withholds comparing completely subjective beliefs with like other kinds of numbers yeah i
 
      01:32:27.280  mean the problem can't be comparing slightly uncertain things with fully certain things like
 
      01:32:31.200  maybe you think one pill improves your sleep quality by like two percent and we just know that
 
      01:32:38.320  absolutely for sure for everyone and then another pill some people it's been like three percent and
 
      01:32:42.800  some people it's been one percent um you can compare those things or maybe you're like not sure how
 
      01:32:47.360  effective the third pill is because you haven't run enough studies and and it's appropriate to
 
      01:32:52.560  come up with like guesses there and compare them and then you could just draw this continuum between
 
      01:32:57.680  those cases where you have a little bit of uncertainty and you're kind of okay with that to cases where
 
      01:33:02.800  there's a lot more uncertainty, and it's not clear that there is a kind of bright line where you just
 
      01:33:09.440  can't go any further. I think the bright line is just: probabilities derived from data should be
 
      01:33:15.040  compared with probabilities derived from data and probabilities not derived from data should not
 
      01:33:19.760  be compared with probabilities not derived from data. Just, the data is the bright line. Notice I say
 
      01:33:24.480  data, I don't say Bayesian versus frequentist interpretation, because you have all sorts of excellent
 
      01:33:29.280  Bayesian statistics which is about doing exactly what you are highlighting but what is data right
 
      01:33:35.440  measurements of the world um so but what's a measurement of the world this is the the
 
      01:33:40.080  philosopher in Fin coming out: like, every belief I have comes from some kind of observation, right?
 
      01:33:45.600  counting bed nets what is data we may be getting into semantics but if you could put it in the
 
      01:33:52.000  Excel table, that would be what I'm referring to as data. Yeah, like, if you could perform statistical
 
      01:33:56.080  analysis on it, that's what I mean. I'm not using an exotic word here, I'm just being
 
      01:34:01.520  yeah counting uh hospitalizations due to covid counting um malaria bed net distribution and
 
      01:34:08.240  contrasting that against reports of malaria. This is what GiveWell is so great at
 
      01:34:15.360  doing. So whatever GiveWell does is what I'm talking about when I say data. I'm not using anything
 
      01:34:20.160  exotic uh one thing i'm like a little bit concerned about and just like a genuine question i haven't
 
      01:34:26.160  really thought much about this, is that I definitely see what you're saying, Vaden, that there's lots of
 
      01:34:31.120  scenarios where this conflation is really problematic and can lead to really bad outcomes but i can
 
      01:34:36.400  just also see a lot of scenarios where all you have right is one subjective probability for one thing
 
      01:34:42.080  and a more like evidence-based probability for another thing and you still have to make some sort
 
      01:34:46.480  of decision. Like, I'm kind of struggling a bit in making this more actionable,
 
      01:34:53.600  I guess. And, like, how do you just end up making decisions then, right, like,
 
      01:34:58.640  in the real world based on theories and criticism so it's not the case that all you have is a subjective
 
      01:35:04.880  probability estimate you also have explanations about how the world works arguments that um are
 
      01:35:11.360  trying to point out flaws in those explanations and you make decisions in the real world by um
 
      01:35:17.600  the method of conjecture and refutation you you take a guess about what decision is going to
 
      01:35:23.840  best um lead to positive outcomes uh if you don't have data then all you can do is
 
      01:35:28.800  try to get your friends and peers and colleagues to criticize it. And you can read someone called
 
      01:35:33.920  Karl Popper, I think you'd really enjoy it. Oh, what can I say, I think I'm fairly repetitive on this
 
      01:35:42.080  point by now, but yeah, I just have to keep repeating myself because the same questions keep recurring.
 
      01:35:46.800  and so um yeah no that's that and i'm definitely not saying that's like the wrong approach to take i
 
      01:35:53.040  think it was more just like a thing for me and like kind of clearing it up just because it is right
 
      01:35:56.880  the fact right that you often just have like very mixed things to go off from some of which are
 
      01:36:01.840  gonna be subjective some of which are gonna be like more evidence-based and you just need to
 
      01:36:06.240  use them all right to reach a decision and you're not saying that um you know you just ignore all
 
      01:36:11.680  the subjective stuff and you only rely on the evidence if i understand you're right what you're
 
      01:36:14.480  saying is you just take that all together and then you make a decision you don't like have to
 
      01:36:19.760  you know draw up a table and work out all your like expected value thing is is that what you're
 
      01:36:26.160  saying yeah it's um i don't tend to think of the primitive building blocks as being evidence
 
      01:36:32.240  and subjective beliefs i think of it as being theories and explanations and criticism um and so
 
      01:36:36.960  these are the things which are uh that unify both the data case and the not data case all we have
 
      01:36:42.240  are theories and criticism um sometimes a criticism can take the form of data not all the time um and
 
      01:36:49.520  when it doesn't then you have to rely on other forms of criticism but notice that nothing that
 
      01:36:54.960  I've said is subjective. It is an objective property of what Popper would call the third
 
      01:37:02.160  world; it's an output of human cognition. Einstein's theory of relativity is
 
      01:37:09.360  objective um he could change his mind on it he could believe something different and it wouldn't
 
      01:37:15.440  matter because the theory would stand on its own um it doesn't matter what he believes he could
 
      01:37:19.680  believe tomorrow that is all wrong if he was still alive but it wouldn't matter because it was written
 
      01:37:24.240  down and now we can think about it on its own terms right um i i might be completely off the
 
      01:37:29.520  mark here uh so they can definitely tell me if i'm wrong here but like i definitely
 
      01:37:33.360  agree with the sentiment that theory is really important and it's something that is really being
 
      01:37:39.040  neglected so like the thing i'm kind of thinking about is with like advances in machine learning
 
      01:37:43.760  and stuff where you can get decisions right but they're incredibly opaque and you have no theory
 
      01:37:47.680  behind them whatsoever and you're just relying on this like very opaque kind of decision
 
      01:37:53.680  making thing for you that you have no control over like a lot of theory gets lost and when that gets
 
      01:37:57.040  applied to certain criteria and then you just kind of agree with whatever accuracy right you know
 
      01:38:02.240  you have and then you take that as your probability of if it's right or wrong like just that track
 
      01:38:07.360  record that can be really problematic if you don't actually understand the theory underlying it and
 
      01:38:12.240  um you shouldn't be misplacing trust in there and that i definitely see and uh i definitely agree
 
      01:38:18.960  with yeah so like um an example i've used in the past to kind of emphasize this point it um
 
      01:38:24.800  is let's say we invented like deep learning before we invented meteorology um so you can imagine
 
      01:38:31.200  training uh deep models to predict if it's going to rain or snow or uh hail tomorrow um and we could
 
      01:38:39.840  predict it with a high degree of accuracy and then when people say well what's actually happening in
 
      01:38:43.840  the real world like what's actually going on uh then all the scientists just throw up their hands and
 
      01:38:48.240  basically say well it's not up to us to make any decisions or have any theory about what's
 
      01:38:53.360  actually going on it's you know it could be Zeus it could be uh solar flares it could be Apollo
 
      01:38:59.920  all we care about is the prediction um and there's a great example along these lines where we imagine we
 
      01:39:08.000  land ourselves with some oracle which can tell us whether our predictions are true or false with
 
      01:39:13.360  perfect accuracy um that would be surprisingly not very useful if we're trying to for instance like
 
      01:39:19.040  build a spaceship or something because where do you start you need ideas and that you can't just
 
      01:39:23.440  get ideas from things you already know and asking how likely they are exactly um exactly i think
 
      01:39:30.800  one thing we might be getting confused about is this kind of broader weaker view which is a
 
      01:39:39.520  subjective interpretation of probability you know roughly the view that some probabilities
 
      01:39:45.360  uh make claims about mental states rather than the world and then a like slightly more specific
 
      01:39:53.040  kind of narrower uh view about science which is like a kind of
 
      01:39:57.840  Bayesian philosophy of science and yeah again i think you can pull those things apart and you
 
      01:40:05.040  can be like really into the Popperian um view of science while also thinking that it's appropriate
 
      01:40:13.600  to use um subjective estimates of probability so for what it's worth like i think my own view is
 
      01:40:22.880  something like that that kind of this like very broad Bayesian epistemology is like you know pretty
 
      01:40:30.640  plausible um so i think most things just end up untouched um maybe that's kind of unfair but
 
      01:40:38.240  that's kind of where my head is okay so it feels worth moving on um and bringing this back to
 
      01:40:45.360  long-termism so for context you Vaden wrote a hit piece against long-termism
 
      01:40:56.720  one of the things the points you made was that long-termists rely on taking an expectation
 
      01:41:02.400  over the very long-run future they then point out how kind of enormous it is and then they justify
 
      01:41:07.440  things on that basis um your point is that uh it's not that the far future is big in
 
      01:41:14.880  expectation it's actually undefined in expectation because it's not appropriate to take an expectation
 
      01:41:19.280  over that kind of thing and the um example or reason you give is to imagine uh an infinite set
 
      01:41:29.120  of alternating like black and white balls or something uh in a big infinite urn and um
 
      01:41:38.400  this is like a kind of neat and surprising fact about infinite sets uh where you don't have
 
      01:41:45.520  a measure over them you can ask a question like i'm gonna pick out a ball at random what's the
 
      01:41:50.160  probability that it's white and the naive answer is 50 percent and the true answer is correct me if
 
      01:41:58.160  i'm wrong actually undefined without a measure and a measure is well it's kind of just a way of like
 
      01:42:06.240  getting legitimate probabilities uh when you start asking questions about infinite sets
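To make the urn point concrete, here is a minimal Python sketch (the enumerations below are invented for illustration, not taken from the conversation): the same infinite collection of alternately coloured balls gives different limiting frequencies of "white" depending on how you enumerate it, which is the sense in which the probability is undefined until a measure is chosen.

```python
from itertools import count, islice

def is_white(n: int) -> bool:
    # Balls are numbered 1, 2, 3, ...; odd-numbered balls are white.
    return n % 2 == 1

def white_fraction(ordering, k: int = 100_000) -> float:
    # Relative frequency of white among the first k balls of a given enumeration.
    return sum(is_white(n) for n in islice(ordering, k)) / k

# Enumeration A: 1, 2, 3, 4, ... -> white, black, white, black, ...
natural_order = count(1)

# Enumeration B: two whites, then one black (1, 3, 2, 5, 7, 4, ...)
def two_whites_one_black():
    odds, evens = count(1, 2), count(2, 2)
    while True:
        yield next(odds)
        yield next(odds)
        yield next(evens)

print(white_fraction(natural_order))          # ~0.5
print(white_fraction(two_whites_one_black())) # ~0.667
# Same infinite set of balls, different enumeration, different answer:
# without a privileged measure, "probability of white" has no unique value.
```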
 
      01:42:11.680  um and that is cool and true as far as it goes my point is that um if that argument applies to
 
      01:42:22.720  long futures finite but very long futures then it presumably also applies to short futures but we
 
      01:42:28.960  can reason about short futures so it doesn't apply um to long futures and just to draw that out um
 
      01:42:38.720  when long termists make their argument in terms of taking an expectation over the very long run
 
      01:42:45.120  future they are not normally interested in infinite value and certainly they don't rely on it
 
      01:42:51.760  they're interested in very long finite time scales and very large but finite amounts of value
 
      01:42:59.040  so yeah the range of possible futures is going to be very big but it's not qualitatively different
 
      01:43:04.640  to the range of futures for the next year or the next decade or the next century
 
      01:43:08.160  so if you want to say that in order to take an expectation over the next decade you need to
 
      01:43:12.080  consider an infinite set of outcomes then you know sure you can't take an expectation over what's
 
      01:43:17.360  going to happen in the next decade but if you don't need to worry about infinite sets in the
 
      01:43:21.600  decade case then the question is you know at what point do they enter in is it at centuries or millennia
 
      01:43:27.120  and if they don't enter in at all then the point doesn't go through and if they do enter in at
 
      01:43:33.680  all then the point proves too much and it just says that we can't reason about the future at all
 
      01:43:37.600  so that's roughly the kind of response I was going for and hopefully that makes sense
 
      01:43:43.120  totally great um so just to repeat back the concern to make sure that we all are on the same page
 
      01:43:51.360  you grant the infinite set case but the worry is that well tomorrow there's an infinite set of
 
      01:43:59.840  things that could happen as well and we obviously reason about tomorrow probabilistically uh so
 
      01:44:07.360  why can't we reason about tomorrow one billion years from now probabilistically um because it's
 
      01:44:13.360  a continuum it's not like a discrete change and so uh what what gives is that fair yeah sounds good
 
      01:44:20.640  yeah nice um yeah so this is perfect so we were talking earlier about um objective views
 
      01:44:27.200  probability subjective views uh but there's a third one which is often neglected from the
 
      01:44:32.880  conversation um like people in the future uh and it's the instrumental view of probability and that's
 
      01:44:39.600  the one which I claim is the only real way to think about this uh without paradox um because
 
      01:44:46.080  of course you can assume a measure onto an infinite set of course you can do that you can
 
      01:44:50.720  assume that of the infinite um different things that could happen uh ai takeover is going to have a
 
      01:44:59.440  probability of 0.3 and everything else is going to be um constant compared to to that so people
 
      01:45:06.320  assume infinite uh assume measures over infinite sets however they like whenever they want to um
 
      01:45:12.160  that's fine uh the instrumental view says um there is no intrinsic probability associated with
 
      01:45:18.720  the future um and there's no intrinsic subjective probability estimate either it's just uh the only
 
      01:45:25.520  question is what is useful in service of accomplishing some goals um so it's a useful assumption to make
 
      01:45:31.440  in order to accomplish some task at hand um that is what Bayesian statistics is all about that's
 
      01:45:37.920  what statistics just in general is all about and what modeling is is all about is for tomorrow I have
 
      01:45:44.880  a dinner party and I'm going to put a probability distribution over the people who are attending
 
      01:45:51.280  because that is useful for me to get an estimate over the um attendees I can do that I can also put
 
      01:45:58.240  a probability distribution over what's going to happen a billion years from now I can do that too
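As a rough sketch of the instrumental framing being described here (the dinner-party figures are invented): a prior is just a modelling assumption you adopt, and where data exists it is the data that adjudicates between competing assumptions; with no data, each camp simply gets its own assumption back.

```python
# Two competing prior assumptions about the chance each invited guest shows up.
priors = {"optimist": (8, 2),   # Beta(8, 2): expects roughly 80% attendance
          "pessimist": (2, 8)}  # Beta(2, 8): expects roughly 20% attendance

def posterior_mean(a: float, b: float, shows: int, invites: int) -> float:
    # Conjugate Beta-Binomial update: posterior is Beta(a + shows, b + misses).
    return (a + shows) / (a + b + invites)

# Near-term question with data: of the last 200 invitations, 130 people came.
for name, (a, b) in priors.items():
    print(name, round(posterior_mean(a, b, shows=130, invites=200), 2))
# Both assumptions get pulled toward the observed rate of 0.65 (0.66 vs 0.63):
# the data does the adjudicating.

# "A billion years from now" question: no data at all, so each camp just
# recovers its own prior (0.8 vs 0.2) and nothing adjudicates between them.
for name, (a, b) in priors.items():
    print(name, round(posterior_mean(a, b, shows=0, invites=0), 2))
```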
 
      01:46:02.640  these are just assumptions we make and then the only question is whose assumptions are better
 
      01:46:07.280  and what is the best set of assumptions we can make in order to accomplish certain goals right
 
      01:46:12.880  and so we use data to do that in the short and immediate term and the thing that's harder and
 
      01:46:17.760  harder is adjudicating between whose sets of assumptions are better because we have less and
 
      01:46:22.400  less information to use uh for adjudication and so that is why there's no proving too much
 
      01:46:29.680  here the only thing that it kills is this Bayesian absolutism which says we have to assign
 
      01:46:34.320  credences to our beliefs that's the only attack um that it it makes but there's no paradox between
 
      01:46:40.880  an infinite set of things happening in the future and an infinite set of things happening when I
 
      01:46:44.400  flip the coin like it could be the case that somebody detonates a bomb in between the coin flip and
 
      01:46:51.120  therefore the probability of it landing heads is also undefined that's that's fine I could make
 
      01:46:56.160  that assumption it's just that it's not a very reasonable one and it's not a very useful one
 
      01:47:00.160  because it doesn't help me to accomplish my goal at hand yeah I'm like sympathetic to what you're
 
      01:47:04.960  saying about when it is and isn't appropriate to take expectations over super uncertain and
 
      01:47:10.880  super far out events what I am not so sure about is that the particular argument you make in terms
 
      01:47:18.240  of infinite sets is very relevant at all yeah it would be surprising if like a kind of neat
 
      01:47:27.120  observation about measure theory could tell us something significant about these kind of things
 
      01:47:33.520  um and then on the point about like instrumentalism yeah I mean that sounds interesting but
 
      01:47:39.760  implausible one way to push back is you said look the question is um whose assumptions are better
 
      01:47:45.600  right uh or like whose estimates are better and by better do you mean whose assumptions are just
 
      01:47:51.760  like more true or more accurate no I mean more useful yeah right so it's between either more true
 
      01:47:56.960  or more accurate or more useful if you're going in for more more useful you know I'm not saying
 
      01:48:02.160  anything new here right but the kind of canonical pushback to any kind of like instrumentalist
 
      01:48:06.160  view is that it seems there are cases where um a guess might not be useful but might be more true
 
      01:48:11.600  or the most useful guess might be less true um and it feels a little you know kind of postmodern or
 
      01:48:19.120  something oh no definitely not excellent yeah that was like deliberately triggering right but yeah yeah
 
      01:48:24.000  yeah I do yeah yeah so the common concern against instrumentalism is that it just leads to
 
      01:48:30.400  relativism that it's my usefulness versus your usefulness um and that is very much the reason why
 
      01:48:35.760  I'm not instrumentalist in the domain of epistemology knowledge is not just about usefulness knowledge
 
      01:48:40.160  is about what's actually true probability is not the same as knowledge probability is mainly about
 
      01:48:46.720  usefulness except in very certain circumstances which I can enumerate um where we have physical
 
      01:48:54.480  theories that tell us we expect randomness to occur in nature so casinos and chaotic events
 
      01:49:01.040  are precisely those ones where there is underlying truth of the matter because for example the
 
      01:49:07.680  central limit theorem is going to happen whether or not I believe in it so that is where there is some truth and that is the rare case
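A small standard-library simulation of that casino point (nothing here is from the conversation beyond the claim itself): averages of many die rolls pile up around 3.5 in a bell shape whether or not anyone believes they will.

```python
import random
from collections import Counter

random.seed(0)

# Average of 100 fair die rolls, repeated 10,000 times.
means = [sum(random.randint(1, 6) for _ in range(100)) / 100 for _ in range(10_000)]

# Crude text histogram: the sample means cluster around 3.5 in a bell shape,
# as the central limit theorem predicts, regardless of anyone's credences.
buckets = Counter(round(m, 1) for m in means)
for value in sorted(buckets):
    print(f"{value:.1f} {'#' * (buckets[value] // 50)}")
```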
 
      01:49:12.560  so instrumentalism as applied to knowledge is very
 
      01:49:21.200  susceptible to your critique and that's why I'm not instrumentalist when it comes to epistemology
 
      01:49:25.040  but when it comes to probability probability is just a tool that is useful um in most cases the
 
      01:49:31.440  other corner case is um quantum mechanics but I'm not going to go there um I want to make one
 
      01:49:37.040  other point which is like it seems completely irrelevant to the discussion of long-termism um
 
      01:49:43.200  and that's actually that's a fair point uh like who like why would we care about this
 
      01:49:50.480  stupid infinite set thing um in the case of like moral values and and stuff right and so the
 
      01:50:04.480  majority of my piece I focus on other aspects right um but I did also provide a technical
 
      01:50:04.480  argument against both of their assumptions um so in the case for strong long-termism um they list
 
      01:50:11.680  two assumptions uh and so I needed to both point out that I think this is an immoral philosophy
 
      01:50:19.440  cum ideology but I also needed to refute it on its own terms right and so the refutation of long
 
      01:50:27.360  termism on its own terms um necessitated a technical argument undermining its assumptions
 
      01:50:32.560  otherwise the critique that I'm attacking it based on not liking its conclusions would be thrown at me
 
      01:50:38.560  because I had to undermine the actual argument itself so I think it's worth just restating
 
      01:50:45.200  that the case for long-termism or at least this kind of expected value case it relies on putting
 
      01:50:51.440  a kind of plausible-ish floor on the potential value of the long-run future so all you need to
 
      01:50:57.760  get going is a claim like the future it seems you know pretty likely that it could be
 
      01:51:04.320  enormously valuable and that's pretty much all you need the numbers are actually not necessary
 
      01:51:09.520  but they are kind of you can see how they're nice if you're like an analytic philosopher and you
 
      01:51:14.400  want to give the impression of being you know rigorous and technical um so although this particular
 
      01:51:20.880  style of reasoning might be inappropriate um I'm not sure criticizing that style of reasoning
 
      01:51:28.160  gets at the claim that comes out of it even the strong claim because all the arguments
 
      01:51:35.120  don't need you to come up with like an accurate guess as to the actual quote unquote
 
      01:51:45.680  size of the future in expectation the uh use of words instead of numbers I think is an
 
      01:51:50.880  excellent suggestion and I have no problem with people talking about yeah I'm pretty sure
 
      01:51:56.000  such and such is going to happen or I'm pretty unsure about such and such but that's great um what
 
      01:52:01.440  yeah I am concerned about is Shivani in the case for strong long-termism where she takes
 
      01:52:08.720  numbers pulled out of thin air and compares them to the expected value of malaria charities this is
 
      01:52:14.640  not me saying people are doing this this is one of the like main components of the argument is that
 
      01:52:23.120  if you are Shivani and you have to decide what um uh where to put your money you should also reason
 
      01:52:30.160  like her uh you should compute the expected value of totalitarian world governments and AI
 
      01:52:36.560  doomsday and Jon Hamm and then you should compare that to GiveWell's expected values and this is
 
      01:52:41.600  seen as like the pinnacle of reasoning there's an extended narrative made about Shivani that
 
      01:52:52.400  fed through the entire paper and the point of this is that we're all Shivani like this is the point
 
      01:52:57.920  that MacAskill and Greaves are making we are all Shivani we should do what Shivani does uh which is
 
      01:53:04.480  to basically give up on Africa give up on poverty fighting uh and we should just put money into
 
      01:53:11.920  long-termism or better we should set up a constitutionally um dedicated long-termist foundation or
 
      01:53:19.840  better if we don't have money to do that or the time we should go into long-termism research
 
      01:53:24.800  directly it's it's it's this um black hole of ideology which is just sucking everything into it
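To see the shape of the reasoning being criticised, here is a toy version of the arithmetic. The probability figure is pulled out of thin air deliberately, which is the point being made; the $3,000-per-life and 10^15 figures echo the ones mentioned later in this discussion.

```python
# A well-evidenced option: roughly one life saved per $3,000 donated,
# so a $1,000,000 grant saves about 333 lives.
grant = 1_000_000
bednet_lives = grant / 3_000

# A speculative option: assume 10**15 future people are at stake and that the
# grant shifts the odds of a good long-run outcome by one in a billion. Both
# numbers are invented, and any non-zero guess behaves the same way.
future_people = 10**15
assumed_probability_shift = 1e-9
speculative_lives = future_people * assumed_probability_shift  # 1,000,000

print(round(bednet_lives))   # ~333
print(round(speculative_lives))  # 1,000,000
# However the made-up probability is tweaked, the astronomical payoff ensures
# the speculative option "wins" the expected-value comparison.
```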
 
      01:53:31.680  um and I think using words instead of precise numerical values would be a very useful way to
 
      01:53:42.480  uh prevent this slide from happening so I agree this might be cut but one pushback there is that
 
      01:53:51.040  you can just translate the argument that gets given in the paper you're talking about no no no
 
      01:53:58.800  you can in terms of vague words and then you just get the same conclusion but vaguer it's not as if
 
      01:54:05.360  uh the conclusion doesn't go away when you get less precise it's just the argument for it
 
      01:54:11.520  becomes less precise I don't think that's true I think they're relying on the very precise numbers
 
      01:54:17.040  that you get by comparing like the number of lives saved by GiveWell versus like the 10 to the
 
      01:54:23.520  3 or whatever lives saved if you donate to ai safety like it's very explicit about the number
 
      01:54:29.120  of future people the probability of saving lives for ai safety and as soon as you replace a possible
 
      01:54:35.520  10 to the 15 people with the sentence the future could possibly be pretty big the reasoning does
 
      01:54:41.840  not follow at all now you have vague things about maybe ai's a problem uh GiveWell we have pretty
 
      01:54:49.760  good evidence that GiveWell is very effective on the order of a life saved for three thousand
 
      01:54:54.880  dollars and now you can't do arithmetic with you can't do arithmetic with those statements right and
 
      01:55:00.640  so now you actually have to dive into the arguments for thinking like why is ai safety a problem
 
      01:55:05.920  why could it be a problem like what's the evidence for that um and so I yeah I don't think it's true
 
      01:55:10.880  that the argument would go through if you start using vague language I think it's like fatal
 
      01:55:15.440  to the long-termist case expected values aren't defined over words right they're defined only
 
      01:55:20.800  over numbers I don't know if this is helpful or not but it might be worth just like emphasizing
 
      01:55:26.560  then if I understand you're right that the point is not just our earlier discussion right of like
 
      01:55:31.600  subjective probabilities and issues with exactly what that number is but bringing in this other
 
      01:55:37.760  thing of like this astronomical value of the future which is infinity or just some really big number
 
      01:55:43.600  and then multiplying it together so just to like kind of reiterate it's not just the problem
 
      01:55:48.160  that the probability bit is wrong it's bigger than that it's that you then go on to multiply it
 
      01:55:52.400  with this insane number whether that initial subjective probability is 70% that there's
 
      01:55:57.680  going to be a nuclear war in a thousand years or point nought nought nought one percent or what
 
      01:56:04.000  have you it doesn't really matter it's the like multiplying it by infinity thing and then using
 
      01:56:08.080  that to come up with your decision that's the issue precisely and because you couldn't do this
 
      01:56:12.720  when you have data right like yeah so from a frequentist perspective fine but also from a
 
      01:56:18.400  Bayesian statistical perspective also you can't just come up with arbitrarily large numbers
 
      01:56:22.640  pulled out of thin air because there's something to constrain you there's some thing that will
 
      01:56:26.480  tell you if your assumption is right or wrong but when you are allowed to just pull up numbers
 
      01:56:31.920  and call them probabilities and then everything starts to become this just mushy goo of large
 
      01:56:39.120  numbers multiplied by small numbers and then a giant story is weaved behind it well hang on
 
      01:56:44.320  is the problem that we are paying too much attention or worrying too much about high stakes
 
      01:56:50.480  and low probabilities or is the worry that those probabilities are too arbitrary my guess is that
 
      01:56:57.600  there are actually two worries here one worry is that if you go along expected value style reasoning
 
      01:57:07.840  even when the probabilities you use are entirely legitimate and objective then you could end up
 
      01:57:15.120  still with kind of unreasonable sounding conclusions about the things we should do because
 
      01:57:20.480  the choices you make are dominated by outcomes which are very very unlikely but if they were to
 
      01:57:28.720  happen they'd be like exceptionally bad or exceptionally good which is why Ben and I continuously
 
      01:57:34.640  emphasize the arguments and explanations and theories component right like that's the thing
 
      01:57:38.640  that is the thing that's really under consideration and the probability is just a stand in for the
 
      01:57:44.240  fact that say we have a decent understanding of how to make nuclear weapons and the theory that
 
      01:57:49.920  there may be a nuclear mistake is is reasonable enough to put money to preventing this from
 
      01:57:55.600  happening but subjective probabilities are never a focal point of our thinking when it comes to
 
      01:58:03.600  the world it's always about theories and explanations and then subjective probabilities are sometimes
 
      01:58:08.160  like a nice to have on top of that but not anything to take much more seriously than that yeah sorry
 
      01:58:14.560  so I just wanted to like make that make that distinction because it does feel like there are
 
      01:58:17.600  kind of two two things going on rather than one and when you say this style of reasoning I think
 
      01:58:23.840  you're like referring to two things so just to be clear one is like subjective probabilities
 
      01:58:30.000  and the other is expected value reasoning the style of reasoning just like for listeners
 
      01:58:34.720  you don't have to know any more about expected values it's just make up some crazy scenario
 
      01:58:38.800  assume it's really unlikely but if it happens it would be devastating and then
 
      01:58:42.640  get everyone to work on that instead of things we concretely know about like poverty and
 
      01:58:49.360  that's that's the problem and the only reason this is taken so seriously is because of all this
 
      01:58:54.320  probability stuff that lies underneath it but again the focus of the thing that I think is a bad
 
      01:59:01.120  idea is doing that and then the reason people do that is because of all of this this giant iceberg
 
      01:59:06.400  like structure of philosophy that lies underneath it but it's just a bad style of argumentation that
 
      01:59:10.800  has very damaging consequences yeah this is literally just like to clear up my understanding here so
 
      01:59:16.960  i'm hearing there are two different things and they are two different things but like the point is
 
      01:59:22.240  that the types of arguments being made kind of rely on both things because even if you don't use
 
      01:59:26.560  even if you use infinity and stuff but then you don't use subjective probabilities you just
 
      01:59:31.600  look at the evidence because there is no evidence right just because we don't have the data on
 
      01:59:36.160  AI takeover or anything that's going to happen in the future then you have infinity but you have
 
      01:59:40.160  probability zero right and then it's not something you kind of need to worry about it's it's sci-fi
 
      01:59:44.640  right it's kind of kind of fiction stuff so it is the fact that both things are relevant here
 
      01:59:49.760  because you kind of need both and also like if we didn't have this giant
 
      01:59:55.120  literature of people celebrating the use of subjective probabilities no one would argue in
 
      01:59:59.280  this style because it's self-evidently ridiculous but it's only because it's taken
 
      02:00:03.680  it's given such currency because every conversation pointing out the ridiculousness of this quickly
 
      02:00:09.360  moves into a conversation about probability and then that's what you're arguing about
 
      02:00:15.280  and so so it's necessary to go into the probability stuff because you have to argue in the weeds but
 
      02:00:20.400  ultimately that is just the reason why this kind of argumentation has gotten such currency and is
 
      02:00:25.760  doing such damage like it's something that Ben mentioned at the end of I think our first giant
 
      02:00:30.960  shot at long-termism which is just like we all care about the future and what
 
      02:00:39.040  Popperian philosophy kind of highlights is that the best way to help the future is to just work
 
      02:00:43.440  on solving problems right here right now that we know how to solve and a huge problem that I'm like
 
      02:00:49.040  super interested in is the problem of trying to enable self-education in developing countries
 
      02:00:56.800  because I think if like kids in say Africa and India were able to like self-educate using the
 
      02:01:02.560  internet and stuff then we would have just huge amounts of knowledge coming out and that would
 
      02:01:06.720  I think be like the meta x-risk intervention that would address all of these other concerns right
 
      02:01:12.560  huge huge and so like you can you can frame that concern from purely a long-termist lens you can
 
      02:01:19.280  say I care about the long-term the best way to safeguard the long-term is to enable knowledge
 
      02:01:24.640  production now because the one thing that the future is going to need is more knowledge and the
 
      02:01:29.280  best way to do that is education and so you could you could frame it this way and notice that I
 
      02:01:35.600  don't have to talk about probabilities I just talk about the power of knowledge right and so that's
 
      02:01:40.480  the main thing yeah that is really interesting and I think it hits interestingly enough on one of
 
      02:01:46.480  the reasons why I'm really keen about long-termism generally is that I feel you do need to take this
 
      02:01:52.240  kind of approach in order to justify those types of interventions so we talked a lot right about
 
      02:01:58.000  kind of GiveWell and these really like evidence RCT backed kind of studies but it's also really
 
      02:02:03.280  important to notice that those studies are based on like a very narrow perspective of like what
 
      02:02:09.520  they're actually measuring a lot of those RCTs are from a very explicit cost benefit analysis which
 
      02:02:14.640  means you look at the dollar cost and you look at the dollar benefit and the dollar benefit they
 
      02:02:18.720  typically look at is wages and wages are a really poor indicator right if you're interested in
 
      02:02:24.160  education and this kind of knowledge creation and if you want to value benefits more
 
      02:02:28.800  broadly it's really bad to look at just like you know what wage a scientist or something might get
 
      02:02:33.360  paid for the value they're creating and I feel that if the consequence of rejecting long-termism
 
      02:02:39.760  is that you only then look at these kind of GiveWell studies which are really great and I think
 
      02:02:43.760  add a lot of value into it but really also mean you're losing out on these other types of interventions
 
      02:02:48.640  that might be really useful you're actually hurting the the kind of causes you're trying to promote
 
      02:02:52.480  here. I would just disagree that you need long-termism in order to justify looking at other
 
      02:02:58.240  outcomes right so I think the criticism there is that RCTs and development economics can be
 
      02:03:06.080  too particular in their goals right or their methods are constrained and we should realize that we
 
      02:03:11.360  should criticize them and potentially fund studies to look at other things or be on guard for
 
      02:03:16.240  only paying attention to that which can be measured yeah absolutely and that's a totally
 
      02:03:21.920  valid criticism to make and and trying to look at at other things or rather looking at like other
 
      02:03:31.200  studies to fund or just paying attention to other things is a valid criticism that can be
 
      02:03:35.760  made totally independently of long-termism so I don't maybe you can just illuminate where
 
      02:03:41.200  long-termism would actually start doing work there for you because I don't think I see that bit.
 
      02:03:45.680  Okay if you take like a more long-term perspective and I'm not talking about a thousand years here
 
      02:03:50.800  but if you just take like the general heuristic that you want to build a world that's really good
 
      02:03:55.520  in 50 years and then that is your let's say proxy goal or something and then you break down
 
      02:04:00.640  okay what do we need for that and then you say okay well we need economic growth for that what
 
      02:04:04.080  drives economic growth economic growth gets driven by technologies which gets driven by inventions
 
      02:04:09.280  which gets driven by education and therefore I prioritize education now this has nothing to do
 
      02:04:13.360  with this kind of expected value utility thing we were talking about before but it does come from a
 
      02:04:17.760  long-term perspective of you're interested in creating a world that's not just good tomorrow
 
      02:04:22.320  or in a week's time or in a year it's good in the long term and from that you then reason back
 
      02:04:27.040  that education is something you care about and you're willing to promote even if that means
 
      02:04:31.200  not giving money to malaria nets or deworming pills right I think I think that's where
 
      02:04:35.040  where I'm kind of coming from. One thing that I dislike about the term x risk is it carries the
 
      02:04:39.840  assumption that every other problem besides this is less important and so I think that there are
 
      02:04:48.160  many different ways to view how to best make the world a better place for our descendants one is
 
      02:04:56.000  taking an education perspective and that's kind of the one that I like but another is taking economics
 
      02:05:00.000  perspective and another's taking an environmental perspective and another is taking a gender
 
      02:05:04.640  equality perspective or a marriage equality perspective there's a thousand different lenses
 
      02:05:10.800  that we can use to view how to make the world better and I think that that's great and I think
 
      02:05:17.280  everybody should have a kind of a personal relationship with that argument which appeals
 
      02:05:24.560  most to them and then work on that but the implication or not implication like the direct
 
      02:05:32.000  view of people at FHI and stuff is that there are actually only four or five things that are
 
      02:05:40.800  important these are the existential risks and everything else that's a non existential risk
 
      02:05:48.080  is almost by definition less important and this is like directly stated on the 80,000 hours
 
      02:05:54.240  website and just the way that they talk about prioritizing long-term things over short term
 
      02:05:59.520  and Finn you're shaking your head so you should respond but yeah okay I'm not saying I agree with
 
      02:06:04.960  those claims but I'm also keen to make some distinction here between claims about what's best
 
      02:06:15.120  to do at the margin and what would be best for everyone to do in absolute terms in some ideal world
 
      02:06:23.840  so do fans of long-termism think that other interventions are not important
 
      02:06:33.200  obviously not and I'm not suggesting you think they are saying that but they're not to be clear
 
      02:06:42.240  may I read a quote just to give a flavor of like what I'm referring to so this comes from
 
      02:06:52.080  Nick Bostrom and I think it was one of his first papers on existential risk so he's talking about
 
      02:07:00.560  events like Chernobyl, Bhopal, volcano eruptions, earthquakes, droughts, World War One, World War Two,
 
      02:07:07.520  epidemics of influenza, smallpox, the black plague and AIDS and about this he writes
 
      02:07:13.360  these types of disasters have occurred many times and our cultural attitude towards risks
 
      02:07:19.280  have been shaped by trial and error in managing such hazards but tragic as these
 
      02:07:25.040  events are to the people immediately affected in the big picture of things from the perspective
 
      02:07:30.480  of humankind as a whole even the worst of these catastrophes are just mere ripples on the surface
 
      02:07:35.920  of the great sea of life they haven't significantly affected the total amount of human suffering or
 
      02:07:41.520  happiness or determined the long-term fate of our species this is in the seminal paper on existential
 
      02:07:48.720  risk so the implication if not the direct statement is that the only thing that
 
      02:07:56.560  matters are existential risks and things like World War One, World War Two, AIDS and influenza
 
      02:08:03.840  and smallpox are mere ripples, mere ripples so this is what I'm drawing attention to I guess
 
      02:08:11.360  there's no problem whatsoever with working on smallpox or AIDS or preventing world
 
      02:08:18.640  wars these are all legitimate concerns but when you call the thing that you're working on
 
      02:08:24.480  existential risks and everything else that other people are working on short-term
 
      02:08:28.400  short-termist then this is carrying the implication that the problems that you deem to be significant
 
      02:08:37.920  are more significant than everybody else's problems and I would never in a thousand years
 
      02:08:42.160  say that enabling children to self-educate in developing countries is so important that not
 
      02:08:49.760  doing that is an existential risk that's like it's almost like moral blackmail to accuse people
 
      02:08:54.720  who don't share my view as contributing to the suicide of our species is what this is implying
 
      02:09:01.760  and this is hugely conversation stifling also to provide another example unfortunately I don't
 
      02:09:09.520  have a quote for it so you'll just have to take it on faith that I'm representing the views of
 
      02:09:14.560  people faithfully but I've you know I think probably you guys have come across this too like I've
 
      02:09:20.480  I've been in lots of conversations with people sympathetic to long-termism who have like stopped
 
      02:09:25.680  caring about animal welfare for example because they think it's like a very short-term cause and
 
      02:09:31.280  with clean meat on the way the amount of suffering is like it's finite right and so it can't possibly
 
      02:09:38.320  be with us for more than say like 50 years at which point we'll have like clean meat and so
 
      02:09:43.360  they're choosing to work on like very long-term nebulous problems like certain existential risks
 
      02:09:49.760  they're going to work on AGI and not care about animal welfare precisely because of these arguments so
 
      02:09:56.000  anyway just another example of how like it really does impact your day-to-day and like what you what
 
      02:10:01.600  you value it just just to just to like add something this is the explicit moral recommendation from
 
      02:10:10.720  the case for strong long-termism so the idea then is for the purposes of evaluating actions we
 
      02:10:16.320  can in the first instance often simply ignore the effects contained in the first 100 or 1000 years
 
      02:10:22.400  what do you think people are doing when they ignore animal suffering they're following the
 
      02:10:26.720  recommendations from FHI the explicit ones and so this is this is the problem that Ben and I are
 
      02:10:33.840  trying to highlight which is that this flattens everything I think one absolutely crucial point
 
      02:10:39.840  to make here is that at least on the best exposition of long-termism the claim is not that the long-term
 
      02:10:50.160  matters because the short term doesn't matter anymore yeah yeah or that the short term
 
      02:10:54.320  matters any less everything we already cared about stays fixed all of those things are just as
 
      02:11:02.720  important in absolute terms as they were before and what's happened here is that we've just
 
      02:11:08.160  if you buy into it anyway the idea is that we've realized there's this kind of repository of like
 
      02:11:16.240  potential value and potential suffering that is potentially even greater so it's not like you
 
      02:11:22.960  become like any less obliged to avoid kind of certain harms but you can trade off between those things
 
      02:11:28.800  now right yes yeah by the way I want to just add risk of sounding like a sacrifice I want to
 
      02:11:34.480  caveat what I've said by saying that's what the long-termist would say right as for what I would say
 
      02:11:40.480  you know what I would think about that that's that's the idea sure like if you kind of normalize by
 
      02:11:46.960  what's relatively more important than anything else then short-term interventions become
 
      02:11:54.000  less relatively important but that's a kind of trivial consequence that you get all the time
 
      02:12:01.440  like if you recognize that animals have moral patienthood then humans are less morally significant
 
      02:12:08.320  as a proportion of all those sentient beings in the in the world once you've made that realization
 
      02:12:12.480  but that's like neither here nor there but but I think you're making a really important point and
 
      02:12:17.600  I think we should talk about it more yeah no I definitely agree with Ben what you said there about
 
      02:12:24.320  like trade offs and stuff and I think if somebody can credibly show me how like eating meat is going
 
      02:12:30.800  to bring on like a faster like clean meat future I can like imagine like becoming convinced and
 
      02:12:38.080  finding that all right I just think in a lot of I just often see those claims being made and not
 
      02:12:42.160  really being backed up in that kind of way I just wanted to pick up on something Finn said about
 
      02:12:47.520  how it's not that we're saying short-term suffering doesn't matter it's just that in absolute terms
 
      02:12:55.120  when you compare it against future suffering it's a it's a smaller slice of the pie it's it's
 
      02:13:01.840  there's some suffering now but there's a potentially infinite amount of suffering in the future and
 
      02:13:06.000  that's what we need to address I would just say that your response is what any utopian philosophy
 
      02:13:14.480  would be able to say right we're not saying that the lives of the working class don't matter we're
 
      02:13:20.320  just saying that the best way to improve the lives of the working class is to overthrow the capitalist
 
      02:13:25.280  bourgeoisie system right and so of course the suffering matters it's just the most effective way
 
      02:13:33.280  to reduce suffering is to try to overthrow the capitalist hegemony this would be kind of the
 
      02:13:38.720  style of argument that would be made and the problem is always comparing yeah potentially infinite
 
      02:13:44.960  future goodness against finite suffering right now and that's the similarity between
 
      02:13:49.920  long-termism and other utopian forms of philosophy yeah sure so we were talking about it kind of
 
      02:13:57.760  slightly more technical worries about formal frameworks and now it feels like we've moved on
 
      02:14:03.680  to kind of practical and epistemic worries yeah so there's an interesting distinction that I've seen made a
 
      02:14:11.520  couple times now where people say listen you're attacking say the the implementation of these
 
      02:14:18.480  ideas and in practice like your critique that makes sense in practice fine but theoretically you
 
      02:14:23.760  haven't really touched the theory you haven't touched the underlying core ideas it's just
 
      02:14:27.840  that you're criticizing the implementation um I only care about implementation I only care
 
      02:14:34.080  about the practical consequences of a philosophy the the deep theoretical underpinning is not a
 
      02:14:41.040  concern of mine that is of interest to other people that's fine but when it comes to ethical and
 
      02:14:46.880  moral philosophies I only care what it actually does in regards to how people treat one another
 
      02:14:54.320  and if what it does is it makes people say down-weight the significance of providing
 
      02:14:59.920  pain relief medication to the poor and up-weight the significance of working on Jon Hamm problem
 
      02:15:06.800  scenarios then I don't care how beautiful the underlying theory is I care only about the
 
      02:15:12.160  practical implications yeah you can get rid of the bathwater and keep the baby in the sense that you
 
      02:15:17.920  can change course realize that there's some core to the theory that is plausible and then the
 
      02:15:24.720  practical working out is really worrying let's kind of notice that worry and then realize that
 
      02:15:31.440  kind of core thought behind long-termism in a less harmful way so presumably the theory and
 
      02:15:39.440  the practice both matter um but I was going to mention here there's a kind of interesting
 
      02:15:45.680  comparison to other kind of arguments that you hear um so what's the criticism here it's something
 
      02:15:56.080  like um you know long-termism claims to make the world go better in the very long run if it were
 
      02:16:05.600  put into practice something like that and then you say look the kind of reasoning that long-termism
 
      02:16:12.160  is using is just the kind of reasoning that totalitarian regimes of the past used to justify
 
      02:16:20.400  all manner of harms and persecution and human tragedies so even if in theory you claim to care
 
      02:16:31.520  about doing the good thing in the long run I have a reason to expect that in practice even if your
 
      02:16:38.320  intentions are perfectly good that you could very well end up doing an awful lot of harm right
 
      02:16:45.120  okay I said there was a comparison and then the comparison is to these kind of self-undermining
 
      02:16:50.160  objections in the case of other ethical views like utilitarianism where you get a thought
 
      02:16:56.560  like this utilitarianism says that um the best actions the best things to do are those actions
 
      02:17:06.320  which maximize well-being or other good consequences but if we only acted on that basis according to
 
      02:17:15.120  that rule and we didn't care about things like telling the truth and respecting people's autonomy
 
      02:17:22.000  or rights then people would lose faith and trust in one another and just like all kind of institutions
 
      02:17:30.160  of truth telling and trust would just crumble and actually the consequences would be terrible so
 
      02:17:36.320  like far from you know maximizing well-being utilitarianism implemented in practice and thank
 
      02:17:42.160  God it's not um would actually make people worse off and the conclusion is then that utilitarianism
 
      02:17:49.200  like undermines itself right that's a pretty bad argument because the response is well you're
 
      02:17:56.000  not arguing against utilitarianism you're arguing against a kind of naive first working
 
      02:18:00.800  out of what utilitarianism might say and what you've shown is that actually it doesn't say that
 
      02:18:06.000  right you kind of just revise the uh the recommendations which you think it makes
 
      02:18:11.440  and then in the case of long-termism if the argument is something similar which is look long-termism
 
      02:18:18.000  says that the best actions are those actions at least in a lot of context which make the very
 
      02:18:24.160  long-run future go best um but those actions in practice would like be really disastrous
 
      02:18:31.200  well I think the most that shows is that a naive version of long-termism is bad and actually
 
      02:18:37.680  that's still quite important because maybe like we need to kind of keep revising what we think it
 
      02:18:44.000  implies in order to just avoid these like dangerous naive first workings out does that make sense?
 
      02:18:53.440  so let me reframe for long-termism just to make sure I understand so the critique is something like
 
      02:18:59.440  fine if you assume that long-termism is going to tell us to only care about the future
 
      02:19:06.160  and demonstrate that that's going to lead to bad consequences um and in the process therefore
 
      02:19:13.440  destroy our future potential then that will demonstrate that we just need a more sophisticated
 
      02:19:22.160  version of long-termism right so that doesn't undermine long-termism per se it just shows us that
 
      02:19:27.440  our initial conception of what it looks like to work on the long-term future is flawed
 
      02:19:34.080  is that right yeah yeah that's right I think one good thing to bring in at this point as well is
 
      02:19:43.360  right back in the intro right we were very like broad or like general in defining like what
 
      02:19:48.160  long-termism is it might be useful to bring in like some more specific definitions where
 
      02:19:54.000  the EA community is at the moment trying to work out you know what there is and there's debates
 
      02:19:58.320  about like what long-termism implies and I think so far we've generally it feels like we've
 
      02:20:04.080  been talking about this form of um like either patient long-termism which says that there's no
 
      02:20:10.240  point doing anything today you should invest your resources gain interest and then you'll have more
 
      02:20:16.000  to do in the future um where there's like a lot more value to be created um you also have this
 
      02:20:23.520  form of urgent long-termism which relies on this idea that today is really special and it means
 
      02:20:30.080  that we can lock things in today that will have an effect like many years into the future whether
 
      02:20:35.920  that be a hundred years or a thousand years I think it's more dubious but that's like that
 
      02:20:40.720  kind of way and then within that you've got like two other kind of schools of thought you've got
 
      02:20:45.440  broad long-termism which says that you're not really sure what anything does um like specifically
 
      02:20:51.600  but you should just generally invest in institutional capacity or just like broader ways that you
 
      02:20:57.360  might help the future and then you've got targeted long-termism which is much more specific where
 
      02:21:02.400  you take a particular S-risk or a particular path change and you focus your efforts there so
 
      02:21:07.200  all of those definitions are from what I last read on the 80,000 hours website but it might be
 
      02:21:12.880  good to kind of distinguish as well as in what Finn was saying um that long-termism can mean a
 
      02:21:18.560  bunch of different things and we're still trying to work out what that is and some of those things
 
      02:21:22.720  just might be naive and some of them you know might actually be getting it at something useful
 
      02:21:28.400  strikes me as decently analogous to arguments about what is true Christianity or what is true
 
      02:21:33.840  Marxism um where you get this like splintering effect uh but in fact Marxism and Christianity
 
      02:21:41.200  and long-termism is just what Marxists and Christians and long-termists do it's not a
 
      02:21:47.520  specific like we're just arguing about different slivers of something but again all that matters is
 
      02:21:56.800  how it makes people act in practice um yeah yeah this goes back to what Vaden was saying about
 
      02:22:01.440  like in practice versus theory right like I think Finn your question assumes that there's some
 
      02:22:05.280  theory this I'm like theory of long-termism some true essential theory of long-termism and we are
 
      02:22:10.640  just lowly humans trying to interpret this like amazing mathematical slash symbolic um theory
 
      02:22:17.600  that we like have partial access to and then we're exploring in what ways it should be correctly
 
      02:22:21.840  instantiated but like I just don't see the distinction I see like people have ideas and they act on them
 
      02:22:27.200  and we should be criticizing the ways in which it makes them act like it's so clearly
 
      02:22:31.120  not the case that some idea is handed down to us and we're in the business of trying to discover
 
      02:22:35.280  what that idea is and we're kind of getting it wrong and we're like archaeologists uncovering
 
      02:22:40.960  this yeah I kind of got that though from your conception of like we you know we have this naive
 
      02:22:44.960  implementation and then based on the results we like we adjust so we can try to try and find the true
 
      02:22:50.560  conception of long-termism yeah maybe true is the wrong word maybe just best is another word
 
      02:22:54.880  okay it's you know you can you can refine theories in other contexts and it's like
 
      02:23:00.400  yeah there's no suggestion that you're uncovering something but it's possible just to improve
 
      02:23:07.920  a theory and maybe that's what we do yeah for sure and that's that's fine um I think a more
 
      02:23:12.080  fruitful way to improve a theory would be to recognize that certain moves should be off the table
 
      02:23:18.480  the one I will continue to repeat is the multiplication of big numbers and small numbers and then making
 
      02:23:22.880  serious decisions on that let's just say whatever form of long-termism we're going to have is going
 
      02:23:27.520  to not do that because now we're talking about the consequences of a certain move and contrast that
 
      02:23:33.840  against here is a taxonomy yeah 500 different variations of long-termism yeah you have patient
 
      02:23:39.360  long-termism and slow patient long-termism and then what's the true definition of
 
      02:23:44.000  and what's the best one that seems to me to be a less fruitful way to improve theory let's just
 
      02:23:50.960  focus on specific forms of argumentation which we all recognize are ineffective at accomplishing
 
      02:24:00.320  our goals and our goal in the broadest case is to make the world better for everyone that's that's
 
      02:24:05.760  just the broadest goal that we are all united around and so that I think is better than
 
      02:24:12.240  trying to say well that sounds like a naive implementation of long-termism and so let's
 
      02:24:16.720  consider patient strong soft flavors of long-termism I don't know but but there's like a thousand
 
      02:24:22.640  different definitions and now we all have to memorize and understand the various definitions
 
      02:24:26.640  um and this is a waste of our time when we can just say let's just not do this thing anymore
 
      02:24:31.600  and let's not care about the different definitions let's just all agree we shouldn't do this thing
 
      02:24:35.360  well not if one of them turns out to be plausible you you said i'm you know in the example of Christianity
 
      02:24:41.440  well first of all you said look we're only interested in how this thing works out in practice
 
      02:24:44.960  um and that obviously matters enormously but there's also a space to be interested in just what's
 
      02:24:52.480  true or what's the best version even if it doesn't get implemented and you know the example of
 
      02:24:57.600  Christianity is even if everyone or almost everyone who calls themselves a Christian doesn't live up to
 
      02:25:04.480  like true Christian values it would still be interesting to find out if some version of Christianity is
 
      02:25:09.280  true even if no one lives up to that version and especially in this case where you know presumably
 
      02:25:15.280  people are interested in how to improve this thought before the wheels really start turning
 
      02:25:21.600  and the movement takes off as you know they're kind of hoping it will going forwards so it's useful
 
      02:25:26.720  to think about what the best theory is and to do that taxonomy in order to to make that decision
 
      02:25:31.680  right that's all fine but when we're talking about ethical theory theories that are true are those
 
      02:25:35.520  which improve the most people's lives right ethics is a unique case in this setting
 
      02:25:40.960  because we're not talking about theories of physics we're talking about theories of how
 
      02:25:43.600  human beings relate to each other um and there the like theory and practice are the same um
 
      02:25:52.000  ethical theories are only as good as they improve the way that human beings relate to each other
 
      02:25:56.960  at least when i say i'm interested in ethics that's what i'm interested in is improving the way
 
      02:26:01.920  that human beings relate and if you say well ethics is actually about um figuring out what the right
 
      02:26:07.680  discount factor is on the future well-being of people between now and the heat death of the universe
 
      02:26:14.960  then that's not something that i think is interesting from an ethical perspective
 
      02:26:20.400  but i can't control obviously what the the whole field is talking about but uh but i just think
 
      02:26:25.520  that this distinction between theory and practice is a silly one when we're talking about ethical
 
      02:26:30.880  theories i might be misunderstanding what what you're saying there but when you're talking about
 
      02:26:36.160  not being as interested in in like the theory behind like the discount right i do see like how
 
      02:26:42.640  that has incredibly important like impacts in practice as well though like what like for example
 
      02:26:49.440  i mentioned this like um you know thing in like government cost benefit analysis where you have
 
      02:26:54.080  like an explicit term for your pure time preference, right, and that is something that

      02:27:00.560  philosophers have made a case for being zero, but that isn't zero, because of norms or

      02:27:06.240  other reasons within economics that we can get into, but I think it's a bit of

      02:27:11.440  a tangent, right. But that is a very clear avenue to me where theory, and

      02:27:15.840  being able to work these things out maybe in a more abstract way, can really impact real-

      02:27:20.880  world decisions. Yeah, so the idea here is that deciding which theory is best is just deciding

      02:27:27.200  which would be best in practice, and there shouldn't be daylight between those two things.
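(A minimal sketch, in Python, of the pure-time-preference point raised above: how much a fixed far-future benefit counts for today under a zero versus a small positive rate of pure time preference. The benefit size, horizon, and rates are illustrative assumptions, not figures from the conversation.)

    # Illustrative sketch only: exponential discounting under different
    # rates of pure time preference.
    def present_value(benefit, years, pure_rate):
        """Value today of a benefit received `years` from now."""
        return benefit / ((1.0 + pure_rate) ** years)

    benefit = 1_000_000   # size of the future benefit (arbitrary units)
    horizon = 200         # years until it arrives (assumed for illustration)

    for rate in (0.0, 0.015, 0.03):
        pv = present_value(benefit, horizon, rate)
        print(f"pure time preference {rate:.1%}: present value = {pv:,.0f}")

    # At 0% the far-future benefit counts at full weight; at 1.5% or 3% it
    # shrinks to a small fraction, which is why this single parameter can
    # flip a government cost-benefit analysis.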
 
      02:27:31.040  that's not true i'm not sure we disagree because you were you were making yeah i don't i think
 
      02:27:35.440  there's a semantic difference in here i think yeah because i guess maybe i was
 
      02:27:40.160  misunderstanding um that there is a claim that this could be true even if it leads to horrible
 
      02:27:46.720  consequences in practice um and i'm saying that if it leads to horrible consequences in practice
 
      02:27:50.960  then it's not true. Yeah, yeah, I mean, we're all kind of, like, broadly

      02:27:55.440  consequentialist, so I think we agree there. But truth in this context refers to practice,

      02:28:01.680  and so there we can't make this distinction, is my point. But people want to make the claim that
 
      02:28:06.240  yeah sure long-termism could kill everyone but it could still be true right um and it could still
 
      02:28:11.440  be the case that long-termism is true even if it creates a huge amount of suffering
 
      02:28:15.280  nope no i think maybe it's something like a naive version of long-termism
 
      02:28:23.840  can end up being incredibly harmful but it doesn't follow that some version of long-termism could be
 
      02:28:29.680  true and therefore really good. I think maybe to give an example: if we take naive

      02:28:38.160  long-termism to mean that all the money that we were going to donate to malaria nets or deworming

      02:28:43.920  we're now just going to put into a bank account, and we're going to wait for interest to accrue,

      02:28:49.120  and then at some point we'll spend it, we can imagine where that leads to just a

      02:28:53.280  really bad scenario where you never end up spending it; it's just a massive drain on resources and

      02:28:58.960  the world becomes a worse place. That would be a naive example of long-termism which we

      02:29:03.680  might fall into, right? I'm not saying that's the case; the argument for patient

      02:29:09.040  long-termism is much more nuanced, but we'll just take that as a naive kind of example. We could

      02:29:13.840  also imagine a more sophisticated version, which I suspect, Vaden and Ben, you might be more

      02:29:18.480  sympathetic to, which is the case that, okay, we want to improve the long-term future and

      02:29:23.040  save this astronomical value, and we do that by investing in education, investing in malaria nets

      02:29:29.280  and the like, because that is our best route to that future, which I don't think you disagree with.
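(A minimal sketch, in Python, of the "park the donations and wait for interest" arithmetic in the naive patient example above; the return rate and waiting periods are illustrative assumptions, not figures from the conversation.)

    # Illustrative sketch only: donate now, or invest and donate later.
    def future_pot(amount, real_return, years):
        """Value of an invested donation after compounding for `years`."""
        return amount * (1.0 + real_return) ** years

    donation = 10_000     # what we could give to bed nets today (arbitrary units)
    real_return = 0.05    # assumed constant real rate of return

    for wait in (0, 25, 50, 100):
        print(f"wait {wait:3d} years -> pot = {future_pot(donation, real_return, wait):,.0f}")

    # On paper the pot keeps growing, so it always looks better to wait one
    # more year; that is exactly the worry voiced above, that the money
    # risks never being spent on anyone at all.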
 
      02:29:34.000  It's just that you then say: okay, well then it's obvious, so what's the point in doing all of this? And I think

      02:29:39.360  that leads to another disagreement we might have, where I don't think that's the

      02:29:43.360  most obvious case, and there are interesting things long-termism can bring in. But I think that

      02:29:47.920  might help distinguish between a naive interpretation of long-termism and maybe a more

      02:29:53.840  reflected one. I think, again, this is mostly just semantic
 
      02:29:59.040  differences i think if you just replace interpretation of long-termism with just calling it a separate
 
      02:30:05.120  theory then like yeah i completely agree with it like well i'm just i'm just concerned about like
 
      02:30:09.040  the actionable elements of the theory so if if what you know you call theory a the theory where
 
      02:30:14.320  you're using expected value calculus to say that the amount of potential future um or
 
      02:30:22.080  a amount of happiness in the future is like infinite and this causes you to act in a certain
 
      02:30:26.240  way i'm criticizing that and then theory b is the one where you adopt that but then you argue for
 
      02:30:31.920  like patient philanthropy or whatever then i'll criticize that in a different way and so we can
 
      02:30:36.240  call these just like different interpretations of the same theory or we can just call these like
 
      02:30:41.040  theory a theory b theory c whatever and then i'll just like criticize each theory in in turn right
 
      02:30:46.400  and and and and you don't even have to criticize the entire theory like when i'm critical of
 
      02:30:51.840  long-termism like i can grant almost everything up and like to the expanding cone that's fine
 
      02:30:59.920  let's say all of that is true i frankly don't really care too much about that i just care how it
 
      02:31:05.680  makes people act in practice um and and and so let me just grant all of that um and
 
      02:31:12.000  and so the thing that i claim is making people act poorly in practice is this one rhetorical move
 
      02:31:18.720  and let's just excise that and then we can continue on our way talking about the 50 different
 
      02:31:24.000  variations of long-termism and i'll be happy and i'll be happy because the concern about it making
 
      02:31:29.920  the world worse for people who are alive right now will be um will be gone or at least um a predominant
 
      02:31:36.480  source of i think suffering will be removed from most of these variations of long-termism because
 
      02:31:42.480  the the suffering of people alive right here right now today won't be um swallowed by this
 
      02:31:50.000  comparison to an infinite amount of people in the future yeah i mean presumably no one
 
      02:31:54.400  wants to believe in a version of long-termism which they also expect to have terrible consequences
 
      02:31:59.120  So yeah, what you're saying is definitely true. Yeah, but same with all utopian

      02:32:05.200  philosophy, right? Yes, no utopian philosophy wants to be a philosophy that's going to have

      02:32:09.680  terrible long-term or short-term consequences; there's no one that's in favor of that, it's just an

      02:32:12.800  inevitable byproduct, it's an inevitable byproduct of this kind of moral calculus. Yeah, I mean, I
 
      02:32:19.440  take it the problem isn't so much caring about your actions having

      02:32:27.120  really great consequences in the long run, which is something that totalitarian ideologies have in

      02:32:35.200  common; the problem is being wrong about which actions do that. But one thing I wanted to
 
      02:32:41.520  mention, which you can get your teeth into, is that we were thinking about examples of naive

      02:32:46.720  workings-out of long-termism which could potentially be harmful, and it occurs to me that maybe one

      02:32:52.160  example is this kind of camp of, like, degrowth environmentalist types. Yeah, and the idea there
 
      02:33:03.520  is um you know look what we're doing right now well all kinds of resource extraction and other
 
      02:33:10.400  practices is like totally unsustainable you just kind of draw out the line and so what do we do
 
      02:33:17.520  we've got a like massively down scale everything we're doing maybe revert back to to you know more
 
      02:33:24.480  natural uh modes of life tighten our belts and in general we'll just like slow down what gets
 
      02:33:31.280  called progress but is really a kind of acceleration towards like mad max yeah uh hellscape um
 
      02:33:38.480  that seems kind of instructively mistaken and it feels like you know lots of people call
 
      02:33:47.120  themselves long-termists and then also buy into that kind of handful of views. So yeah, I mean,
 
      02:33:52.240  curious what you think about that but i'm skeptical just really quickly which is like worth just
 
      02:33:57.200  explicitly pointing out how different that vision is from, like, the Nick Bostrom stuff, which, Ben and

      02:34:02.640  Vaden, you were, I think, mostly criticizing, which is this infinite-value, kind of, we

      02:34:07.520  colonize space and, like, the whole universe, right? This is a very different interpretation that

      02:34:12.560  you can reasonably also call yourself a long-termist under, I guess, right?
 
      02:34:17.600  Although, just to be explicit: man, I forgot his surname... Roman Krznaric, I think;

      02:34:25.280  he wrote this book, The Good Ancestor, which I think is actually, overall, really great,
 
      02:34:29.840  and um you know it's like very explicitly a kind of sustained defense of long termism but he on
 
      02:34:34.640  one hand he does kind of um mentions just how big the future could be as a motivation for caring
 
      02:34:41.120  about doing things now to make it better and then he kind of makes us move and says well therefore
 
      02:34:46.960  we should just like um do all this kind of slowing down de-growth stuff um yeah no i i was just
 
      02:34:53.840  gonna say i'm basically equally as critical of all the de-growth stuff as like the infinite value
 
      02:34:58.720  of colonizing space stuff um i'm i'm just curious uh the last comment about roman i forget his last
 
      02:35:05.920  name how does he get from like valuing tons of future people to de-growth environmentalism because
 
      02:35:12.560  my take on like the de-growth stuff is um sort of it's this like ideology that humans are sort of like
 
      02:35:19.920  naturally bad uh for the environment and we should only take up like a certain amount of space and
 
      02:35:24.560  we're ruining uh this glorious earth that without us would be flourishing and um we don't actually
 
      02:35:30.800  want that many people in the future because humans you know what's so special about us we're not so
 
      02:35:34.560  good, we're just like another animal. And so I'm very curious how he squares... I'm equally

      02:35:39.760  critical of both of these views, but I'm very curious how he sort of squares both of them. Yeah, so
 
      02:35:43.920  i think the thought is that more or less the number of people who are we should expect to live in the
 
      02:35:48.720  future is pretty much fixed um as good or bad as that number is um and the number in particular i think
 
      02:35:58.800  he was thinking along the lines of how long uh as the earth got left uh before you know it's scorched
 
      02:36:05.760  by the sun let's just expect to kind of roughly similar number of people to live century on century
 
      02:36:14.000  until that kind of timescale and you get a number from that holding that fixed how do we make that
 
      02:36:22.080  go better well i think i see i see there's a flavor of technological skepticism where he raises the
 
      02:36:29.440  Bostrom stuff and the escaping-Earth stuff and the transhumanism stuff, and he thinks that's nice
 
      02:36:35.040  but feels a bit sci-fi for me yeah so we have a choice between continuing to do what we do
 
      02:36:43.680  uh or we kind of start shifting away in scaling down and then we kind of spread out our resources
 
      02:36:51.600  more equitably or something. I haven't read Roman's book, so I don't know if this is

      02:36:58.000  the way he kind of thinks about it, but to go back to Fin's cone, right, I think one way you
 
      02:37:01.440  can think about it is that if you try to expand the cone too much it just collapses right so instead
 
      02:37:06.240  what you focus on is just like making the the cylinder as long as possible and that is like kind
 
      02:37:11.040  of where the value comes from i was going to say that i think both uh camps are basically doing the
 
      02:37:16.720  same thing which is worrying about the long-term future and thinking the best way to preserve it
 
      02:37:21.280  is to stop doing everything that's got us this far um stop improving technology or stop working on
 
      02:37:27.040  short-term problems which um by solving we can make it a little bit better for the next generation
 
      02:37:32.880  there's this idea that all the progress which has come before us is about to end and the best
 
      02:37:40.720  way to preserve the long-term is to change radically what we've done up until this point
 
      02:37:46.800  and so the long-term is short-term is thing is one flavor of this but just the environmentalist
 
      02:37:52.480  and anti-technology thing that comes around all of the time and like environmentalism yeah in this
 
      02:37:58.080  to this extent is so funny because if you like if you hate half of the human race you're a bad
 
      02:38:03.280  person for being a misogynist if you hate like a particular race you're a bad person for being a
 
      02:38:07.600  racist but if you hate every human being on the planet then you're an environmentalist and i don't
 
      02:38:12.400  understand this anti-human-being tendency that you get from the extreme environmentalists. You mean

      02:38:17.920  a certain kind of environmentalist? Yeah, of course, yeah, generalizing for rhetorical purposes, but
 
      02:38:24.240  it's a strain of environmentalism which basically views human beings as a pestilence a plague um and
 
      02:38:31.200  wants to eradicate it and i think it's a crucial insight i think that what's driving both of these
 
      02:38:37.360  camps is the same thing um i think that same thing is like utopian thinking so i think the long-term
 
      02:38:44.240  is convinced that the long-term future can go exceedingly well uh which is what we need to do
 
      02:38:50.160  now is like you know it just ignore like a little bit of the suffering that's going on in the world
 
      02:38:54.000  make sure AGI happens once we have AGI we can like simulate consciousness and make sure everyone's
 
      02:39:00.000  happy for billions and billions of years that'll be great and and we'll enter this um like long
 
      02:39:04.480  reflection period, which is cited a lot actually in EA and taken pretty seriously; we'll enter this
 
      02:39:08.640  long reflection where we'll have developed all the technology necessary to meet our needs make us
 
      02:39:13.040  happy problems will have stopped and we'll just think for like thousands of years about exactly
 
      02:39:19.520  how we want the world to look because and so so we'll enter this non-problematic period and then
 
      02:39:23.520  i think the environmentalist or the radical environmentalist thinks along similar lines like
 
      02:39:27.040  okay okay you know we we seem to just we're right now we're just there's continual problems right we
 
      02:39:32.000  keep developing technology faster and faster we keep running into problems we barely avert
 
      02:39:36.400  disaster every year if we could just scale things back and just all be happy with like you know a
 
      02:39:43.040  certain finite number of resources realize that we can be happy like that stop having kids um or
 
      02:39:48.080  stop having so many kids and just enter the state of equilibrium all will be well like we can all
 
      02:39:52.400  sort of live off the land um etc and what these at at core what both of these philosophies miss is
 
      02:39:58.080  that problems are inevitable right so problems are just a product of the fact that we can't foresee
 
      02:40:03.920  all the consequences of our actions um even if we run expected value calculations
 
      02:40:08.720  and so and because we can't foresee the consequences but we're always trying to make things better
 
      02:40:13.920  we're trying to solve problems that means there will always be problems and if we enter the state
 
      02:40:19.200  where there were no problems to solve um that would be really bad it would imply there's nowhere
 
      02:40:25.280  left to go it would be like a state of death basically um problems just arise because of
 
      02:40:30.000  conflicting ideas right conflicting ideas about like how to live better how to make things slightly
 
      02:40:34.320  better moral ideas all these are these are just problems to be solved and and having problems is
 
      02:40:40.400  a good thing right it means we're making progress we're moving forward i i just want to highlight
 
      02:40:44.720  one tiny little thing which is that i think in your beautiful um comments you actually gave a proof
 
      02:40:51.520  by contradiction that this is impossible because if we had a uh got to a place where there were no
 
      02:40:57.440  more problems that would be bad that would be itself a problem and so it's a logical contradiction
 
      02:41:02.320  that it's right impossibility right um and and so just completely wholeheartedly agree with everything
 
      02:41:07.440  you said and i think that there actually contains in that more than just a strong argument but actually
 
      02:41:11.840  a refutation of this idea i feel tempted to pour cold water on some of this because it feels possible
 
      02:41:18.560  to get ahead of ourselves and start worrying about something which no one believes maybe i'm the utopian
 
      02:41:23.520  part of the world um if it wasn't the middle link here um
 
      
      02:41:33.920  yeah it sounds like you are worried the the kind of transhumanist utopian camp of bostrom and his
 
      02:41:44.880  acolytes uh or just like stronger long-termists if they got their way they would recommend
 
      02:41:53.520  only focusing on those actions which we can be confident pay out over the very long run
 
      02:42:00.160  and shutting down this kind of short-term problem solving um and technological progress
 
      02:42:08.320  and that's just kind of obviously not the case like one of the things these people care about
 
      02:42:13.200  the most is technological progress uh in particular and there's a point to be made here which is i
 
      02:42:20.720  think long-termism or at least the kind of most interesting form of long-termism is a marginal claim
 
      02:42:27.200  it's pointing out that this thing is currently way neglected uh at least seems to be and so there is
 
      02:42:36.960  an awful lot of potential given that it's currently neglected now if they got their way and like you
 
      02:42:44.160  know more people cared about it then it would become less neglected and um it's less important
 
      02:42:51.920  as a consequence um but it would never reach a point where it's like significantly eating in
 
      02:42:58.720  to short-term problem solving and if it does then i think i'm like absolutely on team Ben and
 
      02:43:04.400  Vaden, where we should just be, like, very worried, because this is just missing an absolutely
 
      02:43:09.360  crucial fact about pretty much all human progress up to this point which is that it's just been like
 
      02:43:14.400  solving a problem not anticipating the next problem that comes along and then like playing
 
      02:43:18.400  whack-a-mole with all the subsequent problems can we all just look at the 80,000 hours problem
 
      02:43:24.000  profiles website together because this is already happening i don't even know how you could
 
      02:43:28.560  possibly say that it's not happening. So here's a question, right: why is, uh,

      02:43:33.680  working on climate change... it feels important because climate change itself is an enormously important
 
      02:43:38.960  problem but there are other things that matter other than the absolute importance of an issue
 
      02:43:43.840  you might also care that the issue is neglected, that is, not many people care about it and not

      02:43:47.920  many resources are currently spent on it, and you would also care about how solvable the issue is,
 
      02:43:52.160  like how tractable it is and one of the reasons maybe that 80,000 hours currently thinks that these
 
      02:43:59.600  kind of weird long-termist-caused areas are so important isn't because the problems themselves
 
      02:44:07.840  are absolutely more important than the kind of like pressing problems that the world faces right
 
      02:44:14.160  now maybe it's also because um just as a matter of fact it looks like very few people are working on
 
      02:44:22.000  on them relative to how important they are and also they kind of seem solvable and that's why
 
      02:44:27.680  it's a marginal claim because you can imagine a point in the future where the kind of talent
 
      02:44:32.800  gap gets filled and the kind of more obvious problems get solved, and now it's on a more

      02:44:39.760  level footing with other things. That's the sympathetic case, and then, like I'll say for
 
      02:44:43.840  the 15th time like i'm not claiming that's what i think like i basically didn't know what to think
 
      02:44:48.800  right now um but you can imagine that's the pushback um here's my analogy uh and forgive the cheekiness
 
      02:44:54.960  but i think this gets the point across so in that there's a bunch of scientists thinking about
 
      02:44:58.720  evolution thinking about how old the world is you know trying to work out the hard problems of
 
      02:45:02.640  geology and plate tectonics and wrestling with how the climate affects the earth of all's over time
 
      02:45:09.680  etc etc and then there's like some religious fanatics who are like uh well no one's really
 
      02:45:15.760  focusing on the problem of like what if the world is just 6,000 years old and it was created by an
 
      02:45:20.720  almighty god you know this is a that's a marginal claim right it's very neglected um it could have
 
      02:45:26.160  huge impacts if it was correct um and so we should have a portion of like the scientific community
 
      02:45:31.440  thinking about this question right it doesn't it's it's not about like the content of the theory
 
      02:45:35.920  it's just like what if it was correct we should have we should we should expand our portfolio to
 
      02:45:40.000  like cover this case um and obviously in that case it's the you know the reasons for believing
 
      02:45:46.080  that are bad and i think just think it's similar to the long-termist case right like um we we
 
      02:45:51.040  shouldn't be favoring uh AGI over climate change because they're in my opinion the reasons for
 
      02:45:57.120  focusing on something like AGI are just bad reasons so i'm just criticizing those regions i'm not
 
      02:46:01.120  criticizing that like you know however much percent of right anyone's altruistic portfolio is dedicated
 
      02:46:09.760  to the problem of long-termism i i don't care i'm just i'm just whatever percentage they're dedicated
 
      
      02:46:17.520  Fin is not, like, impressed. I just have a weird resting expression.
 
      02:46:23.920  okay i mean i'll just agree like where the reasons are bad then don't do the thing that the
 
      02:46:31.040  reasons point towards um i always just try to clear up a like potential misunderstanding between
 
      02:46:37.280  what's just best at the margin because something's neglected and right eventually issues are more
 
      02:46:43.200  important than other issues and you know what you mix those things up then things are going wrong
 
      02:46:48.080  but i think we're on the same page and it just comes down to like a more straightforward object
 
      02:46:52.880  level potential disagreement about which like reasons you're talking as if we don't have just a
 
      02:46:58.640  really great list in front of us that shows exactly how people prioritize all the problems on the
 
      02:47:04.080  80 000 hours website and says please listeners go to 80 000 hours dot org slash problem dash profiles
 
      02:47:10.880  and you will see first that climate change is already slipping down the list so on the point
 
      02:47:16.000  of climate change just give it time it'll be off there soon um but above five doctrine yeah above
 
      02:47:22.480  things like mental health and um biomedical research are such solvable problems as aging and
 
      02:47:29.520  global governance and outer space, and how to do whole-brain emulation, which means literally

      02:47:37.040  downloading your brain into a computer, like Black Mirror, and s-risks. These are interesting things
 
      02:47:42.240  to talk about and discuss but my god these are not even in the same camp as dealing with suffering
 
      02:47:47.600  and poverty in developing countries right now um and the slip is already happening it's like
 
      02:47:53.200  it's not hypothetical we don't have to think like what is might gonna happen in the future just look
 
      02:47:58.000  at the website and see what i'm referring to what we're referring to um i just want to highlight
 
      02:48:01.920  that because it's not a hypothetical point i am worried or have like an intuition that there is a
 
      02:48:08.880  problem in not addressing these short-term things more, because you're not going to be in a position
 
      02:48:13.920  to address these long-term causes before you've solved a lot of the stuff so we kind of talked
 
      02:48:18.000  about the example of climate change right where i'll be like very upfront as well is like personally
 
      02:48:22.160  something i care i think much more than the average EA so my judgment might be clouded there
 
      02:48:27.120  but i don't see a world where even if we avoid the like really extreme risks of climate change
 
      02:48:32.160  right and we still have a future that we're going to be like in any capacity to solve things like
 
      02:48:37.520  AI or existential risks or what have you and i think that is a point actually i think you are
 
      02:48:44.160  getting at right where you kind of have to solve problems almost as they kind of come along
 
      02:48:47.680  and a lot of the other work feels much more speculative so i agree with you completely there
 
      02:48:53.040  i would just say that the way i kind of reach to that conclusion is still from some way of like
 
      02:48:58.800  long-termism where it's all the future i care about but i agree that's probably more semantics
 
      02:49:02.960  than like a fundamental disagreement did you care about climate change before long-termism
 
      02:49:07.520  yeah i mean that's probably my bias right um but it also indicates that long-termism didn't
 
      02:49:12.160  get you there right no okay okay no that's fair but i would say that like i think i feel
 
      02:49:18.240  within, like, my EA journey, in quotes, right, I was much more focused on RCTs and the randomista

      02:49:26.000  stuff, and long-termism for me was a way to look at solutions which I had previously

      02:49:31.840  discarded, or, like, cause priorities that I hadn't thought could be justified within the

      02:49:36.800  EA framework, and that is where I see a lot of value. That might be a very niche
 
      02:49:42.160  thing that just happens to speak to me and i hope like a few other people but i understand that
 
      02:49:46.640  is kind of like from a very different place where where you guys are kind of coming out of from
 
      02:49:50.560  I'll just jump on this bandwagon and just agree there with Luca. Yeah, so I'm also really
 
      02:50:00.640  worried about ways that long-termism could play out especially in the scenarios where it becomes
 
      02:50:09.120  parasitic on these other just like obviously good cause areas and a lot of attention gets funneled
 
      02:50:18.000  to these like massively speculative ideas which turn out to be badly misguided and then the entire
 
      02:50:25.040  movement ends up stalling um but what do those kind of objections have in common i think they're all
 
      02:50:32.560  like really practical worries so like earlier we're talking about all the kind of slightly more
 
      02:50:40.800  philosophical airy methodological problems and actually i don't think you need to worry about that
 
      02:50:47.840  in order to worry about long-termism already some of its uh potential recommendations i think all
 
      02:50:55.680  the problems can get going in a much more down-to-earth sense which is just thinking about what works
 
      02:51:02.400  in the real world um and so that's where i'm like most most on board um i still think that there are
 
      02:51:11.040  there are like things which we can do which like plausibly uh stands to improve the very
 
      02:51:19.600  long-run future which long-termism draws out in a really unique way in just the same way that
 
      02:51:25.600  effective altruism originally drew out the importance of certain kinds of effective charities even
 
      02:51:31.280  though those charities existed before um so i think there's like a space for some kind of
 
      02:51:36.720  you know synthesis or agreement um but these practical problems the way in which you resolve
 
      02:51:46.720  those problems seems most straightforward it doesn't involve like doing lots of complicated
 
      02:51:51.120  philosophy so those are kind of it gets called the like epistemic objections to long-termism which is
 
      02:51:57.600  look i buy into these ethical claims about the far future mattering just as much as
 
      02:52:02.960  near future we don't want a pure discount rate future people matter as much as present people
 
      02:52:08.880  but you also need to find the things we can do now again which stands to reliably improve how
 
      02:52:16.880  long and future goes and maybe it's just the case that there are no such things or if there are
 
      02:52:22.720  we can't find them because we're not like omniscient and in that case as like super clever and robust
 
      02:52:35.520  as the kind of ethical, philosophical part of the argument is, there's nothing in the actual

      02:52:41.440  world to fill it out, so there's nothing we can do about it. That seems like the most plausible
 
      02:52:50.400  criticism of long-termism in general few things to say there though and i could imagine this
 
      02:52:56.320  speaking the kind of pushback from the long-termist one is well if we don't know now then let's not
 
      02:53:02.000  just give up let's fund and throw our time and effort into doing research right so we take a step back
 
      02:53:12.880  and the new problem is like finding the object level problems to go out and solve which like
 
      02:53:19.680  makes some sense and the other thing is that i'm pretty confident that there like are at least
 
      02:53:26.000  some things that we can do which stand to reliably influence a long-term future and probably those
 
      02:53:33.680  things have to do with existential risks especially in cases where we're falling so short i think i've
 
      02:53:41.120  mentioned it like 15 times already but biological weapons for instance and nuclear weapons and
 
      02:53:47.600  risk from AI those things actually seem quite concrete and solvable now so there are some
 
      02:53:51.920  examples and the question is how many of those examples there are i agree with everything except
 
      02:53:56.480  the the risk from AI i mean the other ones are like problems that we you know we know exist and
 
      02:54:01.920  of course we should. You'll be the first to be enslaved. Yeah, I already am enslaved, it's too late;

      02:54:06.560  I put in a good word, I'm building the overlords as we speak. Because, you know,

      02:54:13.040  Vaden's an expert in AI, so his credence, right, we have to take that into account, right?

      02:54:16.960  His credence is a hundred percent that AI... It would be so much easier if I was
 
      02:54:22.080  allowed to just use that move like you're talking to an expert but obviously that's
 
      02:54:25.680  bullshit like i'm just a person who has biases like anybody else but from an outsider perspective
 
      02:54:32.080  you could describe me as a quote expert which i just cringe every time i say that word but
 
      02:54:36.160  i'm in a top research lab studying artificial intelligence with people who are the ones
 
      02:54:41.200  that people think about when they talk about AI and like i'm not just a human being right
 
      02:54:46.960  and and i would never dream of talking in this way that you should believe my beliefs over
 
      02:54:55.200  somebody else's beliefs, even if I was asked via survey by Toby Ord and Nick Bostrom. I would still
 
      02:55:02.880  like not think this is a valid way to to argue and it seems like the whole ai risk and ai alignment
 
      02:55:10.640  conversation is entirely propped up from arguments of basically that form that a lot of experts are
 
      02:55:18.240  worried and therefore we should be worried too and and in general i think thinking about ai safety
 
      02:55:23.600  is is okay it's it's one of many important things to think about um it's just not something
 
      02:55:29.280  which should be funneling money away from poor people. Is it, uh, Anders Sandberg, or what's

      02:55:36.560  the guy's name? He gave a talk about Popper. Yes, exactly, so that's the one I was trying to

      02:55:40.400  remember; he did a beautiful analysis of The Poverty of Historicism, it was excellent,

      02:55:45.840  and he addresses the importance of having a feedback mechanism in the talk. Like, this is the

      02:55:52.160  only way we can actually get traction when it comes to evaluating our theories. And it's super

      02:55:57.600  interesting when you really digest a lot of Popper's thinking, because you can see how

      02:56:04.400  there are domains well outside of academia that are thriving because they have this feedback
 
      02:56:08.560  mechanism that you wouldn't typically think of. So, for example, stand-up comedy is a really

      02:56:12.720  interesting example, because you can try it; Boston Jones was a stand-up comic, and he said that his

      02:56:17.920  favorite part of doing it, really, was the feedback, because you immediately know whether you've made

      02:56:22.400  a bad joke or a good joke. Yeah, and so there is like a development there. Same with jiu-jitsu, so BJJ,

      02:56:30.160  like Brazilian jiu-jitsu; a friend of mine teaches it, and just the way that he was talking about

      02:56:35.440  his teaching strategies was completely Popperian. He just, he didn't know about Popper and stuff, but

      02:56:40.800  it was like: I want my students to have certain realizations, and they'll try things and they'll

      02:56:44.880  then be falsified by being dropped onto the mat and stuff. I mean, Popper didn't invent trial and

      02:56:50.320  error, it's not that he invented trial and error, it's just that he recognized its deep

      02:56:55.360  philosophical significance compared to other methods which are proposed. But anyway, like,

      02:57:00.320  the point is Sandberg recognized the importance of this feedback mechanism, and we don't have this, we

      02:57:06.160  can't have this, dealing with the long-term future. Okay, and this is why I think all research
 
      02:57:15.440  along these lines is ultimately going to come up with nothing because all we really have to get
 
      02:57:22.400  traction on are arguments and stuff. But, let me say, mostly, like,

      02:57:30.400  until we can start getting traction through a feedback mechanism, we're just writing science fiction,
 
      02:57:35.200  and science fiction is interesting for idea generation and stuff and you can generate good
 
      02:57:39.120  ideas but you only can evaluate which ideas are good and poor by some feedback mechanism and
 
      02:57:44.880  unless we have that injected into the system and importantly even if some are right you won't know
 
      02:57:50.320  exactly right that's the issue it's not that you're coming up with like necessarily wrong ideas it's
 
      02:57:54.240  that you can't differentiate the right ones from the wrong ones or the better ones from the worst
 
      02:57:57.520  ones and so that's the issue it's not like obviously we can't say everything coming out of the community
 
      02:58:01.440  is wrong it's just that we have no fucking idea which one we can't falsify any of it like and also
 
      02:58:07.120  like, it isn't just some low-risk thing where you're rolling the dice; it, again, is

      02:58:13.600  pitting the well-being of people alive today against the well-being of an infinite number of

      02:58:18.800  people in the future. So every dice roll is a way to forget about a current problem

      02:58:24.640  through one more bullet on the 80,000 Hours problem profiles. Yeah, yeah, it's like, here's another

      02:58:31.360  dice roll, here's one more reason why we can't have both s-risks and climate change, so let's just let
 
      02:58:37.200  climate change go down one more notch let's roll the dice again and now we have john hammed and
 
      02:58:41.520  roll the dice again and every single time it just goes down further and further and further and
 
      02:58:46.560  yeah i think that's really interesting i think like it's it's getting me to think about this
 
      02:58:53.200  important distinction which admittedly i haven't really made i think before this this conversation
 
      02:58:57.360  which is that there's two very different like conclusions from long-termism oh i guess like like
 
      02:59:03.040  two different ways you can use it and i think i'm finding myself like more and more on one side of
 
      02:59:07.760  it so i guess like on the one hand we talked about this like long-termism taking these bets and
 
      02:59:13.200  mostly focused around these s-risks, which can often seem sci-fi. I, like Fin, think

      02:59:18.080  x-risks are something that is worth considering, and I think we're paying too little
 
      02:59:22.880  attention to but i definitely see the danger from that i think the other point where i might be like
 
      02:59:27.040  falling more or like where i'd be more comfortable getting behind is this thing of using like almost
 
      02:59:33.680  in a political economy sense like these future generations just to like be able to make decisions
 
      02:59:40.000  where i just think we are like very short term at the moment and making that moral argument might
 
      02:59:44.480  be enough to like sway a decision and i don't think i'm gonna be interested to hear if like any of
 
      02:59:48.640  you, like, disagree with that as a statement. I'd, I'd interrogate the statement, or at least,

      02:59:54.800  perhaps... sorry, what did you... I think I just understood what the actual second part was. Yeah, so let
 
      03:00:02.080  me rephrase so i think like um an example to maybe give is that like let's say we take something like
 
      03:00:09.200  climate change right and we run a cost-benefit analysis and we find out that like we should be
 
      03:00:14.800  limiting it to two and a half degrees but you know the extra effort it's going to take to like two
 
      03:00:20.400  or one and a half degrees is not worth the costs then you could like make the like argument that
 
      03:00:26.880  okay but like it's not just us right who are going to live in as well it's all these future
 
      03:00:30.560  generations and because it's such a huge amount right you then have the moral argument to say
 
      03:00:36.080  okay we're going to go down and keep a below one and a half degrees and i definitely understand
 
      03:00:40.720  the concern that i think you rightfully raised is that this can be a really slippery slope
 
      03:00:44.240  where then all of a sudden you're doing these really horrible things um in the name for this
 
      03:00:48.800  like utopian future but that is still where i see a lot of value in long-termism just where you look
 
      03:00:54.560  at like human age or you look at society as a whole we are like incredibly impatient and that
 
      03:00:58.720  causes a lot of problems and thinking about this and i think reflecting on this even if it's not in
 
      03:01:04.080  this like expected value kind of sense i think is really really worthwhile and i do think has
 
      03:01:09.760  important conclusions for like effective altruism when you think about what the most effective things
 
      03:01:14.160  out to do in the future and i think that just means being a bit more paranoid about like nuclear war
 
      03:01:18.880  or biological risks and the like and climate change as well right for me but i think like that's
 
      03:01:24.880  where I see the value, the practical value, of long-termism: in being able to make that

      03:01:30.640  sort of moral case. I guess one interesting thing to mention here is just our success,
 
      03:01:40.720  like our record of success in dealing with previous other things that would have been
 
      03:01:47.280  described as existential risks for example ozone layer acid rain overpopulation lack of fertilizer
 
      03:01:54.960  that caused like wars in the end of the 1800s and stuff and human beings are incredibly good
 
      03:02:04.800  solving problems which at the particular time in history seemed insurmountable right seemed
 
      03:02:11.520  like there was absolutely no way we were going to overcome this and and before climate change
 
      03:02:18.160  was called global warming and before that there's global cooling developing a vaccine in a year
 
      03:02:23.120  getting colder yeah yeah developing a vaccine in a year and so human beings are like really great
 
      03:02:31.040  at solving seemingly insurmountable existential risks and and i just want to emphasize that because
 
      03:02:42.800  it seems like a background assumption is that like if it wasn't for long-termism we would
 
      03:02:48.000  not care about the long-term future and we wouldn't be able to proceed when really i think
 
      03:02:52.560  much of history can be viewed from the lens of preserving the long-term like what do you think
 
      03:02:58.400  the fight against fascism was all about in world war two if not like ensuring the long-term future
 
      03:03:05.840  of of the species this is a long-termist cause it's not described that way point taken i think
 
      03:03:13.040  the thing where and i might be wrong here but like the thing where i'm kind of i think getting
 
      03:03:17.360  at is that it's not like an either or thing you're you're right that we may be avoided like the
 
      03:03:21.360  worst consequences right of these things and you know in all likelihood we'll survive the worst
 
      03:03:26.160  consequences of climate change as well but there is still right like a meaningful difference as if
 
      03:03:30.480  you just barely make it or if you do it with like room to spare and millions of lives saved right
 
      03:03:35.440  that i think that's like where i'm getting at where like that thing might seem marginal and like
 
      03:03:39.440  maybe long-termism only really makes a marginal difference but that can still mean saving like a
 
      03:03:44.400  million lives for something that just is worthwhile right and like i think that's kind of where it
 
      03:03:49.200  gets up i agree like the disagreement fundamentally there is small and i'm also like super optimistic
 
      03:03:54.400  right about like value and like value creation and knowledge and all the like and i definitely
 
      03:04:00.160  think we have that in us. And this might be out of place, but one of the nice analogies that

      03:04:08.480  Will MacAskill came out with is to picture our trajectory as a cruise ship setting off from,

      03:04:16.000  like, the UK to New York, for instance, and he said: look, imagine you jump out and you start

      03:04:23.280  swimming, and you start pushing against one side to change its course; you know, in the next half hour

      03:04:29.760  or hour, no one will notice the difference in trajectory it's making, but if you keep swimming,

      03:04:39.680  if you keep it up, by the time it arrives at the States it'll arrive in, like, Florida or something rather than

      03:04:46.000  New York. And maybe that's the point Luca is making: it's not so much either-or, it's just that
 
      03:04:53.520  when we're talking about trajectory changes rather than these kind of step changes like
 
      03:04:57.600  existential risk it's nice to think about those metaphors as kind of you know making these things
 
      03:05:05.600  more salient since i'm rambling um i want to say that i feel like we're kind of rounding things up
 
      03:05:12.240  at least i have i read out things to say about two hours ago um but i'll just like say kind of where
 
      03:05:19.600  i am so i think i'm kind of like normatively and morally uncertain just like probably you
 
      03:05:27.200  should be in most people are but i feels like the most important and plausible objections to long
 
      03:05:34.320  termism are the practical and epistemic ones that you and others raise in other words can we actually
 
      03:05:43.040  find anything that we can do right now to make the long run future go better at least beyond existential
 
      03:05:49.920  risk and then once we've found those things um can we put them in practice without trampling on the
 
      03:05:59.360  feet of these other cause areas which also matter and can we implement these kind of long-termist
 
      03:06:07.120  trajectory changes without some kind of dangerous like ideological slide where we are justifying
 
      03:06:15.680  like all manner of of present sacrifices and harms in the name of and like in a way which just things
 
      03:06:24.960  turn out really badly and where that ends me up is like Luca i think you know very sympathetic to
 
      03:06:31.840  what kind of watered down long-termism and then like very interested in a stronger version of
 
      03:06:37.040  long-termism, you know, kind of curious to learn more, and I do think I actually said this
 
      03:06:44.960  at the start but yeah that's kind of where i am yeah so just to i'll just give my little closing
 
      03:06:50.960  thoughts which is everything you said basically sounds like a win from my perspective because i
 
      03:06:55.600  my goal is not to get everyone to abandon using the phrase long-termism it's just to recognize that
 
      03:07:01.520  the pursuit of this one idea has the potential if not has already started um hurting a lot of
 
      03:07:10.800  people by just uh reallocating all of the focus and attention away from pressing current problems
 
      03:07:16.720  right now. And it's done via this one move, which I'm going to continue to repeat until the cows

      03:07:24.000  come home, which is multiplying small numbers and big numbers, associating that with some

      03:07:27.840  crazy sci-fi scenario, and then using this as some sort of morally significant argument. And if

      03:07:36.080  these components can be taken away from long-termism, then I'm completely in favor of long-termism.
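(A minimal sketch, in Python, of the "multiplying small numbers and big numbers" move being criticised here: a tiny, hard-to-check probability attached to an astronomical number of future lives swamps a certain near-term benefit in an expected-value comparison. Every number below is an illustrative assumption, not a figure from the conversation.)

    # Illustrative sketch only: tiny probability times astronomical payoff.
    certain_lives_saved_now = 1_000        # e.g. a concrete near-term intervention
    assumed_future_lives = 10 ** 16        # assumed astronomical future population
    assumed_risk_reduction = 10 ** -10     # assumed, essentially unarguable probability

    expected_future_lives = assumed_future_lives * assumed_risk_reduction
    print(f"certain near-term benefit:     {certain_lives_saved_now:>12,} lives")
    print(f"speculative long-term benefit: {expected_future_lives:>12,.0f} expected lives")

    # 10**16 * 10**-10 = 10**6, so the speculative option "wins" by a factor
    # of a thousand however shaky the probability estimate is; that is the
    # rhetorical move the speaker wants excised.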
 
      03:07:41.680  It's just those things that are quite worrying. And lastly, I think that the subject of ethics

      03:07:48.480  and morality is one that every human being who has interacted with another human being, and has had to

      03:07:54.560  figure out how to treat people kindly, has a voice in, and no one person is

      03:08:01.680  more or less capable of talking about this stuff than any other person. So yeah, that's my comment.
 
      03:08:07.440  One thing I wanted to add is that there's a great EA Forum post by Gregory Lewis

      03:08:16.000  called 'Beware Surprising and Suspicious Convergence', which I think just relates a bit

      03:08:21.200  to what we talked about. Basically, the thing is that generally we might expect two things to be

      03:08:26.000  correlated, but not really at the very top. So, as an example to give to listeners, we would
 
      03:08:31.360  expect that somebody who's very good at tennis um would also be good at basketball because they're
 
      03:08:35.280  both generally fit and you imagine that there is some correlation but right at the very top you no
 
      03:08:39.920  longer suspect that correlation to hold and it might even be negative right and i think the way
 
      03:08:44.800  that i kind of see that relating to our debate about long-termism and some of the questions about
 
      03:08:49.760  what it actually means in practice is that i think in general acting in the favor of the short term
 
      03:08:54.880  is also good for the long-term which is why i don't think personally i'm too concerned about these
 
      03:08:59.120  more, like, authoritarian slips and stuff. I think generally, I mean, it's to be proven, it's just my intuition,

      03:09:04.560  but generally I think that will be the case. But when you're really optimizing, I do think long-termism

      03:09:09.120  promises to reach some counterintuitive things that we might not have thought about before, or

      03:09:13.920  maybe correct some failures that I see in the market or in government at the moment, and that's

      03:09:19.040  where I see the value in long-termism. But I think this discussion has really helped me

      03:09:22.960  hone in maybe on those aspects, more than the, yeah, the expected-value things, which I definitely
 
      03:09:28.480  see yeah i don't think i mean i'm certainly not arguing against using future generations as an
 
      03:09:34.560  argument in the political sphere so if you're arguing for example about like climate change and
 
      03:09:40.160  what the carbon tax should be i'm all in favor of saying um we you know we should think we should
 
      03:09:45.040  take future generations into account right we should think of like the possible devastation
 
      03:09:49.920  we're enacting on the world and use that to just like argue about uh the trade-offs between various
 
      03:09:55.280  policies and how um how intense you want to be about shutting down certain industries for example
 
      03:10:02.320  um but what i'm what i'm not in favor of is like pretending that we have knowledge about
 
      03:10:09.200  um future events and how certain actions now we're going to influence the future um and even trying
 
      03:10:15.440  to get at that information via math um and trying to conjure information sort of out of thin air
 
      03:10:21.200  even though no one would claim we're trying to do that but i think by just doing the math and
 
      03:10:24.480  writing these numbers down it conveys a certain sense of uh of certainty that we just like don't
 
      03:10:30.000  have when it comes to that. I can see Fin wants to rebut my point. No, no, no, no, I was just thinking
 
      03:10:38.160  of a way to sum up your worries and you know one thing maybe worth saying is that it seems to me that
 
      03:10:46.560  the parts of long term is in which you're worried about actually have little to do
 
      03:10:53.280  at least intrinsically with either the future or the fact that that future is long
 
      03:11:00.160  um it's to do with all the paraphernalia they get associated with it and the ways the kind of
 
      03:11:07.040  the frameworks for reasoning which yep go along for the right and the kind of formal methods which
 
      03:11:16.560  people buy into to match and so on so there's nothing wrong with caring about the long-run future
 
      03:11:21.040  of course you just care about kind of ways of or reasoning about it and how those reasons are
 
      03:11:26.720  used to swamp every other consideration sure sure um yeah the fun yeah the final point i'll make
 
      03:11:31.920  which is very unfair of me to bring up four hours and 25 minutes into this conversation which i
 
      03:11:37.760  realized we didn't touch on which is a bit of a shame but um either we can talk again or i can
 
      03:11:41.520  just have the final word on it because i'm right and have authority um is a common rebuttal here to
 
      03:11:47.120  like um to people arguing against long-termism is like look every action we take has long-run
 
      03:11:53.120  consequences and so you have to care about the long-term future no matter what right like even
 
      03:11:58.320  by distributing bed nets in africa those are going to have super long-run consequences so either you
 
      03:12:03.440  have to start doing expected value calculations and think really hard about what the long-run
 
      03:12:08.400  future entails um because any action you take has significant influence on it and so by not
 
      03:12:13.920  dealing with that question you're just like um a priori assuming that your favorite short-term
 
      03:12:20.720  action has beneficial long-term consequences um but what this misses is the option of
 
      03:12:26.800  refusing to answer questions that we can have no way of answering so the instead of trying to
 
      03:12:34.720  right now optimize exactly how we want the future to go um it's to recognize that every action that
 
      03:12:41.280  we take is going to have consequences they're going to lead to future problems and the best
 
      03:12:45.600  thing we can do is set up a societal infrastructure and have as many people ready to tackle those
 
      03:12:50.640  problems as they inevitably arise so right now we have reason to believe that like distributing
 
      03:12:55.520  bed nets um is an incredibly good thing can save people's lives um and then of course this is
 
      03:13:00.800  going to have certain consequences that we can't foresee right this might this is going to speed up
 
      03:13:04.800  economic development of certain countries which perhaps leads to more greenhouse gases or more
 
      03:13:09.840  factory farming or something that we can't imagine right there's going to be ideas and political
 
      03:13:13.920  institutions that arise out of these countries but instead of trying to guess at what those are
 
      03:13:17.520  right now we can just recognize that we will tackle those problems as they arise and um you know we
 
      03:13:23.600  could just call this problem cluelessness and then freak out about how we can't predict the
 
      03:13:29.040  long-run future or we can just of course recognize that yes predicting the long-term future is
 
      03:13:33.360  impossible um and the way we generate knowledge is by solving current problems um recognizing those
 
      03:13:39.360  problems will yield inevitably to future problems and then solving those and they arise and this is
 
      03:13:43.760  good this is how we make progress and generate knowledge um and so the best thing we can do is
 
      03:13:47.600  like have institutions and, um, mentalities, in terms of, like, error correction, that are ready
 
      03:13:53.120  to solve those problems as they arise so there is i just wanted to highlight that there is a third
 
      03:13:57.520  option and uh here um and it's not to just ignore the future or try and optimize it right away it's
 
      03:14:03.040  just to recognize there will be problems um and uh we'll solve those as they come. Anyone object
 
      03:14:09.200  to letting Ben have the last word? I think the war of attrition, four and a half hours, has

      03:14:14.400  worked. I think I'll just throw in the towel. Fine, whatever: problems, inevitability, error correction,

      03:14:22.320  I don't care, I don't care anymore. We are actually now living in the very long run.