00:00:00.000  Like if anything, the bestiality and vegetarian example makes me more pro-bestiality than I ever was.
 
      00:00:06.000  But my parents, who rarely listen to the podcast, went on a road trip to Vancouver
 
      00:00:22.000  and decided to listen to our most recent episode on AI. Apparently, our roles have reversed because
 
      00:00:27.920  now I'm swearing every second word. And they were so impressed that you weren't swearing at all.
 
      00:00:34.080  Like the only feedback my mom had for me was, yeah, pretty, you know, pretty nice episode.
 
      00:00:41.200  It's amazing that Vaden doesn't swear. He's so good about not swearing and yet staying so passionate.
 
      00:00:45.920  And then it kind of fades to silence and I say, okay, what about me? And she says, oh,
 
      00:00:51.360  it's terrible. You're swearing all the time. It's so funny. There is an old clip of us talking
 
      00:00:55.600  about long-termism, so it's kind of apropos that somebody recirculated it on Twitter. And I was
 
      00:01:00.240  just swearing like a sailor. And now I'm like a bit more self-conscious about how much I curse.
 
      00:01:05.600  And so I'm trying to limit it a little bit, but it's hilarious that your mom picked up on it.
 
      00:01:09.840  And it wasn't all the brilliant things that you and I said that was important, of course.
 
      
      
      00:01:18.880  But should we fill up our drink and get into this topic for today? I don't know.
 
      00:01:22.640  How do you feel? Anything you want to say last minute before we dive into it?
 
      00:01:26.560  No, I guess not. Let's just fucking rip off the band-aid, get it over with.
 
      00:01:29.920  Okay, let's go. Okay, let me go fill up my drink. And...
 
      
      00:01:33.520  Cool. Well, let me set up this conversation and then you're going to do an opening burst,
 
      00:01:41.840  because I don't know how to even summarize my thoughts. But so today we are talking about
 
      00:01:47.200  William MacAskill's newest book, What We Owe the Future. This is my good friend Benjamin Chugg

      00:01:53.200  from Carnegie Mellon University. I'm Vaden Masrani from the University of British Columbia.
 
      00:01:57.840  And we, in episodes past, have talked a lot about long-termism, kind of stepped away from the
 
      00:02:02.160  conversation for a year or so. But MacAskill has come out with his latest book, which we've both read

      00:02:06.880  and have a lot of thoughts about. And I think today we're just going to go back and forth a

      00:02:10.480  little bit, talk about what we liked, what we didn't like. It's kind of somewhere in

      00:02:13.600  between being a tough book to summarize and being almost trivially obvious to summarize.
 
      00:02:18.640  So I'm struggling to come up with a coherent opening monologue. So Ben volunteered to do so.
 
      00:02:24.880  And I'm going to react to what you think. But that's my setup.
 
      00:02:29.440  It's generous to say I have a lot of thoughts. I'm not sure I do. I feel like every day I wake up and
 
      00:02:34.240  I think about it more. Yeah, I go back and forth as to what's actually going on and how profound
 
      00:02:40.320  I think it is. So I think maybe a helpful backdrop is considering Peter Singer's work on global
 
      00:02:50.960  poverty in the 1970s. And in particular, he wrote this influential philosophical article,

      00:02:57.760  "Famine, Affluence, and Morality," sometime in the 1970s. And I think the goal there was to catalyze
 
      00:03:06.240  a moral revolution, so to speak. So he sort of made these arguments that there's these people
 
      00:03:12.880  living halfway around the world. And they're suffering a lot and we're spending money on
 
      00:03:19.840  pretty inconsequential things. And with that money, there's good evidence that we can improve
 
      00:03:25.280  their quality of life drastically and even save their lives in some instances. And basically
 
      00:03:29.680  arguing that we should do that. Okay, so that's a thesis. You can agree or disagree. But if you

      00:03:35.840  adopt that thesis, the consequences are very actionable. So MacAskill, I think,
 
      00:03:41.040  acknowledges that he's also trying to catalyze a moral revolution. But this time, thinking about
 
      00:03:45.120  people in the future. He's got these three phrases somewhere at the beginning of the book that basically

      00:03:50.320  sum up the thesis. I think it's: future people count, there could be a

      00:03:56.640  lot of them, and we can make their lives go better. I just noticed your screen name on here is Vaden

      00:04:00.400  the Merciful. It's funny to change the name. So he's trying to tell us that, like, you know,
 
      00:04:13.840  Singer was saying we should be concerned about people halfway around the world; MacAskill is saying we should

      00:04:16.880  be concerned about people thousands, hundreds of thousands, even millions of years in the future.

      00:04:22.320  And he wants this to, like, revolutionize our moral landscape and how we go about thinking about
 
      00:04:26.640  the world. But after that, the analogy kind of breaks down because while it's more clear what
 
      00:04:32.080  we can do to help people halfway around the world, obviously you run into the problem that, just,

      00:04:35.760  like, predicting the future is quite hard. And so it's much less obvious what we can do to help people
 
      00:04:41.440  in the far future. And my basic read of the book is that he's trying to argue that there are things
 
      00:04:49.920  we can do to positively impact this future. But I don't think he can escape the black hole of
 
      00:04:58.560  uncertainty that surrounds trying to help people so far in the future. And because of that,
 
      00:05:05.920  ends up focusing on problems that are quite present and pressing right now, right? So he ends up
 
      00:05:12.320  focusing on pandemics, whether engineered or natural, ends up focusing on climate change,
 
      00:05:19.040  ends up focusing on trying to promote economic growth and avoid nuclear war, all things that are
 
      00:05:25.840  very pressing problems that I'm also concerned about, and that I love that people are talking about. But

      00:05:30.800  after he finishes, it's very unclear what the 'long' in long-termism is supposed to be doing for us, right?
 
      00:05:39.840  These are all problems that you can focus on without adopting his thesis that we should care
 
      00:05:44.880  about people a billion years in the future. So sort of the delta between our current concerns
 
      00:05:50.160  and what the book advocates we think about is pretty small. But because he's trying to be so
 
      00:05:55.840  expansive in his claims, he ends up not really zooming in much on any of the particular problems
 
      00:06:01.040  and just sort of says at a high level, we should think about these problems more carefully and
 
      00:06:07.280  perhaps give more money to them, which perhaps is fine. But if you're trying to catalyze a moral

      00:06:11.920  revolution, I kind of expected some more actionable things. Like, if everyone were going

      00:06:16.880  to wake up as a long-termist tomorrow, what would change? And I'm not sure that much would

      00:06:20.560  change, because now we're just in the position of all arguing about what we're supposed to do to
 
      00:06:25.520  make the future better. And I think that's what largely we're trying to do anyway, all the time,
 
      00:06:29.920  right now. I think we all have opinions about the direction that we want to see the world go.
 
      00:06:35.200  And that's in part why people disagree about shit all the time.
 
      00:06:40.080  And so these questions are very hard. Like what we do about climate change, what we do about
 
      00:06:43.760  nuclear war, these are intense questions touching on philosophy and geopolitics and technology and
 
      00:06:50.880  economics. And it's fine to write a book saying these are hard problems and we should think about
 
      00:06:56.800  them. But it's sort of inherently unactionable because he can't give actual solutions to all
 
      00:07:03.360  these problems in one book, right? Because they're all like very deep problems. And so I'm just
 
      00:07:08.080  kind of left not sure what I'm supposed to take away from it.
 
      00:07:12.800  I love that. I completely agree. And just to give a flavor of what you're talking about with the

      00:07:18.800  actionability, or lack thereof, of this book: at the end, he gives three concrete

      00:07:24.560  recommendations for what we should do. And they are, first, take actions that we can be comparatively

      00:07:30.880  confident are good; two, try to increase the number of options open to us; and three, try to learn more.
 
      00:07:37.760  And I get to this point in the book, I'm thinking, well, what is long-termism actually doing to
 
      00:07:41.600  arrive at these recommendations? So if listeners go and check out our episodes from the archive
 
      00:07:49.040  about long-termism, you'll hear us be quite opinionated that it's a bad idea. And we are
 
      00:07:55.760  critiquing a paper that he wrote with Hilary Greaves called The Case for Strong Long-Termism.
 
      00:08:00.240  And it seems like he's just taken out all of the stuff that we were criticizing. And we're left
 
      00:08:05.440  with a book that I just feel like I don't really know what the point of the book is. He's trying
 
      00:08:10.880  to make this case for caring about the future. But when it comes to anything that's actionable,
 
      00:08:15.120  you basically don't need it. Most of his recommendations for what we should do are things like
 
      00:08:20.000  find energy sources that are less polluting, try to prevent bioterrorism, try to prevent
 
      00:08:26.240  nuclear weapons. And all this is, from my perspective, just a trivially, obviously good thing to do.
 
      00:08:31.920  And then the stuff that goes a little crazy is the AI stuff. And he's significantly downplayed
 
      00:08:36.400  that in this book, which I can't tell if that's to his credit or not. There's a blog post
 
      00:08:40.560  written about this book by an effective altruist author who is much more long-termist than
 
      00:08:47.840  MacAskill seems to be. And he said — and I agreed with this critique a lot — he said that
 
      00:08:53.840  he feels that people are going to get a bit of, they're going to experience a bit of a bait and
 
      00:08:57.520  switch where they read the book. And they think, oh, this is all kind of reasonable and
 
      00:09:01.840  maybe obviously true. And then they go have an 80,000 Hours phone call or they go to an

      00:09:07.600  effective altruism club or they join the forum and they see that the majority of what people
 
      00:09:11.120  are actually talking about is AI safety and AGI super intelligence taking over the world.
 
      00:09:15.200  They're going to be kind of taken aback because the AI stuff, which is a lot of what you and I
 
      00:09:20.720  were critiquing in previous conversations, has basically been reduced to half a chapter.
 
      00:09:24.560  And it's only one of 10 things that he recommends. But then there's long-termism in the wild,

      00:09:31.440  which is a very different beast, which was kind of created by MacAskill and Greaves and Ord and

      00:09:36.720  people like that. But this book doesn't really represent that, it seems to me. It represents this very

      00:09:41.600  watered-down version of it that is hard to pin down. It's just an unsatisfying read from my perspective.
 
      00:09:48.160  There's stuff I liked, and we'll talk about what some of the good parts were. But it just seems to
 
      00:09:52.000  be so watered down that what's actually presented to the public is this weird mix between trying to
 
      00:09:57.840  make rather extreme claims about the billions of people in the future. But then when it
 
      00:10:02.640  bottoms out to what he actually recommends, it's like try to learn more. Well, okay.
 
      00:10:06.720  Obviously I want to learn more. So I just found it this weird superposition of being extreme,
 
      00:10:12.960  but then banal at the same time and I couldn't totally grab onto it.
 
      00:10:17.040  Does any of that land with you or resonate? Totally. So the last chapter is about
 
      00:10:23.440  basically summarizing some success stories of people who he thinks are doing good work.
 
      00:10:29.360  But from what I can tell, most of them are not actually, or at least were not,

      00:10:33.600  originally motivated by the EA community. Maybe they found some solidarity with the mission in their

      00:10:38.320  work. But for example, he cites Isabelle Boemeke doing nuclear power activism, which is fantastic.
 
      00:10:46.160  I love what she's doing. I don't think she did it for long-termist reasons. I think she did

      00:10:50.480  it because she saw a huge political problem wherein people were pushing only renewables at the

      00:10:57.920  expense of nuclear. And this was causing huge problems with energy, and it was a bad idea. So she started
 
      00:11:03.280  talking about it in public. She didn't need to be motivated by long-termism to do that. Because
 
      00:11:07.600  I almost agree with everything, maybe modulo some of the AI stuff. But even there, when they talk

      00:11:13.520  about AI stuff, it's different from where the actual money is going.

      00:11:17.120  Typically, it's just going to run-of-the-mill AI research, which is great. But anyway, so the
 
      00:11:21.920  last chapter is I agree with basically all of these causes that he's talking about. But then I'm
 
      00:11:28.160  like, "Well, it's kind of unfair to credit this to long-termism. I think they would have been doing
 
      00:11:33.520  this anyway." And so maybe that's unfair. Maybe he would say, "Well, what I'm trying to argue is
 
      00:11:41.600  not that there's a whole new set of problems we should focus on. It's instead a reshuffling of
 
      00:11:46.400  our priorities." So it's not that I'm saying no one up to now has been focused on nuclear war or
 
      00:11:53.440  engineered pandemics or something. But I just want people to take that problem much more seriously.
 
      00:11:58.320  Which perhaps is fair. Again, we don't need long-termism to do that. There was like,
 
      00:12:04.480  X-Risk was in the culture well before the phrase long-termism went into the culture.
 
      00:12:08.880  After the pandemic, people are going to obviously be putting much more money and time into pandemic

      00:12:14.080  awareness — no, not awareness — pandemic preparation, preparedness. And similarly with nuclear weapons
 
      00:12:20.640  and stuff. And so it's like, do I need to make all these tortured philosophical arguments about the
 
      00:12:28.160  importance of people a billion years in the future? Or can I just make down-to-earth concrete
 
      00:12:35.280  arguments about the importance of nuclear energy, of cleaning up fossil fuels, of preventing bioterrorism?

      00:12:43.760  Like, for these, we don't need to go this route. We only need to go this route for the AI stuff,
 
      00:12:48.640  which he's downplayed so significantly that I feel weird critiquing him for that in this book.
 
      00:12:53.920  Because I don't want to critique him for stuff he hasn't written. But it's also, I don't think it's
 
      00:12:58.640  an accurate representation of what long-termism is as practiced by people who call themselves
 
      00:13:04.480  long-termist in the effective altruist community. And again, I will just cite this person who's way
 
      00:13:09.040  more long-termist than MacAskill is, critiquing MacAskill basically on this point: that he didn't

      00:13:14.160  adequately or accurately represent how long-termism is being practiced in the community.
 
      00:13:17.840  So that is a weird place to be, where it's like, well, do I want to critique long-termism as practiced

      00:13:21.600  by the community, where I basically think it's this techno-utopian ideology? Or do I want to

      00:13:26.400  critique long-termism as he's presented it here? Which is — it's almost like reading a defense of

      00:13:33.920  Islam by a very, very, very moderate Muslim, where you don't want to critique it just because it

      00:13:40.800  could lead to fanaticism. But you know that there's a fanatical strain of it in existence as well.
 
      00:13:45.120  And so it's just this weird place where — and, like, I honestly think that — so there's the

      00:13:54.720  first version of The Case for Strong Long-Termism, and then there's the second version, where they

      00:13:58.320  revised the paper. And they took out everything that we quoted as problematic. So in some sense,
 
      00:14:03.200  I'm reading this book thinking that a lot of the stuff that you and I said and wrote
 
      00:14:07.120  actually influenced what is in the book and what's not in the book.
 
      00:14:09.760  Because like, I can't say that for sure, but the evidence for that is that like
 
      00:14:15.040  Shivani and every quote that we raised was stripped from the new version of The Case for Strong

      00:14:20.720  Long-Termism. And none of that's in this book. And so is this even representing what long-termism
 
      00:14:25.680  is as practiced by the people who call themselves long-termist? I don't think it necessarily is.
 
      00:14:30.960  Yeah, yeah, yeah. I felt that way, especially about, so at the beginning of the book,
 
      00:14:35.680  like I kind of expected him to do, he introduced like the tool of expected value reasoning,
 
      00:14:39.200  right? And he said this is how we're going to reason about things. But then he like,
 
      00:14:42.240  he barely uses it. So he does a bit. So he'll talk about like the probability of certain things
 
      00:14:46.880  happening and, you know, the usual conflation happens between his just-a-guess of what the

      00:14:52.960  probability is and some true probability. But — because if you were to actually take

      00:15:00.480  expected value seriously, you'd actually be able to rank-order all these problems, right?

      00:15:05.680  Because the whole point of expected value is that you're supposed to be able to tame uncertainty

      00:15:11.360  using the numbers, right? That's the whole idea. And so you'd be able to bundle

      00:15:17.200  all the uncertainty about which problem to focus on, how much more money to put into that

      00:15:21.680  versus some other cause that we're thinking about — bundle all of that into the calculation and

      00:15:25.600  get very explicit rankings of how much more time and energy to put into

      00:15:31.280  problem A versus problem B. He doesn't do that, for good reason, because that would come across as

      00:15:35.360  insane to an ordinary reader, right? They'd be like, well, this is crazy. But he does
 
      00:15:39.840  cite people who do, and he seemed almost sheepish about assigning probabilities to stuff, like he

      00:15:44.000  buried some probabilities in footnotes and things. But then he'd approvingly cite Toby Ord, who does

      00:15:48.560  this. Yeah. Yeah. It's a weird relationship to it. So keep going. Yeah. But so it's there,

      00:15:53.280  but in the background. So that's what I mean — sort of like there's a partial,

      00:15:57.440  only a partial commitment. And so it's hard to criticize, because I do agree with most of

      00:16:02.320  these, right? Like, I also am worried about technological stagnation. I

      00:16:07.440  also don't want to fall into some totalitarian dictatorship — that also seems bad, right? I also

      00:16:11.760  want, you know, moral change to not be locked in forevermore. The part of

      00:16:18.320  long-termism that we took issue with earlier was that they were giving very precise numbers,
 
      00:16:23.760  like you should donate to AI over malaria bed nets because it's better by a factor of 1500 to one
 
      00:16:30.640  or something. And then we were pointing out, okay, well, you're not comparing like with like,

      00:16:34.640  there are some problems here with your calculations. But because this book doesn't make any

      00:16:38.400  of those calculations — yeah, you're just kind of left with this: there are some problems.
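
(Just to make concrete the kind of explicit ranking being described here — this is purely an illustrative sketch with invented probabilities and payoffs, not numbers from MacAskill or Greaves:)

```python
# Toy sketch of the explicit expected-value ranking described above.
# Every number here is invented for illustration.
causes = {
    # cause: (probability the money actually helps, payoff in arbitrary "good units" if it does)
    "malaria bed nets":      (0.95, 1_000),
    "pandemic preparedness": (0.10, 50_000),
    "AI safety research":    (0.001, 10_000_000),
}

# Expected value = probability * payoff; sort causes from highest to lowest.
ranked = sorted(
    ((name, p * v) for name, (p, v) in causes.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, expected_value in ranked:
    print(f"{name}: expected value = {expected_value:,.0f}")
```

(With made-up numbers like these, the low-probability, huge-payoff cause lands on top — exactly the style of "1,500 to one" conclusion being criticized here, and why everything hinges on where the guessed probabilities come from.)
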
 
      00:16:42.560  People should think harder about these problems — which, again, I'm like, okay, maybe that's

      00:16:46.960  good. Like, well, I do want people to be more concerned about nuclear war or something,

      00:16:51.920  right? Like, I think on average, if more people were concerned about, or just aware of, the fact that

      00:16:55.760  there is still a huge nuclear arsenal in the world, that would be a good thing. And so I don't want

      00:17:00.320  to say that it's bad or that he shouldn't have written the book or something. It's just the

      00:17:04.400  content — the logical consequences of what comes out of the book. If you take the

      00:17:09.280  book and ask, what do I do based on this book? There's very, very little logical content

      00:17:14.960  in this book. Yeah, I totally agree. And to the expected value thing, something that I noticed
 
      00:17:20.560  him doing in this book and in podcast conversations is so it's not assigning explicit probability
 
      00:17:28.000  to stuff, although he does do that and he hides some of it in the appendix. And he's kind of like,
 
      00:17:31.360  almost embarrassed to do it, I feel. But he will propose scenarios, which are in direct
 
      00:17:40.080  contradiction with one another and talk about both of them as being plausible. And this is allowed
 
      00:17:46.960  under the expected value reasoning. So for example, he's very worried about AGI super intelligence
 
      00:17:53.520  taking over the world. And he's also worried about technological stagnation. So he's asking us to
 
      00:18:00.240  imagine one future scenario where technology just increases its capabilities and the speed
 
      00:18:05.600  of development of technology grows and grows and grows and grows. And then there's another
 
      00:18:08.800  scenario where technology slows down, slows down, slows down, stops. This is kind of like writing
 
      00:18:13.680  a book where you're worried about overpopulation and you're worried about underpopulation,
 
      00:18:18.000  where you're worried about the climate heating up and the climate cooling down. And it's not a
 
      00:18:23.440  contradiction because under the expected value calculus, these are all equally plausible scenarios
 
      00:18:28.880  and you just assign probabilities to them. But he's not really on the line. He's not advocating
 
      00:18:34.720  a position which then can be falsified by another book. He just can talk about everything happening
 
      00:18:41.280  all at once. So I wrote down a number of these explicit contradictions that I just thought were,
 
      00:18:47.600  I just called it having it both waysism. So he'll talk about how we can make a big
 
      00:18:52.080  difference to future generations. And then later on, he talks about how the future is
 
      00:18:55.680  impossible to predict and anything that we do, the consequences of these are like ripples in a
 
      00:19:00.640  pond. And he'll talk about technological progress speeding up and technological progress slowing
 
      00:19:05.680  down. And he'll talk about the resiliency of human beings and how we're able to constantly

      00:19:12.240  overcome problems and hurdles and difficulties. And then he'll talk about how we're at a significant

      00:19:16.640  risk of being taken over by a totalitarian government or a super AGI. And so this is this
 
      00:19:23.200  like weird consequence of this expected value reasoning where you don't have to commit yourself
 
      00:19:28.720  to any one position. You can kind of just have them all at the same time. And it's weird because
 
      00:19:33.760  you could just like swap out the arguments and you could use all the arguments for technological
 
      00:19:38.400  stagnation as being reasons why we shouldn't worry about AGI super intelligence. Similarly,
 
      00:19:44.720  you can take all the arguments for AGI super intelligence and use them as reasons why we
 
      00:19:48.000  shouldn't be worried about technological stagnation. And so it's hard to grab onto because he's kind
 
      00:19:53.360  of just he does it all at once. And then if you argue against any one particular thing, he'll just
 
      00:19:58.800  say, well, yeah, there's a lot of uncertainty. Let's reduce our probability for that. And so you
 
      00:20:02.880  can never really pin him to anything. And then in podcast conversations and stuff, Tyler Cowen

      00:20:09.280  asked him about the long-termist implications for abortion — shouldn't you, if you care about

      00:20:14.160  future generations, shouldn't you be pro-life? And similarly, he tried to
 
      00:20:19.200  pin him down on stuff like expected value reasoning. Would you want to have a, if there's a button
 
      00:20:25.040  that you could press that would have like a 51% chance of like giving everyone infinite life and
 
      00:20:31.600  49% chance of killing everyone? Or even just doubling the population or something, right?
 
      00:20:36.480  Yeah. Yeah. And then he'll just say, well, yeah, so expected value reasoning doesn't get you
 
      00:20:40.960  all the way and we have to have a plurality of views. And then he'll just basically abandon it
 
      00:20:44.560  whenever there's a tough question thrown at him. And it's like, what is this even doing for you
 
      00:20:50.480  besides being a rhetorical device? Like, I just see it now as kind of a rhetorical device that's
 
      00:20:54.560  being used, because he's not committing himself to actually using any of the calculations. He just
 
      00:21:00.480  talks about it. But then, when challenged, he abandons it at the first sign of a challenge, I guess.
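
(To spell out the arithmetic behind that button thought experiment — using the doubling version and our own toy numbers, not anything from Cowen or MacAskill:)

```python
# The doubling version of the button gamble, under naive expected value.
# Population figure and probabilities are just for illustration.
current_population = 8_000_000_000

p_win, p_lose = 0.51, 0.49  # 51% the population doubles, 49% everyone dies

expected_population = p_win * (2 * current_population) + p_lose * 0
print(expected_population)  # 8.16 billion > 8 billion, so naive EV says press the button

# And the same reasoning says press it again, and again -- even though the chance
# of surviving 100 presses is roughly 6e-30.
print(0.51 ** 100)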
 
      00:21:06.240  Yeah, interesting. Yeah. Okay. Let me try and defend just a subset of that, which is,
 
      00:21:11.760  if you write a book, I could imagine someone writing a book saying, underpopulation would be bad,
 
      00:21:17.360  because it would slow down progress, people are awesome, etc. So we don't want everyone to stop
 
      00:21:23.760  having kids all of a sudden, because that would be bad. But similarly, we don't want the population
 
      00:21:29.280  to explode within a generation or something. Right. So we should simultaneously be worried,
 
      00:21:34.240  but I'm not saying this is my position. I'm just saying you can imagine someone writing a book
 
      00:21:37.040  saying we should be, we don't want underpopulation and we don't want overpopulation.
 
      00:21:42.000  I don't view that as like having it both ways. I just view that as like pointing out two
 
      00:21:46.160  possible problems. And then you can criticize, you know — like, I don't know if that's a consequence

      00:21:51.520  of the expected value calculus or just — I think he's just trying to point out a lot of
 
      00:21:55.520  possible problems. And then we can debate about the feasibility of those scenarios.
 
      00:22:00.400  I don't think the analogy is someone writing a book saying underpopulation is bad, overpopulation
 
      00:22:04.160  is bad. I think the analogy would be someone writing a book saying underpopulation is likely,
 
      00:22:08.240  and overpopulation is also likely. And it's not a contradiction, because under this framework,
 
      00:22:16.560  basically, you can just speculate wildly about stuff that might happen. But it does, I think,
 
      00:22:22.320  strain credulity a little bit to talk about two possible futures, which are diametrically opposed
 
      00:22:29.760  to one another. And if you're a reader of this book, and let's say I think that
 
      00:22:37.360  technological stagnation is really likely, and you think that an AGI superintelligence is really
 
      00:22:41.040  likely, we're going to donate money to competing organizations, one of which is trying to accelerate
 
      00:22:46.480  technological growth and the other is trying to hinder it. And what's frustrating is it's not
 
      00:22:51.280  a contradiction. I recognize it's not a contradiction. It's just the way that this kind of expected
 
      00:22:56.160  value reasoning allows you to think. And you can just speculate wildly. I want to talk about
 
      00:23:03.120  some of the stuff I liked about the book, because there was some stuff that was genuinely fascinating,
 
      00:23:06.880  but maybe we'll do that in a bit. But there's certain claims that he made that I just thought
 
      00:23:13.040  were flat-out absurd. So one of them is, I'll suggest that there's a significant chance of a third
 
      00:23:18.240  world war in our lifetime. I don't know how anyone can possibly know that. It's just speculation.
 
      00:23:24.640  And to say that there's a significant chance — I think he also says one in three at some point.

      00:23:32.640  Right? So, I didn't see that. Maybe I did miss it, but I didn't see it. And he'll say things like,
 
      00:23:36.960  if we could create genetically engineered scientists to have greater research abilities like Einstein,
 
      00:23:44.000  this would compensate for fewer people overall. And it's like, what are you talking about?
 
      00:23:48.640  How could you possibly genetically engineer better researchers? But you're just allowed to
 
      00:23:54.160  speculate about the future like this. And you're not constrained to anything besides certain chunks
 
      00:23:59.360  of it being plausible when phrased one way. But then you read diametrically opposed arguments
 
      00:24:03.680  like a couple chapters later, and you kind of just forget that he's not contradicting himself.
 
      00:24:08.160  He's just exploring every possible future that could happen all at the same time in a way that
 
      00:24:13.840  you can never pin him down on anything. And it's all probabilistic. And I found it very frustrating.
 
      00:24:19.360  And it leaves all the work to the reader, right? So it's like he's saying, well, this could happen,
 
      00:24:23.120  and this could happen, this could happen. And then the question is, okay, what should we do?
 
      00:24:28.800  But how do we start arguing? How do we debate about which one to do? And just by pointing out
 
      00:24:36.400  all the scenarios, I don't know, maybe there's a role for that. But then when it bottoms out,
 
      00:24:41.920  it bottoms out to stuff that people were doing anyways. It's like, exactly.
 
      00:24:46.480  So what's the extra value of doing this? Sorry, go ahead. I'm just kidding.
 
      00:24:50.960  Yeah, I mean, I'm not sure. I mean, yeah. So it was so, but as an example, like, so yeah,
 
      00:24:56.000  he went on the Tim Ferriss podcast. I mean, dude, they must have pulled out all the stops on
 
      00:25:00.560  their connections for his podcast tour. Because he was on every podcast in my podcast feed. But I was just like,

      00:25:06.640  this is why, like — with the exception of Sean Carroll and Tyler Cowen,

      00:25:11.840  everybody else was just fawning over this book. Like, even Sam Harris — and I have a

      00:25:16.880  huge amount of respect for Sam Harris — but I thought he was given such an easy pass on

      00:25:21.360  this whole podcast circuit. And everyone's just like, oh, he's a moral philosopher. He must be

      00:25:25.840  a saint. Sorry. Again, I'm kind of just bitching and moaning about things.
 
      00:25:29.280  Well, because, yeah, I mean, yeah, I, but this is like, sometimes I wake up and I'm like,
 
      00:25:36.400  like today I just, you know, like woke up and like walking to the gym. And I'm like looking up
 
      00:25:40.640  at the stars. And I like, I do sometimes like emotionally, I can like resonate with what he's
 
      00:25:48.240  saying. Like, I think humans are fucking awesome. I want us to like go to the stars. If we are the
 
      00:25:54.240  only, like, conscious creatures in the universe, the observable universe or whatever, and we did go

      00:26:00.960  extinct, I do think that would be a huge tragedy. So it's not like I'm not concerned about, like,
 
      00:26:08.000  future people, the future of the species. Like I want all the good things to happen to us.
 
      00:26:12.480  In so far as the book makes people think more about that and forget like some of the more
 
      00:26:17.600  parochial problems and, you know, like the classic thing of astronauts going to the space station
 
      00:26:22.640  and being like, oh, it's just a single blue dot, everything's so precious, and stuff. Those kinds of
 
      00:26:27.280  insights are are good. And I do maybe want more people to register that. I think that's my most
 
      00:26:35.040  robust defense: insofar as he's saying that humans are pretty good,

      00:26:41.520  and we should want our future to be awesome, and also that we should take our actions and our

      00:26:46.800  ideas seriously, right? Like, one through-line of the book is that our ideas do matter. And,

      00:26:52.000  you know, if we fuck up, we can really fuck it up big, and nothing's on autopilot, right? Things
 
      00:26:56.960  have gone bad in history before. If we don't stay serious about the power of ideas, then like
 
      00:27:02.320  civilization can collapse and stuff. And all those messages are great because I mean, one,
 
      00:27:07.040  there's one contingent of people in the US, especially who like think civilization is terrible. And
 
      00:27:14.160  or just think like no one has the power to do anything and people are super apathetic and stuff. And so
 
      00:27:19.200  insofar as he's countering messages like that — like, fuck yeah, you know. And so

      00:27:24.000  sometimes I can kind of get behind him, like, yeah, make people think more about this. And
 
      00:27:28.000  you know, like he talks about the value of having kids, which I thought was pretty cool,
 
      00:27:31.840  which no one really wants to talk about anymore. And I was very impressed by that, right, especially

      00:27:35.440  because that's so contra current environmentalist messaging,

      00:27:40.400  right? So the fact that he took a stand — and he said that on multiple podcasts too, so, you know,

      00:27:43.840  he kind of took a stand, like, yeah, having kids is a great thing to do. Totally. And so
 
      00:27:47.200  certain of the messages like that, I'm like, hell yeah, but but then yeah, I mean, once you get
 
      00:27:51.280  into very particular problems, I just thought it was quite shallow how he dealt with it.
 
      00:27:55.200  And like one example, you know, he was talking about the green movement in Germany. And how
 
      00:28:01.200  they sort of catalyzed a bunch of money to go into renewables. And he said, you know, this
 
      00:28:07.600  made renewables a much more respectable thing made other countries take it more seriously,
 
      00:28:11.760  etc. But what he neglected was that that push for renewables also led Germany to shut down tons of
 
      00:28:17.040  their nuclear plants, which has now landed them in massive problems when it comes to the Russia
 
      00:28:22.480  Ukraine conflict, right? And just environmentally in general. And so but this is a perfect example
 
      00:28:27.200  of how he kind of skates over the complexities when he's dealing with these concrete problems,
 
      00:28:30.960  right? Like imagine being someone who like has spent your whole life thinking about pandemics or
 
      00:28:36.240  nuclear war or something. He's really not adding anything new to the table. He's

      00:28:41.200  just saying this is a problem. And you might be kind of annoyed if you're an expert in this area,

      00:28:45.120  like, well, what the fuck, I fucking know it's a problem. Like, you know, I'm down here trying to
 
      00:28:48.800  do all the geopolitical heavy lifting to like figure out how to do this stuff. And you're kind of
 
      00:28:54.320  just giving me vacuous statements about how it's important. Anyway, yeah, that kind of sums up my

      00:28:58.160  conflicting feelings. Yeah, well, since we're getting to some of the good parts of
 
      00:29:02.960  the book, I should kind of give a bunch of examples of things that I liked. And I totally agree with
 
      00:29:07.360  you that it was refreshingly like pro human pro progress. One thing that I thought was nice about
 
      00:29:14.960  the book is that — and you kind of alluded to this, but didn't say it explicitly — there

      00:29:18.640  are, like, zero traces of wokeism in it. Yeah, which I just thought was nice. Yeah, just quite
 
      00:29:24.560  refreshing. And so he's kind of presenting a different lens, but just in terms of being
 
      00:29:30.640  tainted by seeing oppression everywhere and stuff, there's zero traces of that, which I thought was
 
      00:29:34.160  great. And there's a lot of climate optimism too, which I thought was interesting. Yeah,
 
      00:29:38.400  yeah, which was nice. Like, he talked about climate change being a problem,

      00:29:42.800  but he also acknowledged that we're making real progress on climate change.

      00:29:48.320  I think he says somewhere that recent years have given us more cause for hope than any

      00:29:53.200  other point in my lifetime. And he pushed back strongly on the climate movement's

      00:29:56.960  encouragement for people to not have children. I thought that was great. It

      00:30:03.200  is really a Pinker-style optimism, I guess. Yeah, totally. And at the end of the book,
 
      00:30:09.440  so just kind of going through a bit of a list of things that I thought was great about it.
 
      00:30:12.640  At the end, I thought he gave some really great career advice. And his career advice was basically
 
      00:30:17.760  to treat your career like an experiment. And so he says, in practical terms, you might follow
 
      00:30:24.480  these steps, research your options, make your best guess about the long term path for you,
 
      00:30:29.200  try it for a couple of years, update your best guess and repeat. And this is just like perfectly
 
      00:30:33.920  Popperian advice about how to make a good career. And I thought that was great. And there's just a
 
      00:30:40.080  couple of random points that he made that I just wanted to highlight because I didn't hear these
 
      00:30:43.520  said on podcasts before, but he made the very excellent point that I thought was just such
 
      00:30:48.960  obviously a good idea that I can't believe I hadn't heard of it or thought of it, which is that
 
      00:30:53.040  not a single country allowed the vaccine to be bought on the free market prior to testing.
 
      00:30:57.840  By those who understood the risks, even on the condition that they report,
 
      00:31:00.640  whether they were subsequently infected, just this idea that there could have been and should
 
      00:31:05.040  have been a free market for COVID vaccines amongst volunteers — I thought that was a really important insight,

      00:31:11.680  and one that I hadn't heard before. And so just props to MacAskill for saying that.
 
      00:31:16.720  Yeah, regarding the vaccine, he made some fascinating points there. One, he pointed out how homogenous
 
      00:31:23.280  the global, more or less, how homogenous the global response to COVID was. Obviously,
 
      00:31:28.320  there was some regional variation, right? South Korea comes to mind, right? But if you consider
 
      00:31:34.240  the massive space of possible responses to something like COVID and then how little of that space was
 
      00:31:39.360  inhabited by most countries, even when other stuff seems like actually quite good ideas,
 
      00:31:44.000  like challenge trials, etc. That was quite interesting. And then he follows that up by

      00:31:48.480  proposing the idea of charter cities. So did you have a thought there? That was quite interesting.
 
      00:31:55.600  At first, I thought that was super interesting. And then I started thinking,
 
      00:31:58.720  well, what's the difference between charter cities and just different cities?
 
      00:32:01.520  Like, how do you put into it? So like, maybe you should say for the audience,
 
      00:32:05.920  what charter cities are first before I opine on it a little bit.
 
      00:32:08.560  Yeah. So my understanding is they're sort of like independent regions, so they wouldn't be bound
 
      00:32:14.880  to like, they wouldn't be bound to like a specific country, say. And there would be cities that
 
      00:32:19.200  are trying out a specific, perhaps like economic plan. So he kind of hedges and says, it could be
 
      00:32:25.520  like, different moral views or it could be different economic views. But maybe you have a charter city

      00:32:29.520  for, like, democratic socialists who want to try something more along that vein, then you have a
 
      00:32:35.680  charter city for like, libertarians or like anarcho capitalists or something who want to live in a
 
      00:32:40.960  much different way, right? And everything's like voluntary and stuff. But you have these like,
 
      00:32:44.960  cities that are trying out very different experiments in how to live. And so that, as a civilization,

      00:32:50.160  you know, we can kind of see — it's like running little experiments: what's going to work,

      00:32:52.960  what's not going to work. Obviously, there are some practical issues, right? Like, you know,

      00:32:56.480  these systems have to relate to each other economically somehow. We need

      00:33:00.560  global free trade and whatnot to some extent. And so they have to be bound by some sort of
 
      00:33:05.440  social, international norms and stuff. So the details seem quite fuzzy, but
 
      00:33:09.040  I can get behind a general sentiment that like, we should be running more experiments.
 
      00:33:14.240  So at first, I liked it, but then I don't see how you could implement this without
 
      00:33:18.320  being authoritarian or totalitarian. See, like, you could do it in China, I guess. But

      00:33:25.440  you have to give people free movement. Yeah, for sure. And then you have to decide

      00:33:31.360  which economic policy a city is going to implement, and to do that, again,

      00:33:35.840  in a non-totalitarian way, it has to be democratically elected policies. And then you just have people
 
      00:33:41.200  voting for economic policies in different geographical regions, which I think just gets us back to
 
      00:33:46.640  regular cities. Well, like, I don't know how you would — like, how would we do this in Canada or

      00:33:50.880  the United States, for example? Could you just get a bunch of libertarians and have them go off and do

      00:33:55.840  it? Like, would we just take existing cities and replace the people with

      00:34:00.880  a bunch of libertarians and stuff? Like, it just doesn't seem like it's at all feasible to do
 
      00:34:07.040  without violating civil liberties. Maybe I mean, I can kind of see natural experiments arising
 
      00:34:15.120  as long as you let cities be a little more autonomous in their lawmaking. So I'm considering, like,
 
      00:34:20.240  something like San Francisco versus Austin, Texas, right? So San Francisco tends to have people who
 
      00:34:24.960  vote for like much more state intervention, right? They might vote on much more left leaning policies.
 
      00:34:29.840  And that's going to also attract people who think that's a better idea. Okay. Yeah, San Francisco is
 
      00:34:33.600  not having a great time at the moment. And so that could almost be considered one type of experiment.
 
      00:34:38.400  Obviously, San Francisco is still bound by like certain state laws and stuff. So what you might
 
      00:34:42.000  want to say is, like, okay, now San Franciscans can just vote on everything. They don't have to

      00:34:46.320  abide by the state regulations; they just vote on everything. And then if people don't like that —

      00:34:50.480  which it seems a lot of people don't, they're moving out of San Francisco, moving into places like
 
      00:34:54.080  Austin, Texas, and you know, Texas tends to be more on the libertarian side of things. So these are
 
      00:34:59.520  not like mandated charter cities, but they're getting closer to that sort of idea, right? Like
 
      00:35:05.280  San Francisco right now is definitely an experiment in living — one that arguably is not going well.

      00:35:10.160  But there's still this big difference between collecting data on and analyzing natural experiments —

      00:35:15.200  experiments which have just naturally taken place based on people's free choice and movement

      00:35:19.440  and voting patterns — versus implementing this as a strategy and saying we are going to
 
      00:35:25.680  decide that San Francisco is going to be more blue and Austin, Texas is going to be more
 
      00:35:30.560  libertarian or what have you. So yeah, because as charter cities were proposed in the book, it

      00:35:34.640  seemed like they were experiments that would be run by — who, exactly? I just don't know

      00:35:42.000  how you could do this, yeah, without being like China and just forcing people to live in

      00:35:46.240  places. Yeah, yeah. So it was an interesting idea for sure, but then when I thought a bit more

      00:35:50.640  about it, I don't know how you could do it without that. Yeah, good point. Yeah. I just think
 
      00:35:54.640  also there's an issue. I was talking about this with Mira and we're talking about some of the
 
      00:35:58.880  difficulties. She was like, well, okay, but what if you want to have like, what if like a city of
 
      00:36:04.320  people come together, right? So you could think of like certain religious extremists who are like,
 
      00:36:09.360  we want to live in a state where women have no say in how they live, right? And so

      00:36:18.320  are you just going to allow that to happen because it's, quote unquote, an experiment in living?

      00:36:22.560  Like, probably not. So even the charter city idea — it can't be anything

      00:36:28.800  goes, right? It still has to be implemented in some sort of liberal order or something. So
 
      00:36:33.520  you can only push it so far. So here, what did you think about the whole value lock in? Because I
 
      00:36:38.080  might I'm slightly skeptical, but I'm curious what your thoughts are and that maybe we should
 
      00:36:42.720  summarize for the audience what what value lock in is. Sure. I mean, I guess yeah, even more broadly,
 
      00:36:48.320  maybe we should just give a slight outline of the book because we've been talking free for him so
 
      00:36:52.240  far, but we should say like, right? So he's concerned about the long term future, but he breaks the book
 
      00:36:57.760  into several parts and parts two and three talk about two sort of distinct ways that you might go
 
      00:37:04.000  about influencing the future, or ways in which the future could be good or bad. Part two is about
 
      00:37:08.560  trajectory changes. And so this is about values, basically over time. So the two sub parts of that
 
      00:37:16.800  are about moral change and value lock-in — value lock-in being, like, if we got locked into some

      00:37:22.000  totalitarian dictatorship or something. Like, say the Germans had won the Second World War and we

      00:37:28.080  were all living in a thousand-year Reich at the moment — that would be a form of value

      00:37:32.480  lock-in, assuming it couldn't be changed or something. And that's where he gets into some of the AI

      00:37:36.800  stuff, like AI could help with cementing totalitarian values. Yeah, the AGI was, it was

      00:37:42.400  like, the driver of the value lock-in. Although he didn't say that explicitly, the main arguments
 
      00:37:48.240  for value lock-in were about AGI. So that's where the AGI stuff came in. Yeah, yeah. And then the
 
      00:37:54.400  next part is about safeguarding civilization. And this is I would say most of what we've been talking
 
      00:37:58.320  about so far. So like the extinction risk, technological stagnation, civilizational collapse,
 
      00:38:04.000  things that you most commonly associate under this sort of broad umbrella of

      00:38:08.320  existential-risk-type stuff. So yeah, what did I think of the value lock-in? I thought
 
      00:38:13.280  I thought there were parts of great merit. Like, there are some fascinating

      00:38:19.280  historical case studies of how certain ideas emerged over time and were able to win out

      00:38:24.720  against other ideas and became entrenched. And so, like, he gives a whole background on

      00:38:30.560  Confucianism and the Han Dynasty and the Mohists, right? And how they were sort of battling
 
      00:38:37.280  for prestige, and then how Confucianism ended up winning out over time and basically

      00:38:42.800  cemented itself as a certain ideology over the span of, like, 2,000 years. And so parts of it are

      00:38:49.280  just a testament to how powerful ideas are and how much we should care about spreading norms of

      00:38:56.320  open inquiry and debate, and making sure that — I mean, basically, the classic liberal values

      00:39:03.520  of free speech — people need to be allowed to point out what they see as problems and talk

      00:39:08.400  about things, right? Because we don't want to get ourselves into a situation where we don't

      00:39:13.920  tolerate dissenting views, in case we're sliding into some sort of bad moral system or something.
 
      00:39:19.120  I found the value lock in stuff to be a little difficult to conceptualize because I think values
 
      00:39:25.520  tend to get locked in when they are maximally beneficial to the most amount of people. The
 
      00:39:30.720  historical examples all run into the problem that you just didn't have global communication back then.
 
      00:39:36.320  So you didn't have ideas being able to be spread quickly. Also when you go back that far in time,
 
      00:39:44.320  we just weren't as morally or scientifically advanced — it was pre-Enlightenment. And so I think
 
      00:39:50.240  post-enlightenment, I have a difficult time imagining a future where we all get locked in
 
      00:39:54.880  to a set of values which are making everybody miserable but nobody thinks to question them.
 
      00:39:59.120  And this is one of the things he says somewhere which is that he's worried about us all getting
 
      00:40:03.440  locked into a state of affairs where we're all being made absolutely miserable. It's in the future,

      00:40:10.160  so there's obviously communication and stuff, but no one challenges these values.
 
      00:40:15.520  Like once you have a dystopia like that, this is a problem to be solved. And I think the
 
      00:40:22.080  history of the 20th century, and the Cold War in particular, highlights that totalitarian regimes
 
      00:40:27.520  just aren't fundamentally stable because you don't have flourishing and people want to
 
      00:40:33.200  have a better life for themselves and their children and they're naturally going to start
 
      00:40:36.160  challenging certain values like slavery, anti-Semitism, these kind of values which are creating misery.
 
      00:40:42.800  And so I didn't find it very persuasive and also because most of what he argued was like an
 
      00:40:49.280  AGI getting into power and locking in these values. And we don't need to talk too much
 
      00:40:53.520  about the AGI stuff because we talked about that ad nauseam. But he used historical examples and

      00:40:58.640  AGI, and I don't think either of these things works, for different reasons. But the historical

      00:41:02.720  examples in particular — they're pre-Enlightenment, and so the world is just a very different place

      00:41:07.040  post-Enlightenment, post-scientific revolution. So, anyway, I didn't find that too persuasive, but
 
      00:41:12.320  I guess- You're making me realize that I guess there's two different kinds of value lock-in
 
      00:41:17.600  that maybe he distinguished but I can't remember. So one is sort of what he talked about with just
 
      00:41:23.440  ideas winning out over one another and there I agree with you it's hard to imagine a world
 
      00:41:29.440  right now where everyone just starts to think a certain way and it's making us miserable and no
 
      00:41:34.400  one thinks to question it, especially with things like the proliferation of the internet and stuff.
 
      00:41:38.160  But then there's the question of like, okay, but if the values were enforced from top down
 
      00:41:42.480  and he goes like the AGI route but you can even just imagine like a North Korea type situation,
 
      00:41:47.280  right? Like, North Korea takes over the world. There you have certain ideas enforced, and it's not
 
      00:41:51.920  that no one thinks to question them, it's just that no one can question them because you're living
 
      00:41:55.360  under some form of dictatorship. And so to that I say yeah that would be shitty. I think we're all,
 
      00:42:01.840  no one wants to live in a dictatorship. And so that was kind of one of the things where I was like,
 
      00:42:05.360  yeah, I mean that's a problem. If you are, if you can point to something right now and say this is
 
      00:42:12.240  a sign of a dictatorship and I'm getting really worried about it, you know, like, I don't know.
 
      00:42:17.280  Some country is like sliding into some sort of dictatorship then yeah that'd be a problem we
 
      00:42:24.080  should worry about and then we'd start dealing with it as best we can. But without that sort of
 
      00:42:29.840  concrete problem to point at, aside from the actual dictatorships that are surviving right now —

      00:42:36.640  I'm not sure what to do with that information, right? It's kind of again one of
 
      00:42:39.760  those vacuous things where everyone can agree that's a problem that would be shitty. Obviously we
 
      00:42:43.440  should kind of be on guard for that but it's quite unclear what to do with that. It's not really
 
      00:42:49.120  going to change my day to day life, right? Yeah, yeah, yeah. So you were going through the
 
      00:42:52.960  structure of the book and I thought that was super useful so I derailed it slightly by asking
 
      00:42:56.080  okay, but perhaps you want to finish the structure of the book. Right, so part two is trajectory changes,

      00:43:00.640  part three is safeguarding civilization, and then, okay, we should get into this a bit. So part four is
 
      00:43:09.040  an introduction to population ethics. Fucking population ethics, I can't stand population ethics.
 
      00:43:16.400  Population ethics is the study of how to compare different populations of abstract people basically.
 
      00:43:24.720  So these are the classic questions of like, is it good to bring like a life of positive utility
 
      00:43:32.880  into existence? How do I compare a population of, say, 1,000 people
 
      00:43:41.200  with utility 75 against a population of a billion people with utility 45 and stuff, right?
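To make the arithmetic being described here concrete, below is a minimal sketch of the aggregation move in question, in the total-utilitarian style, using the hypothetical numbers from the conversation. This is purely illustrative and not anything from the book; the `total_utility` helper is a made-up name for the example.

```python
# Toy sketch of the aggregation move under discussion: give every person one
# utility number and sum it across the whole population. Illustrative only.

def total_utility(population_size: int, avg_utility: float) -> float:
    """Total utility of a world: one number per person, summed."""
    return population_size * avg_utility

# The hypothetical comparison from the conversation:
small_happy = total_utility(1_000, 75)              # 75,000
huge_mediocre = total_utility(1_000_000_000, 45)    # 45,000,000,000
print(huge_mediocre > small_happy)                  # True

# The same arithmetic generates the repugnant conclusion discussed below: for
# any small, very happy world, a big enough barely-positive world totals higher.
very_happy = total_utility(1_000_000, 100)                  # 100,000,000
barely_positive = total_utility(1_000_000_000_000, 0.01)    # 10,000,000,000
print(barely_positive > very_happy)                         # True
```

The contested premise, as the conversation goes on to say, is the very first step: that a single number per person can be assigned and summed at all.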
 
      00:43:48.000  And Derek Parfit is like the one who created this field. Yeah, he created this field.
 
      00:43:53.040  And would you say it's a field or is it more just like an area? I don't know. I've been meaning to
 
      00:43:59.520  look at a textbook on population ethics and see, one, how well founded it is, and how many people are
 
      00:44:07.920  studying population ethics? Like, I don't know, like because the part that really bugs me here is
 
      00:44:13.120  that in the book and in interviews, MacAskill will say that any view you have on these questions
 
      00:44:19.920  leads you to paradoxes, right? So he'll say like, but then what that means, I think we actually
 
      00:44:25.680  talked about this in like one of the earliest episodes, but what that means in technical terms
 
      00:44:30.000  is if you are willing to assign one number that captures a person's utility and you're willing to
 
      00:44:36.640  aggregate that utility over everyone living, right? So you're allowed, basically, to sum up the utility
 
      00:44:43.120  of a bunch of people living in a world and say that this world has utility A, if you allow sort
 
      00:44:48.640  of like moral mathematics like that, then you run into problems of like basic arithmetic,
 
      00:44:53.280  which is quite clear. Basic arithmetic, leading to conclusions that people find

      00:44:57.840  repugnant. That's the... yeah, exactly. Exactly. You end up preferring

      00:45:03.360  a huge world with tons of people where everyone's just above misery to a much smaller world

      00:45:09.440  full of very, very happy people. In some sense, it's true to say
 
      00:45:14.560  that it is an impossibility result, as he'll call it, but it leaves out the axiom that you have to
 
      00:45:22.000  be willing to like assign a number. Exactly. Like a number that captures the utility of an entire
 
      00:45:28.880  world. And so I'm not sure where to go with this exactly. Like we could actually talk about
 
      00:45:33.840  population ethics, but even in service of the whole book, I was quite unclear about what this
 
      00:45:37.200  chapter was doing. Like to my eye, like, I don't think you really have to argue that the world
 
      00:45:44.080  ending tomorrow is a bad thing. Maybe there's a few people like David Benatar, who would think
 
      00:45:48.560  it's a great thing, but I don't think they are reading this book anyway. And so I was kind of
 
      00:45:52.800  unclear what that chapter was about, except for like maybe signaling some philosophical
 
      00:45:57.360  bona fides or something. Yeah, like, I totally agree. And it's like, for me, when I read about
 
      00:46:03.360  the repugnant conclusion, it's just so obvious to me that we can't reduce the well-being

      00:46:09.280  of a human to a single number and compare them. And so that's the axiom that I'm

      00:46:15.360  saying I want to get rid of. Let's not try to compare my well-being against your well-being

      00:46:20.240  because it can't be done. And I view the repugnant conclusion as just showing that it can't be done.
 
      00:46:26.640  But philosophers view this as being the focus of PhD dissertations and stuff because
 
      00:46:31.760  for some reason they don't want to stop this game of assigning utility points to human beings.
 
      00:46:37.760  Well, I think what they would argue is, you know, they would say,
 
      00:46:41.280  this is just an abstraction of what we have to do in like public policy anyway, for example,
 
      00:46:46.080  right? Like there we have to compare the effect of certain policy decisions on groups of people.
 
      00:46:51.680  One could benefit and like a larger group of people to a lesser extent, one could benefit
 
      00:46:56.160  a smaller number of people to a greater extent. How do we think about those kind of trade-offs
 
      00:47:00.000  and then population ethics just emerges from there. But it can't be divorced from the concrete
 
      00:47:06.080  problem. Exactly. I agree that you have to make these kind of trade-offs. And for certain
 
      00:47:12.080  cases, it's a good heuristic to simplify people to single well-being points. But when you try to
 
      00:47:21.040  apply this to every problem and divorce it from real world situations, then you end up in

      00:47:26.960  absurdities. And so, okay, we can't do that. And also in public policy, you don't have the option
 
      00:47:33.920  of adding a population of 150 billion, barely happy people. But you can't just make up people
 
      00:47:40.480  out of the void like this. When you're talking about real people, you realize sometimes you do
 
      00:47:45.360  have to make simplifying assumptions. And that's only to make progress on real concrete, tangible,
 
      00:47:50.480  specific problems. In terms of what it was doing in the service of the book, I guess it's because
 
      00:47:57.680  we're talking about the well-being of a potentially infinite future population. And so, the philosophical
 
      00:48:05.040  area that deals with this is population ethics. But I honestly, I just read part of it and skipped
 
      00:48:10.240  it because it just seemed to be completely unnecessary for what he was trying to say. So,
 
      00:48:14.880  I couldn't give you a concrete answer for why that chapter was there.
 
      00:48:18.720  I think maybe the distinction between someone like you and someone who finds that chapter very
 
      00:48:25.040  necessary is an interesting focus on consistency. So, I think someone would want to say,
 
      00:48:33.280  I don't want to have beliefs in the public policy realm unless they can be extrapolated to every
 
      00:48:40.480  possible abstract world, lest I realize my beliefs are being inconsistent in some way.
 
      00:48:46.240  And this is also interestingly related to Dutch book arguments, arguments in

      00:48:50.720  favor of Bayesianism. There, it's all about having consistent beliefs at every moment in time.

      00:48:55.120  If you bet 51% on something and then 52% on its opposite, then people are going to
 
      00:49:02.160  be able to make bets against you. Yeah, I don't know. There's some focus on... Obviously,
 
      00:49:07.120  having consistent beliefs is a good thing. But there's something off when you take this to the limit.
 
      00:49:15.280  And yeah, I'm not sure exactly what's going on there.
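The Dutch book point mentioned just above can be made concrete with a small sketch: if you treat 51% as a fair price for a bet on some event and 52% as a fair price for a bet on its opposite, your prices sum to more than 1, so a bookie who sells you both tickets profits no matter what happens. This assumes the standard credence-as-betting-price setup; the `sure_loss` helper and the stake size are made up for the example.

```python
# Toy Dutch book sketch: credences on an event and its complement that sum to
# more than 1 let a bookie sell you both tickets and pocket a sure profit.
# Illustrative only; "fair price" here means price = credence * stake.

def sure_loss(credence_a: float, credence_not_a: float, stake: float = 1.0) -> float:
    """Guaranteed loss from buying a ticket on A and a ticket on not-A.

    Each ticket costs credence * stake and pays out `stake` if its event
    occurs. Exactly one of A / not-A happens, so the payout is always `stake`.
    """
    cost = (credence_a + credence_not_a) * stake
    payout = stake
    return cost - payout

# Betting 51% on A and 52% on not-A: lose 3 cents per dollar, whatever happens.
print(round(sure_loss(0.51, 0.52), 2))  # 0.03
# Coherent credences that sum to 1 leave no guaranteed loss.
print(round(sure_loss(0.50, 0.50), 2))  # 0.0
```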
 
      00:49:18.240  I have some thoughts there. So, on the focus on consistency: I think moral philosophers
 
      00:49:23.360  are trying to do what mathematicians do, which is come up with a perfect framework. They have
 
      00:49:28.800  one framework, call it utilitarianism and they have another framework, call it consequentialism.
 
      00:49:33.040  And they're trying to find a framework which can answer all moral questions no matter how
 
      00:49:39.520  difficult or complicated. And they'll never be able to do this. And so, what happens is they
 
      00:49:44.560  come up with some perfect framework and then everyone finds counter examples to show that
 
      00:49:48.880  this leads to moral scenarios which violate our moral intuitions and everyone throws up their
 
      00:49:54.160  hands. The person who's made the most sense on moral philosophy for me is Noah Smith,

      00:50:00.640  who in one podcast interview said, "I don't have a framework, a moral framework;

      00:50:05.520  what I have are basic moral intuitions that sometimes map onto some frameworks occasionally." But
 
      00:50:12.000  the main thing that I use are my intuitions and then I use these frameworks to kind of help me
 
      00:50:16.560  think through corner cases a little bit. But I think that is consistent because it's not
 
      00:50:21.280  attempting to come up with a perfect framework. This is recognizing that so much of our moral
 
      00:50:26.480  decision making is based on these evolutionary pre-programmed intuitions we have. And that's
 
      00:50:32.080  kind of what I want to say too. I have certain intuitions about the morality of slavery and I
 
      00:50:37.200  have other intuitions about the morality of eating meat. And for whatever reason, I don't wake up
 
      00:50:43.360  every morning thinking too much about the suffering of chickens and cows. Other people do and power
 
      00:50:49.280  to them. But I'm riding on my moral intuitions here. And so, I think that that's not an inconsistent
 
      00:50:56.240  worldview. It's more just an acknowledgement that this is what most people are doing most of the
 
      00:50:59.680  time. And frameworks can sometimes help to think through the corner cases occasionally. But
 
      00:51:06.960  we shouldn't seek to find a perfect framework. And this is foundationalism, to use a Deutschian

      00:51:12.800  term. But seeking this perfect thing that can answer all future questions is a futile
 
      00:51:18.720  quest. And instead, we should try to reason through our moral intuitions and use these frameworks
 
      00:51:23.120  occasionally when they're helpful. But in all instances, we have to just go case by case.
 
      00:51:27.360  In some sense, I don't know. Did any of that land?
 
      00:51:31.040  Yeah, I think so. I mean, I'm trying to think what someone more sympathetic to population ethics
 
      00:51:36.160  would say. And I think they would say, like, well, formalizing things lets you spot
 
      00:51:40.080  errors in your reasoning or gaps in your intuition or like conflicts in your intuition, right? Like
 
      00:51:46.160  you can't simultaneously think, I don't know, that bestiality is wrong

      00:51:51.920  for the reason that it violates an animal's, what, sovereignty or something?

      00:51:58.720  Yeah, yeah, exactly. Autonomy, by the way, there we go,

      00:52:02.240  violates their autonomy. And then simultaneously be okay with killing them for food.
 
      00:52:06.560  Yeah. And so I guess that's not a situation in which you're like putting numbers on things.
 
      00:52:13.920  You're just like making an argument that these two are inconsistent. But I think what
 
      00:52:17.280  people are trying to do when they're putting numbers on things is abstract away the details
 
      00:52:22.800  so that you can see if you hold two completely incompatible views. Then there's a question of like, well,

      00:52:28.720  once you've done the abstraction, if you find an incompatibility, is that actually telling you
 
      00:52:33.520  anything useful about the world? I think you would say no, and they would say yes. And maybe
 
      00:52:37.200  that's where the difference is. Like, what's the actual utility?

      00:52:40.640  I wouldn't necessarily say no. Like, if anything, the
 
      00:52:45.200  bestiality and vegetarian example makes me more pro bestiality than I would have been before.
 
      00:52:51.760  So I think you're pointing out contradictions. Excuse me,
 
      00:52:58.160  I think pointing out contradictions is very important. And there are many times when our
 
      00:53:02.240  moral intuitions do need to be changed. So I'm not saying moral intuitions are always right, by any
 
      00:53:06.480  means. I'm more saying moral intuitions are my starting point, and then they need to be kind of
 
      00:53:13.120  buffeted and constrained and directed by moral argument. But that's not the same as saying

      00:53:20.800  that I seek a perfect moral framework that can answer all problems between now and the end of time,

      00:53:27.920  and then throwing up my hands in despair when I find counterexamples to these frameworks.
 
      00:53:34.880  Because so often MacAskill will talk about how expected value theory has

      00:53:40.160  lots of problems with it, but it's the best tool we have. And that's the seeking of a perfect

      00:53:47.280  framework. The answer is not to find a different

      00:53:50.640  framework that has fewer problems, but rather to overcome the need to have a perfect

      00:53:56.400  question-answering framework. It's like they want a fountain of wisdom that they can just ask
 
      00:54:00.640  questions to and have it tell them what to do in all these moral circumstances when really we
 
      00:54:07.360  should just recognize you have to go circumstance by circumstance and not try to find the perfect
 
      00:54:13.120  framework. Yeah, yeah. I mean, yeah, okay, I guess we've talked about expected value now.
 
      00:54:16.640  Yeah, but yeah, I totally agree. Like the difference between public policy and population ethics is
 
      00:54:24.800  you're constrained by the actual policies that you're considering. And so you're not in this
 
      00:54:28.720  abstracted world with utilities. And it's very specific to the problem at hand. And also,
 
      00:54:34.160  in public policy, you can always come up with new options. Whereas population

      00:54:38.480  ethics is very sort of game-theoretic in nature, where you have a set of options in front of
 
      00:54:42.880  you. And that's all you're allowed to do. Because you've abstracted yourself into this mathematical
 
      00:54:47.120  world, obviously you can't just create new options. You can't say, oh, maybe this third policy

      00:54:51.600  is better, which of course is how real life works. And then, well, that's
 
      00:54:58.240  almost the last chapter. The last chapter is taking action. Which, tiny

      00:55:02.800  clarification: the book is split into five parts and ten chapters.

      00:55:07.280  Yeah. So the last part, part five, consists of chapter 10. Yeah, part five is, yeah,
 
      00:55:11.600  solely chapter 10, which is just about taking action, which we talked a little bit about earlier,
 
      00:55:18.080  like he talks about what people can do, which in the abstract is quite vacuous. But then he gives

      00:55:23.520  examples of people who are working on problems. But like we said earlier, to work on these problems,

      00:55:28.000  you don't need to adopt long-termism. And so I found that chapter to be quite underwhelming.
 
      00:55:33.200  And even he himself, like, he was talking on the Tim Ferriss podcast earlier on, and on there Tim

      00:55:37.760  Ferriss kind of presses him and says, like, you know, where should people donate? Like what,
 
      00:55:44.080  what should we do? Like what are some, what are some wins, some concrete wins from long termism?
 
      00:55:48.080  And MacAskill gives a few examples. He gives an example of far-UVC. So this is like a type of
 
      00:55:54.720  lighting, I guess, that treats rooms for certain bacteria, right? So this,
 
      00:56:02.160  I thought again, was a great example of something totally reasonable,

      00:56:05.680  not the craziest thing. And then he says early detection of new pathogens, which is related.
 
      00:56:11.520  Also, like obviously, fuck yeah. Better PPE for people working with dangerous
 
      00:56:17.840  chemicals, and then some technical AI safety stuff, which again, is mostly cashed out in terms of
 
      00:56:23.360  just giving money to PhD students who are doing classic ML research, just focused on robustness

      00:56:29.280  and whatnot. No, I think that's all I got. I mean, so yeah, taking action

      00:56:32.800  is the last part of the book. It's good, but it doesn't seem to require long-termism. And so,

      00:56:39.680  one thing I couldn't help thinking while reading the book was like,
 
      00:56:43.280  presumably if you introduce a new view, right? Say I was to introduce a new scientific theory,
 
      00:56:49.120  and it agreed with general relativity 99.5% of the time. That's not a benefit of the new theory,
 
      00:57:01.920  right? You wouldn't take that as evidence that the theory was true. What you look at is the 0.5%
 
      00:57:06.560  of times where it deviates from general relativity, and you ask whether it makes better predictions in that
 
      00:57:12.080  area, right? So sort of like the difference, the delta between these two theories. And I was trying
 
      00:57:17.200  to do that a bit with long-termism: insofar as it doesn't just line up with what I think most people

      00:57:24.800  would consider to be totally valid problems, what more is it recommending beyond that? And the answer
 
      00:57:31.680  seems to be relatively vague things and then some AI stuff. And so considering that delta,
 
      00:57:39.440  it's not the most impressive thing ever. Yeah, but I also think that the consequence of this kind of
 
      00:57:46.400  book, and I include Toby Ord here, is that in five years, and this is a falsifiable prediction,

      00:57:52.400  long-termist research is going to be dominating the EA giving space

      00:57:57.760  more than, say, global poverty research, which is what MacAskill's first book was focused on. And that's

      00:58:03.520  going to hurt real people, where there's a finite amount of malaria bed nets, etc. And if funding
 
      00:58:11.280  for these kind of concrete problems is being siphoned away and instead we're getting AI
 
      00:58:16.720  research, AI safety research, and long termism research, etc. That's going to hurt real people.
 
      00:58:21.120  And I think that this book definitely, I think it's contributing to that while being able to deny it
 
      00:58:28.000  and that's maybe an unfair critique, but that's my honest impression that the book is so
 
      00:58:33.280  anodyne, but it's like a gateway drug to the more intense stuff, which is on the internet and which
 
      00:58:39.440  MacAskill's written himself. And so it's just a strange kind of superposition between dangerous
 
      00:58:47.440  and banal, I guess. So I read the book once, or I listened to it, and then prepping for it,
 
      00:58:53.120  I read a few more chapters. I've never had kind of less interesting things to say
 
      00:58:57.600  about a book I've actually dedicated time to thinking about. I almost feel like embarrassed
 
      00:59:03.120  that I don't have that much to say, but I truly was just kind of at a loss for what to think.
 
      00:59:08.960  Like I would, yeah, sometimes I'd be like, yeah, I mean, taking the future seriously,
 
      00:59:12.720  that's a good idea. Humans are cool, I want to see us flourish. That's great. Other times,
 
      00:59:17.520  I'm like, you're talking about AI replicating, taking over the world, like, you know, this is
 
      00:59:22.800  crazy. And so it's just such a bizarre thing to read. And I can't tell, like, am I just too deep into

      00:59:32.560  this stuff? It's like, there's nothing terribly new; it was just like a boring version of
 
      00:59:40.480  stuff we've already talked about before. So it wasn't like, like, there's a couple things like the
 
      00:59:47.440  vaccines on the free market. I thought that was a great idea. Hadn't heard that, hadn't thought
 
      00:59:51.840  about that before. Obviously, like, duh. But most of it is like, okay, here's the expected value stuff.
 
      00:59:57.200  Here's the AI stuff. Here's the value lock-in. Check, check, check. Yeah, there we go.
 
      01:00:01.680  There we go. But yeah, in general, I agree. Time to move on. It's a weird space, man. It's
 
      01:00:07.600  weird. I mean, even someone like Sam Harris will have him on, and, like,

      01:00:10.880  they'll get excited about the ideas. But I have to wonder, like, is this influencing Sam's
 
      01:00:17.040  actual giving? Like, I know he donates a lot of money, you know, with his foundation and stuff.

      01:00:20.800  The last I checked, he donated a bunch of money to, like, Afghan refugees or something. Yeah,
 
      01:00:25.200  I don't think that's a long-termist cause. Like, yeah, I'm wondering,

      01:00:29.280  is this just intellectual cotton candy or something that we're just, like,

      01:00:33.760  eating up? Because it's, like, fun. Yeah. Like, there was a huge splash when

      01:00:40.320  MacAskill went on the podcast circuit. But then it went radio silent. Totally.
 
      01:00:45.200  And maybe it's just going to be radio silent forever. Like, what has he actually advocated

      01:00:50.960  that we do differently? Exactly. Beyond, like, donate more to fossil fuel replacements, and, okay,

      01:00:56.320  great, man. I agree we should do that. But yeah. Yeah. I also don't want
 
      01:01:02.480  civilization to collapse. Like, I agree. Now what's one concrete thing that I can do to avoid that?
 
      01:01:08.640  Like, yeah, no, exactly. Yeah. It's like, learn more and donate. Yeah, learn and donate.

      01:01:13.920  Goodness. I don't have much more to say, to be honest.
 
      01:01:18.880  Me too. I think we've been going for an hour and a half. So we definitely got a bunch
 
      01:01:24.320  out there. So I don't know what we're going to do next time. I think we're having some guests on
 
      01:01:27.520  at this point. But we should be careful not to announce them because you never know.
 
      01:01:33.280  Yeah, never know. But hopefully that'll happen soon. And this will probably be the last thing
 
      01:01:37.360  we say about long-termism, unless MacAskill writes another book.
 
      01:01:40.240  Yeah. I hope so at least. Yeah. I want to move on. Yeah. Yeah. But cool then. Well, this is fun.