Sam Harris and David Deutsch
Hello, so this is a response for Andy, who asked a question that I've been asked before
and which I'm sure will be asked in the future, which is: where exactly does the disagreement
between David Deutsch and Sam Harris lie in the second podcast, the Waking Up podcast
of Sam Harris, where he interviewed David Deutsch for a second time, this time
specifically about The Moral Landscape. I think that was Sam's third or fourth book. During
that podcast, one of the first things that David said was that it's very difficult to
articulate precisely what the disagreement is because they come from such different places
in terms of their epistemology. And so the vocabulary that David
uses, as prosaic and as common as it might be,
has a very different meaning in the Popperian sense than it does for many, many academic
philosophers. And so this has the consequence that anyone trained in academic philosophy,
strangely enough, is often particularly poorly placed to really understand the
Popperian notion of knowledge. And the Popperian notion of knowledge is, ironically, in some
ways, closer to the common-sense notion of what knowledge is. The academic version of what
knowledge is, following on from Plato, often has these overtones of certainty and justification
and foundation and belief. And the Popperian idea is far more fallibilist and anti-foundationalist.
list. It's more about trying to guess what is true and then checking whether or not
your guess is correct. So this idea that they're both speaking different languages is
something that David flags early on. And it's humorous in retrospect, because when
you listen to the entire podcast, as I've done a few times now, that flag that David
plants, about the fact that they're speaking different languages, highlights every single
error that occurs throughout the conversation, where David will make a point
and explain precisely what the disagreement is, and Sam doesn't understand. And it's
no fault of Sam, it's not like he's incapable of understanding, but he has an epistemology
following Plato, that it's possible to have a foundation and indeed it's desirable to have
a foundation. And although Sam can make noises that sound as though he's not an infallibilist
thinker, as though he isn't striving for a particular, certain way of thinking, nonetheless,
what he says reveals precisely what his internal psychology is on these matters.
Although he can probably give a good definition of what fallibilism is, ultimately, and
this happens on many, many topics, he's not a fallibilist. He's definitely a rational person
and he's very reasonable and he's trying to reach the truth like we all are, but as
we will see, I've written down a few examples where he explicitly reveals
the fact that he's not a fallibilist, that he thinks it's possible to finally
grok the answer, whether it's in morality or whether it's in science. So we'll get there
eventually and I'll attempt to explain exactly where this chasm of difference is between
the two. And this is why it's very difficult for the majority of people who hear the podcast
to understand where the disagreement is, because when they hear words like knowledge, or
when they hear words like foundation, they're hearing something different to what somebody
who has read Popper or who has read Deutsch understands these words to mean.
The last video I did about The Beginning of Infinity was on chapter four, 'Creation'. There's
a section there where David speaks about how knowledge is created inside a human
mind. We don't know all of the details, but we know something. We know it can't be spontaneously
generated. Now that might seem like, again, a prosaic sort of mundane vanilla thing to say,
but it means that the bucket theory of mind is false. It means that when you're using words,
you cannot download what you think those words mean into the person with whom you're
having a conversation. You can't download it from your brain into their brain. They've
got a certain understanding of what the words that you're using mean, and you've got a certain
understanding of what the words you're using mean. And you can try and explain it using
other words, but then those other words, the words you use to explain the words
you're using, might not do the job. And this conversation is a wonderful example
of this, where David says up front that the words he's using have a different
sense to what they have inside Sam's mind. He flags it, and tries to explain again and
again throughout the conversation. And although Sam admits that he is willing to go there
with David, willing to grant him certain things, he never really does. And so the error
never quite gets corrected. And so therefore at the end of the conversation, Sam isn't
sure where the disagreement is. The majority of people have an alternative epistemology,
something other than Karl Popper's view of knowledge. For example, they think that
knowledge is justified true belief. They think that you need to begin with a foundation
and on that foundation, then you accumulate knowledge, you build it up. And this is an
anti-critical vision of how knowledge is created. In the Popperian view, you simply
have problems. You can start anywhere at all. And you attempt to solve those problems when
you have them, when you have ideas that are in conflict with one another, by using a critical
method. It's a completely different vision. Instead of accumulating and building, you're
kind of refining and cutting things down, establishing new ideas or improving existing
ideas. So there's no reason to think of knowledge as this sort of base that you begin
with, on which you then construct towers. And at some point, of course, under that vision,
you finish the tower. In the Popperian view, it is literally an infinite
process, not easy to visualize, but I guess people want a visual sense of what's going
on there. But we're talking about the abstract growth of knowledge, not the construction
of buildings. Also, very early on in the conversation, there's another hint that there are
problems of process here, and I guess problems of psychology or even linguistics.
The problem of process is that David, very early on, says, okay, well, let me
explain what the disagreement is, and then begins to preface what he's about to say.
But before he gets to the main point, Sam interrupts, as happens in conversations.
And so it never quite seems as though David's able to get to explaining precisely what
the disagreement is, except that it's embedded in various other parts of the conversation. And so this,
again, will leave a typical listener, perhaps, with the impression that David never actually
did what he said he was going to do, which is to articulate where the disagreement was.
He does so. But it occurs many minutes after he says, now I'm going to explain
what the disagreement is, because Sam interrupts, as a natural conversation would have these
interruptions. Okay, so the Popperian vision is that knowledge is always conjectural.
It's guessed. And so when we're talking about what will increase the well-being of conscious
creatures, which is what Sam is concerned about as kind of, if not the foundation, then
the purpose of morality, the purpose of morality is to maximize the well-being of conscious
creatures. However, if we are going to attempt to maximize the well-being of conscious
creatures, then we're kind of guessing what their states are going to be. And we
can also come to see later on that Sam sees the well-being of conscious
creatures as intimately tied to their biology, that he doesn't take seriously the notion
of substrate independence, let alone the universality of human beings. Human beings
have a universal mind, and so their well-being cannot depend upon their neuroscience,
their neurology; it cannot depend upon the particular makeup of the neurons inside their brains.
So instead, just to preface what morality really consists of, it's about solving moral
problems. And in order to solve moral problems, we have to conjecture explanations about
what might improve things. And they can always be false. We can always criticize them. And
that includes any starting point that we might have. If we think that we need to start
with the well-being of conscious creatures, that could change, especially in so far as we
refine what we mean by conscious creatures. Now another thing that I got the sense that Sam
might have been hinting at, and this is early on, is he wants to defend the thesis that
moral truth exists, and I'm with him there, and I think David's with him there as well.
Moral truth absolutely exists. And it exists in a similar way to the way that mathematical
truth exists. And at some point I think David actually does, indeed, mention that, that
there's this kind of objectivity to mathematical truth and to moral truth. Neither of
which are dependent upon the truth about physical reality. Physical reality is a separate
kind of objectivity to what mathematical reality is. There can be truths in mathematical
reality that do not depend upon what's going on in physical reality. One such truth is that,
for example, the decimal expansion of pi is infinite. But it is impossible in physical reality
to represent that anywhere, because there simply aren't enough atoms in the universe, or even
the multiverse. You need a literally infinite number of particles, an infinite number of different
states of the universe in order to represent this decimal expansion of pi. But the infinite
decimal expansion of pi exists out there in abstract mathematical reality. It's a thing.
So the other thing here is a distinction between abstract, objective,
ontological reality that's out there in terms of mathematics, and indeed in terms of the laws of
physics: whatever the true laws of physics actually are, they exist, they absolutely exist,
but our knowledge of those laws of physics is fallible, like our knowledge of mathematics, or our knowledge
of morality. So this is the difference, again, between ontology and epistemology. Ontology is:
what is true in reality? Now, what is true in reality? We don't know. All we have are
fallible explanations of that reality. The fallible explanations of that reality are not that
reality. This is true of physics, where the laws of physics absolutely exist. They're
really real. We don't know what they are exactly. We have approximations to them. And so, for
example, we used to think that the law of gravity was Newton's law of gravity, which looks like
F = Gm₁m₂/r². We now know that's false. It works within a certain domain.
It's extremely useful for solving particular problems, but ultimately it's false and you can do
experiments to show that it's false and it fails in certain regards. The same is true of mathematical
truth. Mathematical ontological truth is out there. What we have are explanations of that
ontological truth. So we have epistemic claims, okay, which are just things that we write down,
things that we understand in our brains about that reality. It doesn't matter what the domain
of inquiry happens to be, whether it's science or mathematics or morality. There are
truths in each of these areas. They are part of reality, but we only have, because we are
fallible humans, we don't have direct access to any of it. We're fallible. And so when Sam says
that moral truth exists, it's not entirely clear whether he thinks that we have direct access to it.
And at times, I think that he thinks we may be able to get that truth in hand.
And I'm not convinced by that. Again, I'm a fallibilist, David's a fallibilist. And so this is one
of the areas of disagreement: people make noises on the subject as if to say, we're nearly
there, we're about to find out what the truth happens to be, or in some distant future we're going
to know what the truth is. So I'll just move on to foundationalism. So
foundationalism is this idea that you begin with a foundation, and then on that
foundation, you build up the rest of your knowledge. It's the idea that you need to begin somewhere,
you've got to start somewhere, you know, your axioms, your premises, and from those you can
derive everything else that you need to know. But this is a Platonic mistake, this idea that
you just begin here and then you justify, justify, justify, and eventually you keep on
justifying until you reach the end, because you've found everything that needs to be understood.
This is completely the opposite, in many, many senses, of what
Popperian epistemology is about, and of how knowledge is constructed in reality. There's no
reason to begin here because we don't know that here is an absolute truth. We don't know that
anything's an absolute truth. And so because of that, all we can do is to guess what might be
the case in order to solve any particular problem. We don't have to worry about what the foundation
is. There's no bedrock. And even though there's a final reality out there, for many, many reasons,
we can't ever get to that final reality. And this is part of the conception of the beginning of
infinity. So you can always correct errors. There is an infinite amount to learn. We are fallible.
And so you can never be sure that whenever you've discovered something that appears to solve your
problem, that you're not going to find errors with it. And so with Sam, when it comes to
morality, he wants two kinds of foundations. He wants to talk about morality as being
about the well-being of conscious creatures. And the other idea, in order to establish an objective
morality, in order to begin somewhere, is: let's consider the thought experiment of the worst possible
misery for everyone. So the worst possible misery for all conscious creatures. And he says that if
anything is worth avoiding, it's worth avoiding that. And we can agree that if there's anything
worth avoiding, it's worth avoiding that. But there's no need for this foundation. There's no need
to begin there, because it doesn't help us solve any particular moral problem. It's a response
or a critique to other ways of thinking, such as relativism. Moral relativism
is this idea that your morality depends upon the culture from which you come or from the family
in which you find yourself or from your particular frame of reference, your particular psychology
determines what your morality is. And so Sam is right to want to critique moral relativism. This idea
that we shouldn't criticize other cultures or criticize other people for their moral beliefs.
And so if there happens to be a culture out there somewhere, that thinks that if little girls learn
to read, they should be stoned to death, then who are we to criticize that culture? And as Sam says
in his very powerful TED Talk on this topic, who are we not to criticize such a culture?
So Sam wants to respond to the moral relativists by saying, well, let's consider the worst
possible misery for everyone. Now a moral relativist would say that no such state exists,
but he's appealing to people to say, you can't go with the moral relativists because
the worst possible misery for everyone is objectively bad. And therefore what is objectively good
is any movement away from that state. Now this cannot be a foundation for morality
because as David points out in the conversation later, and I think I've observed in my response
many years ago to Sam Harris on this as well in the moral landscape challenge,
as soon as you get a small distance away, any distance away from the worst possible
misery for everyone, I think David says one millimeter away from the worst possible
misery for everyone, then what? Then what? That foundation that we begin with is of no use whatsoever
to allow us to decide what to do next, which is the sphere of morality. What do we do next? What
should we do now? And that's if you're a millimeter away. Now, you know, in the year 2018 here
on Planet Earth, we are a lot further away than one millimeter away from the worst possible misery
for everyone. So this worst possible misery for everyone is a critique of the idea
that there's no difference between good and bad. And as a critique of that, it's a good critique.
But it doesn't get you answers to moral problems, because where we are now, we have an infinite
space of possibilities before us. And what we do next depends upon a whole raft of things
about what we value, but what we should value is also another part of morality. So Sam has
this unalterable foundation he wants to begin with, the only purpose of which, I would say, is
as a critique of relativism. And the other is the well-being of conscious creatures.
And insofar as that is the domain within which we want to
conceive of morality, it has problems for human beings, because there cannot be a dependence upon their
biology. And yet this is the assumption, the implicit assumption, that operates behind this,
and we'll get to exactly why in a moment. But I just want to sort of fixate on these two foundations.
One being, morality is about the well-being of conscious creatures. And the difference between
good and evil can be articulated by considering the worst possible misery for everyone. So we've
got these two immovable things. Now this is a misconception because it simply makes the same mistake
that religious thinkers make, which is that you need to begin with a dogma. You need to begin
with this unalterable foundation and upon these two pillars, that's where you build the rest of
your knowledge. These things cannot be criticized, but this is false, this is wrong. And even if
your intentions are good, it's still a mistake. Religious people have good intentions: the idea that we need to
begin with the Ten Commandments, the idea that we need to begin with the fact that God exists,
or love exists, or that Jesus zoomed up to heaven, or that Mary was a virgin, etc., etc.;
people have good intentions in wanting to enshrine dogma, in wanting to enshrine a foundation.
Now Sam says, well he's willing to conceive that this could be wrong, that this could be fallible.
However, he refuses to admit that morality could be anything other than about conscious creatures.
And he says again and again that this is his foundation. So why can't morality be about
conscious creatures? Why can't it purely be about conscious creatures? Now Sam has a
pretty forceful argument in The Moral Landscape about how if we were to consider a universe
in which there were no conscious creatures, then that by definition would be a universe without value.
Fine. But morality is a sphere of truth, of ontological truth. So those ontological truths
actually exist. They're out there in abstract reality, and so they occupy that reality.
Even if we can't find out what they are perfectly and we can't, they're independent of the
experiences of conscious creatures. So they might be about the experience of conscious creatures,
but they are independent of conscious creatures. And in particular they cannot be about
the neurology of the conscious creatures. And they can't be about the neurology of conscious
creatures, because human minds are universal. Even a possibly non-universal mind,
like, for example, a cat's, if a cat has conscious states, and especially
the universal mind of a human being, of a person, could in principle, as David Deutsch has shown,
be downloaded onto a computer. We could be put into a matrix.
At that point, once our mind, which is a kind of program, is put into a silicon computer,
or whatever the computers of the future happen to be, then it cannot possibly be the case
that morality is about anything to do with the biology of the human brain,
because we won't have biological brains anymore. We'll be universal explainers inside
of some kind of silicon computer. There's a point about half an hour in, I can't remember exactly now,
but I had to write myself a note because I thought this was a very valuable insight,
where David says that the criterion by which institutions should be judged is how good they
are at resolving disputes between people without violence, without coercion. And he's not saying he
knows what they are, but this is an absolutely crucial point about politics and economics and morality
generally. It means that the scope of government, for example, is extremely limited,
that if we want political institutions that work and that are moral institutions, they cannot be
coercive. And so when people consider things like the welfare state, and when they have good
intentions, like replacing the welfare state, let's say, with something that's an incremental
improvement like universal basic income, nonetheless, this requires some amount of coercion,
that if Joe over here is not earning much money, but he's earning just enough that
you think he should hand over some of his money to Mary because of universal basic income,
then that will require a certain amount of coercion. The only way to avoid that is to allow Joe
to give Mary charity, to willingly, voluntarily do this. But the people who argue for universal
basic income, in the same way as the people who argue for welfare, or who
argue that socialism should obtain, or that communism should obtain, or any other kind of system
in which the government determines where the wealth gets distributed, where your wealth gets distributed,
want to implement a coercive system. And Deutsch's criterion here is that the institution,
the political institution, needs to be judged by how good it is at resolving disputes between
people without resorting to coercion. And so if you can't reason someone into something,
then arguing that we need to use force, especially to extract money or something like that,
that's clearly inferior. And this is tied to fallibilism, and it's
tied to this idea about what human beings are: that we're universal explainers, that we can come
to understand each other, that if you have an idea and it's good, and I'm a reasonable person
who's a universal explainer, then you will be able to use words and argument to explain to me
why it is that your idea is better. Now in discussions about economics and government,
it seems to me to be the case that very, very often we end up in a situation where
one side throws up their hands and says, well, I cannot convince you, nevertheless
we need to use force here, we need to use a mechanism whereby money is taken from these people
and given to those people, etc. Okay, that's a diversion. And David further adds,
this is an important quote, that there's no limit to the possibility of removing evil via knowledge.
And so all evils are caused by a lack of knowledge, and so therefore he's saying that
whenever there's a problem, whenever there's an evil or suffering, then what we need to try and
bring to bear is knowledge, we need to bring some kind of creative inspiration to that situation
in order to find a solution. But coercion can't be the thing. Now I'm going to read a
direct quote, something I've written down that Sam says word for word, at the 49-minute-50
mark, and he says: imagine this future of a completed science of the mind where we not only
understand the brain bases or the computational basis of every possible experience, but we can
intervene as completely as we would want. And we now have this machine that I can put on your
head and we can dial in any possible conscious state, it's just this perfect experience machine.
So that's two sentences, a very long sentence and then a short sentence.
But I just want to emphasize this about Sam Harris, whom I admire: I think he's got the best
podcast out there, I've read all of his books, I think he's a fantastic thinker.
But I just want to emphasize that the language he uses is no accident. It's anti-fallibilist,
it's foundationalist, it's not Popperian. And even though he says at various points in the
conversation that he's willing to concede the fallibilist point, his underlying epistemology,
which shapes his psychology and therefore the way in which he comprehends the world, is
here in stark contrast with his explicit statements. So the explicit statements about, yes,
I'm a fallibilist, yes, I'm willing to admit that I could be wrong about this, are very,
very different to what comes out when he's just speaking naturally. And so, again, he says
'completed science', as if we can finally grok the final answer, we can finally get there
and we will get to a point in science where there will be nothing further to discover
a completed science. If he didn't think that was possible, he wouldn't put the word
'completed' in there; he would just say, imagine this future of a science of the mind where we not
only understand the brain bases, etcetera. But he says 'completed'. He also says 'every possible
experience', as though that were possible. But as David goes on to explain, it's not possible to
have every possible experience written down in an algorithm, put into a computer,
enumerated in a computer. That simply isn't possible. And it's not possible because
that presumes that you can predict the content of future knowledge. So for example, the experience
of what will be discovered in a hundred years time, that's a possible experience. The experience
of discovering something that no one has yet discovered, that experience can't be put in there.
So you can't have this machine that Sam wants, where you can put it on your head and dial in
any possible conscious state, and it can't be a perfect experience machine. Okay, so how would
a Popperian rewrite this? And again, I'm saying it's no accident, the way in
which he phrases this. So one way that you might rewrite it is: imagine this future of a
science of the mind where we not only understand the brain bases or the computational basis of
experience, but we can intervene, we can intervene in any way we like, and we now have this
machine that I can put on your head and we can dial in conscious states. It's just this experience
machine. Okay, so that would be fine. I think that would work in a Popperian sense.
But Sam thinks that you can have this completed science of the mind, that you can have
these machines that could contain all these perfect states, and you could just pick the one that you
like the most, the perfect one. So he thinks that you can get to a peak, and these peaks on the
moral landscape he thinks are absolute peaks upon which you can make no further improvement.
Of course, later on he will say, oh no, I didn't mean that, I mean you can make improvements.
So David interjects at that point along the lines of what I've said: he says something like,
the vast majority of these conscious states that are in this machine
we will never know, because we won't have the knowledge. The infinite majority will always be unknown.
So it's a big difference between Sam and David. Sam says we can dial in any possible conscious
state, and David says the overwhelming, infinite majority will always be unknown. There's a big
difference between having every possible conscious state and denying that you
can, saying that in fact the infinite majority will forever not be known. That's a huge disagreement
in terms of quantity. It's the difference between zero and infinity. And David used the example:
there is the experience of knowing tomorrow's scientific discovery, which we can never
download today. But Sam comes back and says he didn't mean that these conscious states would
be finite, but I think that's kind of a fudge. Either the computer can replicate all the possible
states or it can't. And if it can't, then his machine cannot be a perfect experience machine.
It can't be based on a completed science of the mind where you understand all the ways in which
the computational states or the brain states relate to conscious experience.
And the reason is this: if we manage to find a way to capture the mind inside silicon,
we'll know what the algorithm is for creativity, we'll know what the algorithm is for a human brain.
But that doesn't mean that we will know what every single computational state
is, or how each relates to subjectivity. Because there can still be an infinite number, an uncountably
infinite number of conscious states. Being able to write down the algorithm for creativity
doesn't mean we know all the possible outputs of that algorithm. If it's a creative algorithm,
well, we already have those, right? They're running in our brains right now. Even if we knew what the
algorithm for creativity is in our own brain, that doesn't mean that we know what the output is
going to be. Because, presumably, part of that algorithm has the quality that it is a knowledge creator,
and no knowledge creator can predict the growth of knowledge. That's simply a fact of epistemology
because as soon as you create something new, then you're going to find errors. And which errors
you eventually find in that bit of knowledge depends upon
your preferences and your free will. But this is probably another disagreement between David and Sam.
Anyway, Sam gives this idea, the idea that he's wrong about his thought experiment,
that you can't have this completed science and all the possible
conscious states, short shrift, and he wants to go back to feelings. And this
happens a lot in the conversation. He wants to go back to considering how people either feel good
or they don't feel good and how you could feel better and how you could feel worse. And so
he just wants to consider, okay, well, just imagine that you could feel like Mozart did during
his best moods, or like John von Neumann during his best moods. What would it feel like to be
Mozart composing a symphony? Must that not have been a wonderful state to be in?
And David points out that, well, and again David doesn't quite use these words, but
it's this idea that anchoring morality to pleasure versus pain is misconceived. When Sam talks
about the worst possible suffering for everyone, I think he really has in mind torture or pain,
some sort of physical suffering: you could turn up all the pain receptors in a
conscious creature, and that would be the worst possible misery. And at the other end of the dial,
you could just maximize pleasure. And so he starts to talk about pleasure later. But he can't
decouple this idea of feelings, sensations from morality, which is what David attempts to do here
when they start talking about what it feels like to be Mozart. And Sam says, you know,
it must be a very happy kind of experience. But David said, well, there's pleasure and then
there's joy and the joy that Mozart would have had is the joy of solving problems in music.
So in what way could you download what it's like to be Mozart?
Well, perhaps you can download this sensation, but it wouldn't exactly be the joy that Mozart felt
because the joy that Mozart felt had a lot to do with the fact that he just solved some problem
in music, some problem in composition. And that has a certain sensation associated with it,
sure, but the joy, the enduring feeling of having solved the problem, can only come from having
solved the problem. Now, one might want to say, well, you could download that experience,
the experience of being Mozart and solving the problem, but then you would be Mozart,
then you would be Mozart. If you're downloading his entire mind into your own,
you're no longer yourself, you're not yourself having the experience of what Mozart had,
you are Mozart because you are solving the problem that he solved, you're valuing the problem
in the way that he valued it, everything about you is then tuned to being Mozart.
So it can't be the case that you can simply download sensations, because happiness is a product
of doing something. And so happiness is always about solving problems. And suffering is a
condition in which you're thwarted in some way: you're unable to keep solving your problems,
or to solve a particular problem, and it's upsetting you. So it's about problems. Morality is
about problems. So David is arguing that you can't download the sensation of what it was like
to be Mozart or John von Neumann or anyone without recreating that individual person.
But Sam insists that you can, that you can have a form of happiness that is independent of
problem solving. And so he mentions a lot of pleasure and he mentions pain. And so he talks
about how drugs or medication or certain states during meditation, like enlightenment,
can give you a form of happiness or pleasure that is independent of, what? Independent of
problem solving. And everyone would agree that an opiate high might be pleasurable.
But this is a temporary thing. And David points out that the experience of most heroin addicts
would suggest precisely that. Maybe at first, when people
try a drug, it's interesting and new, because you're having new sensations. But as time goes on,
if people are using this drug all the time, it becomes increasingly boring. There's nothing new.
And so they might become addicted. And in fact, that will become a real problem.
That the pleasure is no longer coupled to happiness. The pleasure is now coupled to unhappiness.
There is a deep divide here, a real chasm, another real chasm: Sam argues frequently
that morality is in some way about feelings, the well-being of conscious creatures,
and every example he gives is about some kind of state of happiness or state of pleasure or
state of pain. And David wants to say that instead, morality isn't really about that; it's
about solving moral problems. And so these are two different ways of viewing what morality is
about: sensations versus problem solving. Now, Sam attempts
a thought experiment with the Matrix. I can't remember exactly the point of the thought
experiment, but the interesting part for me was
where Sam said that the morality of the people within the matrix, if you're inside this matrix and
you're just having a great time, the morality of the people within that computer program isn't
relevant because they're not real people. And then David says, well, then that means they're
not creative. So it wouldn't be a pleasurable experience to be inside this Matrix-type computer
world, this Matrix-type heaven. Let's say we could make a Matrix
that was heaven-like. Sam is saying that you could do anything you like with the people that are
there, because the people that are there aren't real people. And David says, well, then that means
they're not creative, by definition, by his definition of what a person is: a person is a universal
explainer, a person is a creative thing. Now, if they're not creative, if these people
inside the Matrix aren't creative, then they can't collaborate with you on any of your problems.
They can't really help you with any of your problems, really, because they're not able to
contribute to your problem situation, because all they have is a finite set of responses that
they can give you, like a non-player character inside some computer game. They're not
really going to be able to help you very much, any more than Wikipedia can help you. If it's a
real problem in your personal life, Wikipedia may or may not be helpful, but really what we
want, ultimately, is other people. We need to collaborate in order to get our
problems solved. And our problems matter partly because there are other people around us:
we want to help solve their problems, and we want them to help us. Anyway, we need other people.
Now, David says that because of this, because these other non-real people aren't creative,
if we were inside this kind of heaven-like Matrix, we'd quickly notice
that they aren't responding like normal people do.
And Sam, in response to this, goes, "Right." So he seems not to get it, or doesn't buy the argument.
And to me, this is another chasm of difference between the conception about what a person is.
So Sam kind of thinks, and this is the prevailing conception, that what we have are computer
programs that are maybe artificially intelligent in some sense, and we have people, and maybe we'd
want to agree that an artificial general intelligence is a person. But then there's kind of a
continuum between the two, and then maybe there's something further beyond artificial general
intelligence. But this is simply false. You have only two states. It really is a binary thing.
People don't like this. They say, you're being black and white, you should think in shades
of gray. Of course, usually. But not in this situation; here it really is the difference between
black and white, because we have things that are not creative and things that are. There are no
partial degrees of creativity. Either you can tackle a problem because you're a universal
explainer or you can't. And so this is a difference as well, okay? That you have people and
things that aren't people. You have general purpose explainers and things that are not general
purpose. We are general purpose explainers. And we want to interact with other general purpose
explainers, other people. They are what's valuable in this world. They are the ones that are likely
to be conscious. They're creative. They've got free will, Sam won't like that. But creativity is
the thing that makes a universal explainer, a universal explainer. And you quickly notice if something
is not a universal explainer. It's not going to be able to give back to you in the same way.
It's not worthy of the same kind of love and compassion and fun times that universal explainers
or other people are. So there's a real difference here. There's a real chasm of understanding once
more between Sam's idea about the centrality of people and David's. Then we get into a section
that is a little bit of a diversion, I think, more of a distraction from the meat of the
disagreement. And Sam talks about meditation and utility. And I'd agree with Sam that meditation
is a very useful, pleasurable thing. It has a whole bunch of benefits. But he says that that state
is a state in which you can have happiness while not solving problems. And I profoundly disagree.
And I was so glad that this is where David jumped in and said, well, you know,
it might feel as though it's about subjective feelings, but it's not about subjective
feelings. When Sam talks about meditation, he always talks about the subjective-feeling
side of it. And yes, there is a subjective-feeling side of it, but that doesn't mean you can't be
wrong about your own subjective feelings. What David contributes here is a very
Deutschian response to meditation, even though he says he's not experienced in the area
himself. He says, well, the pleasure that one might get from meditation might be because
you kind of dampen down your conscious state, and maybe in dampening down your conscious state
you allow your unconscious mind to work. And your unconscious mind is real; it's
there, and it's attempting to solve problems as well. But sometimes the conscious and the unconscious
probably have this interaction where there could be obstacles or blocks where your conscious
mind is just getting in the way of your unconscious mind. And meditation might just cause your
conscious mind to relax for a while, to go to the background for a while and allow your unconscious
mind to do its thing. And then it can solve problems unconsciously so that when you go back to your
conscious mind, suddenly you feel a lot more creative and Sam admitted that this is indeed
subjectively the case, that he's had the experience of feeling as though he's far more creative
after having meditated. So admitting that the pleasure of meditation really is cashed out
in what happens after the meditation, namely the problems begin to be solved after the meditation
at a rate greater than what they were before. And he said that this is one of the reasons that
culture has taken on meditation as being an important tool because people recognize this, that if
people are stressed or feeling bad or depressed, then meditation can be a very good prescription
to help that sort of thing because it allows the problems that are normally there to be dropped.
But you don't simply drop them such as they disappear, you drop them such that your unconscious
mind during the meditative state can do its thing, whatever that thing is, so that at the other end
your conscious mind, once you come out of the meditative state, maybe it's days, maybe it's weeks
whatever, is able to work better on solving those problems. So it does come back to problems, there
isn't this state of pleasure with respect to morality that is completely independent of problem
solving. So it's a key point. And what I'd say about the meditative
state, and I've tried to mention this before when Sam's talked about it, is that a lot of what's
going on there, this feeling of the divestment of the I, the feeling that you're no
longer, that you're a witness of your conscious experience, as Sam would say, that you really do
feel a kind of distance between you and objective reality, or even between you and your thoughts,
that you look at thoughts as objects. Fine, I get it, it's great. But this is only to say that that state is very
difficult to describe. It's exactly the same as the difficulty of trying to describe what the color
blue looks like to someone else who's looking at the same sky, let's say. It's the
difficulty of trying to articulate what qualia are like; we don't know how, and that's all
that it is. It's this inexplicit kind of knowledge that we have. We have subjective
knowledge, we know what the sky looks like, I know what the sky looks like, but I can't put it
into words; it's inexplicit. And the same is true of the meditative state. And this is a mystery
for now, but it's just a problem, like how do we do it? Okay, well we can't do it now, I don't see
it as being some sort of reason to think that there's something massively spiritual or
hugely mysterious about this area. It could be the case, but it might just turn out to be a
mundane problem that someone hasn't yet thought of the solution to. Okay, so
again, David says that moral theories should be approached like scientific theories: they don't
need foundations. There are a lot of moral theories out there,
like Kant's categorical imperative, or Rawls's justice as fairness,
or stuff that comes out of the Bible, or the golden rule, etc., whatever your moral
theory happens to be, or indeed Sam's own well-being of conscious creatures. All of these,
these principles, these ideas, these theories should be seen as critiques, as critiques of each other,
or as critiques of any other theory that someone proposes, or is a critique of a solution
that someone proposes. They shouldn't be seen as foundations from which you begin to build up
everything else. Okay. And in response to David saying that, that these famous theories,
from utilitarianism to Kant's categorical imperative to belief in natural
law, etc., should be seen as critiques, Sam says, let me re-characterize my foundation, and he goes on.
So he hasn't heard what David has said about foundationalism, or insofar as he's heard it,
he's heard something different from what David was trying to impart to him.
The receiver of the message and the sender of the message are not guaranteed to share the same
message: the message that the sender sends is not the message that the receiver is guaranteed
to get. It depends upon error correction, and there's no guaranteed method of
error correction that ensures your idea gets into someone
else's mind intact; we don't know how to do that. We always make mistakes, and so I think this is
another example here that Sam just says it off the cuff, it's part of his psychology, part of his
vocabulary, it's part of his way of viewing the world, of thinking about these matters, about
thinking about morality and epistemology, that you have a foundation, you need a foundation, he
doesn't understand that you don't need one, and so he continues to return to it. He just
wants to re-explain it, and so he says to David, in effect: you're not
accepting my foundation, and I know you're against foundationalism, David, but let me re-explain my
foundation. I can't be the only one who sees the problem there. So, just to emphasize, this
comes from what I've talked about before, from The Beginning of Infinity, where David really
articulates this: to understand stuff, to learn, we human beings have to conjecture an explanation.
But Sam, on the other hand, is working within a foundationalist, sort of anti-fallibilist
framework, and although he says he explicitly isn't doing that, that he doesn't need to do that,
to me it sounds very much like a person who says, I'm not religious, really I'm not
religious, but now let me explain to you the divinity of God, and how Jesus ascended to heaven,
and how I go to Mass every Sunday. So on the one hand the person is saying they're not religious,
and on the other hand, with every utterance they make, they are announcing to the
world the ways in which they are religious. And so, yes, Sam is saying he's a fallibilist and he
doesn't need these foundations, but he's explaining again and again and again what these foundations
are. This is the big difference, the huge difference of opinion that they have:
that one person is arguing that a foundation is important, and the other one is saying that that's
the very mistake that you're making, and the other one then goes back to argue and say, well,
if you don't like that foundation, let me explain the other one, and David is saying, well,
you don't need the foundation, you're not quite understanding what I'm saying, and Sam goes,
well, okay, well, if you don't like that foundation, let me re-explain what the foundation is and see
if that will convince you. So this is why there's not as much progress on that front as
there might have been: they're not agreeing on what the word means. I just don't know what
Sam is hearing at that point, because at one point Sam even says something like, but you need
a foundation, even Popperian science needs a foundation. And David tries to say, well, no, that's wrong.
David says that the idea that we need to start anywhere is false; everything is
criticizable. You don't need to start in a particular place, you don't need to start down here,
or up there, or anywhere else, what you have are moral problems, and then you need to approach
those moral problems with a critical eye, in the same way in science. Now, if David has a problem
in quantum computation, he really doesn't need to look at what the foundations of all of
science happen to be, or what the foundations of all of physics happen to be. Now, there might be ways in which
they can critique his solution to a particular problem in quantum computation.
Let's say, for example, he decides to create some algorithm to run on a quantum
computer, but the algorithm requires the quantum computer to have switching speeds that
exceed the speed of light. Well, relativity would be a critique of that: it wouldn't be
possible, and so that could be ruled out. But it's not as though he always has to begin with
some particular set of facts,
and then from that build up. If he has a problem, namely, if there's a problem anywhere in science,
then you solve that problem without being too concerned about what else is beneath it.
Again, there's a point where Sam gets into feelings, and so David comes
back with: well, consider someone like Isaac Newton. Isaac Newton would have been quite happy
when he was solving problems. No doubt the state of mind he was in was just as happy as anything
people today have: just as happy as when David invented the theory of quantum computation,
or when Edward Witten solves a problem in string theory. Newton would have felt that when he found the
universal law of gravitation, let's say. However, his state of comfort would have been
god-awful; it would have been terrible. His clothes would have itched, his food would have been
terrible, his bath would have been cold, he would have been cold. There would have been a whole
bunch of reasons why he would have been uncomfortable compared to us. And so it can't be due to
comfort or pleasure, or the absence or presence of pain, that this idea of happiness can be
cashed out. No, happiness is kind of independent of those things; it's about solving your problems,
whatever your problems happen to be. And it doesn't need to be this snobbish thing where you
need to find the next universal law of gravitation. It can be anything in your own personal life;
all problems are parochial, it's just whatever you happen to be interested in. So long as
you're solving them, and so long as the problems are interesting and worthwhile, that's what
will make you happy. They don't need to be this profound kind of thing.
And it doesn't need to be anchored to creature comforts. In the far distant future, people will
look back and think how uncomfortable we are. Here I am sitting in this silly chair. Maybe in the
distant future people will look at a video like this and go, oh my god, they used to sit in chairs
like that, how ridiculous; we're now floating around on clouds. Those poor people!
Sam says at some point that there might be aliens out there that have available to them,
states of mind, states of pleasure, that we do not have available to us. The biology of our mind
might foreclose certain states. And again, this is a profound misunderstanding of David Deutsch's
conception of what a person is, his discovery that a person is a universal
knowledge creator. Given that our minds are universal, there can be no such state, because our minds
can access any state. It doesn't depend upon our biology. Sam insists at this point in the
conversation that it does depend upon our biology. But it doesn't and it can't because our minds
are substrate independent; we could one day be downloaded into computers. So we are not foreclosed
from having certain experiences. Our minds are already universal. We don't even need more
processing speed or more memory in order to do this. Our minds are universal. And if we
did need more memory, well, then we could hook ourselves up to a computer; I don't find that
completely implausible. And David points out that discoveries in morality must always create more problems.
Sam mentions that, well, this is just a local wrinkle. Again, this idea that in the distant future,
Sam has this conception that we will be in a less problematic state than what we're in today.
And it's just wrong. He thinks that we'll be almost there. We'll just be ironing out the wrinkles.
Again, this is an anti-fallibilist notion; he doesn't really take seriously The Beginning of
Infinity, that even then we'll be at the beginning of infinity. Even then, when he thinks that we'll
be almost at the peak, like David says, well, we will just see more problems from that point.
There'll still be existential problems. How can we get rid of all the existential problems? How can
we stop all the stars exploding? Well, maybe we can. But once we do that, how can we stop the
universe expanding and so on? The problems will always be there. We'll always discover something new.
And when you discover something new, you end up creating, well, you end up finding a whole
bunch more problems. And there's this funny moment where David says
he doesn't see a reason why there should be a limit on the size of a mistake we can make and
Sam laughs at that. But I think that Sam's kind of laughing nervously. It's like a reflex
because he doesn't have an answer to that. He kind of gets it. He's a very intelligent person,
and he understands that, indeed, there's no maximum size to the mistakes we might make, even in
the future. And so things could go terribly wrong, even then. And so I think he kind of almost
gets there and realizes that, oh, you can't really be on a peak. Because if you're on a peak,
then you've kind of solved everything, and that's not possible; fallible people can still make
mistakes. With ten minutes to go in the conversation, Sam's still
asking what the disagreement is. He mentions morality as being a navigation problem. David says,
well, we'll change what we mean by better and worse. So even if we think right now we should go this
way rather than that way, in the future we might realize that that was a terrible error, that
we could change our minds about what is right and wrong. And David also says that neuroscience
can't be very relevant because, and Sam agrees, but David explains again that it's because
the brain is universal. And I'm not sure that Sam quite understands what that means. And this is
a subtle point. This is very difficult. I don't know that many people really have comprehended
the significance of that. The brain is universal. People are universal. Human beings are
people precisely because they're universal knowledge creators and universal explainers, they're
creative. That's what it means to be creative. And David points out again that if we were in a
matrix like Sam imagined earlier on, then no moral question would have anything to do with science
because that Matrix, this program, this fake program that we'd all be uploaded into,
wouldn't have physical laws that are instantiated in physical reality; it would instead be
based upon simulated physical laws. So any decision you made there wouldn't have anything to
do with science; it would have to do with the laws that are instantiated inside the program,
whatever the programmer had decided to put in there. So if you think that morality is anchored
to science, then you'd have to admit that at that point morality becomes anchored to the
rules of the program, the rules of the computer game or the Matrix that you're in. And again,
that can't be true because morality has an objective reality beyond the matrix, beyond
our physical reality as well. Even though truths about physical reality might at times be
relevant to morality, morality can't be derived from the laws of physics or
from the laws of neuroscience or anything else. And Sam says he wants to talk more about that
another time, so I'd love to hear that conversation. You should tune in to just the final
two minutes of that particular podcast, by the way; there's a good laugh, a good joke.
So I hope that went some way to trying to tease out what the differences are. I think there
are a number of differences there, not least of which is foundationalism, and this conception
of what morality is: that the moral theories we often talk about, like natural law and
utilitarianism and the golden rule, etc., should really be seen as critiques, critiques to use
in order to find out what is wrong with other theories or with other proposals within the
domain of morality. But morality is about
solving moral problems, just the way that science is about solving scientific problems.
I hope that was useful.