So I'm going to talk a bit about some new research I'm doing on constructor theory, which has rather large potential scope, but it's still at a very early stage. But I was encouraged to talk to you about it.
In many cultures all over the world, there was an ancient myth of a cornucopia, a supernatural device producing an endless stream of food. It was a quintessential wish fulfillment fantasy of people who faced our species' unique primal dilemma: toil, meaning unpleasant physical work, or starvation. There were many other wish fulfillment fantasies, such as flying, space travel, transmutation of elements, and immortality.
People tried to achieve those from time to time, but strangely, nobody ever tried to make a cornucopia, unless you count the agricultural revolution, perhaps 12,000 years ago, which increased food security, though it may have increased toil as well. It was the scientific revolution that definitively provided an exponentially increasing amount of food and drastically decreased toil, so much so that in high-knowledge economies the primal dilemma has now been eliminated.
As a result of such success, wish fulfillment fantasies have become a lot more ambitious, as we've heard at this meeting. Yet the idea of achieving universality in making things, eliminating all toil through technology, was remarkably slow to take hold. Even with the golem, a fantasy of medieval Jewish mystics, its universality was not recognized: there was only ever one golem, and one cornucopia. That changed only with 20th-century science fiction, when Karel Čapek introduced the term robot, from a word meaning toil. Ever since, science fiction has imagined technological cornucopias, sometimes called universal constructors, which I'm going to talk about: machines capable of producing an endless stream of not just food but any physical objects the user asked for, without any need for toil.
In 1933, the science fiction writer Laurence Manning envisaged what he called a production device, working by transmutation and nuclear power. He described the characters in his story feeding the device by shoveling earth and gravel into hoppers. But shoveling is toil, and so is mining nuclear fuel.
In general, a universal constructor would operate by being programmed to make, as the sci-fi authors David Gerrold and Larry Niven put it, the tools to make the tools to make the tools, and so on. And those programs could not be written unless the knowledge of how to do all that had first been created. That is irreducibly a task for creative beings, people, not obedient automata.
I have been working on a proof, from the principles of constructor theory, that a universal constructor is possible. Why would I want to do that? Turing didn't. His universal computer, besides its paper tape, required a processor embodying very complicated rules in its physical structure, assumed able to run indefinitely without maintenance and with an unlimited supply of paper and pencils; so, in general, with a lot of toil from its operators that is not included in the theory. Nor did my theory of the universal quantum computer prove that one can be built, and to this day there are people who think it can't. Nor did John von Neumann, whose replicator-vehicle construction captured the logic of all living things, prove that living things can be made. We know that they can evolve, but it's not self-evident that everything that can evolve can be made reliably, along with the tools to make it and so on, by a universal machine that can be programmed to build tools to arbitrary depth, maintain itself, find raw materials and energy, and so on.
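Von Neumann's logic is worth pausing on. Here is a minimal toy sketch in Python, entirely my own illustration rather than his formalism: a machine carries a description tape, and reproduction is construction from the tape followed by verbatim copying of the tape, which is what avoids an infinite regress of descriptions of descriptions.

```python
# Toy illustration of von Neumann's replicator logic (not his formalism):
# a machine M = (constructor, tape), where the tape describes the constructor.
# Reproduction = construct offspring from the tape, then copy the tape into it.

from dataclasses import dataclass

@dataclass
class Machine:
    blueprint: str  # the description tape; here just a spec string

    def construct(self, description: str) -> "Machine":
        # Universal construction step: build whatever the description says.
        # Crucially, this step does NOT copy the tape (the toy ignores the
        # description's content and just builds a blank-tape body).
        return Machine(blueprint="")

    def replicate(self) -> "Machine":
        offspring = self.construct(self.blueprint)  # build body from tape
        offspring.blueprint = self.blueprint        # then copy the tape verbatim
        return offspring

parent = Machine(blueprint="spec: constructor + copier + controller")
child = parent.replicate()
assert child.blueprint == parent.blueprint  # heredity without infinite regress
```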
That's a measure of why we're still quite a way from building an artificial self-reproducing automaton, a von Neumann machine. For all that those fundamental theories know, the physical universe could be like a basic Lego set: you can build small models of almost anything, but they're not scalable. In such a universe, there'd be no universal computers, no universal constructors, and presumably no life.
Why can't these theories prove from the laws of physics that their respective universal machines can exist? The basic reason is that existing laws of physics are very poorly adapted to proving that something with given properties can be caused to exist, short of presenting a specific, testable design. And that is because, in the prevailing conception of fundamental physics, there are laws of motion and there are laws setting initial conditions, and between them those determine everything that happens. So in that conception there's no such thing as what could happen: everything either happens or never happens.
No such things as counterfactuals, in other words. Yet any universal device is necessarily judged by its counterfactual properties. You don't buy a computer to do only the computations you will actually use it for; you buy it because it could perform any possible computation, an enormously larger set.
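To make "judged by its counterfactual properties" concrete, here is a toy contrast of my own devising, not an example from the talk: two devices with identical actual behaviour today but utterly different repertoires of possible behaviour.

```python
# A device's counterfactual repertoire: what it COULD do, not what it does.
# Toy example: a fixed adder versus a minimal programmable evaluator.

def fixed_adder(x: int, y: int) -> int:
    return x + y  # the only task this device can ever perform

def programmable(program: str, x: int, y: int) -> int:
    # A deliberately tiny programmable device: its value lies in the set of
    # programs it could run, not in the one it happens to run today.
    ops = {"add": lambda a, b: a + b,
           "mul": lambda a, b: a * b,
           "pow": lambda a, b: a ** b}
    return ops[program](x, y)

# Both produce the same actual behaviour today...
assert fixed_adder(2, 3) == programmable("add", 2, 3) == 5
# ...but only the programmable device has the counterfactual property
# of being able to multiply, exponentiate, etc., if asked.
```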
This is also why the future differs from the past, even though the laws of motion are time-reversal invariant, and it's basically for the same reason: knowledge. The future is affected by knowledge that we don't have yet. And the past? The same is true: you can't retrodict historical events in fine detail, because the relevant knowledge has been lost. In contrast, there could be a law of constructor theory stating, for example, that a universal computer is possible (that's one side of the possible/impossible dichotomy), which also tells you that the constructor that builds it is possible, and the constructor that builds that, and so on, back to the Big Bang, without specifying the exact state even in principle.
This also illustrates that constructor theory doesn't distinguish between microscopic and macroscopic objects or laws, because entities in constructor theory are defined by their input-output possibilities and impossibilities, not by how they are constituted microscopically. Unlike their analogues for computation and life that I've mentioned, universal constructors are not built out of elementary construction primitives analogous to bits, qubits, and replicators.
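One illustrative way to encode "defined by input-output possibilities" is the following toy of mine, with made-up attribute names; it is not constructor theory's actual formalism: a task is a set of ordered (input attribute, output attribute) pairs, and tasks compose regardless of what any performer is made of.

```python
# Toy encoding of constructor-theoretic tasks as input->output attribute pairs.
# (Illustrative only; the theory's real formalism is more refined.)

Task = frozenset  # a task: a set of (input_attribute, output_attribute) pairs

COPY = Task({("blank", "data"), ("data", "data")})     # e.g. copying a tape
ERASE = Task({("data", "blank"), ("blank", "blank")})  # e.g. resetting it

def serial(t1: Task, t2: Task) -> Task:
    """Compose tasks: perform t1, then t2, wherever the attributes chain up."""
    return Task({(a, c) for (a, b1) in t1 for (b2, c) in t2 if b1 == b2})

# An entity is characterized by which tasks it can perform, not by its
# microscopic constitution: anything that performs COPY then ERASE performs
# their composition, whatever it is made of.
print(sorted(serial(COPY, ERASE)))  # [('blank', 'blank'), ('data', 'blank')]
```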
Universality for constructors is therefore radically different from that of computers. As I said, I've been working on a proof, from the principles of constructor theory, that a universal constructor is possible. For example, those principles imply that only finitely many kinds of objects that could be needed in constructions can possibly form spontaneously, i.e. without knowledge. My proof is still quite inchoate, but it's basically that everything complex either evolves, in which case it can be made given the right knowledge, which we might not have, or can be made by a constructor which can itself be made, and so on down a chain of finite depth, using knowledge that can itself be created.
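The shape of that argument can be sketched as a finite-regress check. The following toy, with entirely made-up objects and dependencies, is mine, not the proof: walk each object's made-by chain and confirm it bottoms out, in finitely many steps, in objects that can form spontaneously.

```python
# Toy check that every object's "made-by" chain bottoms out in spontaneously
# forming objects within finite depth. (Hypothetical data, not the proof.)

SPONTANEOUS = {"rock", "ore", "sunlight"}  # things needing no knowledge to form

# made_by[x] = what is needed to make x; purely illustrative dependencies.
MADE_BY = {
    "furnace": ["rock", "ore"],
    "iron": ["furnace", "ore"],
    "lathe": ["iron", "furnace"],
    "universal_constructor": ["lathe", "iron"],
}

def chain_depth(obj: str, seen=frozenset()) -> int:
    """Depth of the construction chain below obj; raises on circularity."""
    if obj in SPONTANEOUS:
        return 0
    if obj in seen:
        raise ValueError(f"infinite regress at {obj!r}")
    return 1 + max(chain_depth(p, seen | {obj}) for p in MADE_BY[obj])

print(chain_depth("universal_constructor"))  # 4: finite depth, no regress
```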
Knowledge is central to the story.
The fact that it can build countless instances of itself will make any parallelizable job completable in essentially linear time (there's a toy calculation of this after this paragraph), quite different from the polynomial equivalence classes of computational complexity theory. But those machines won't be nanotechnology, so they won't mutate and decide to take over or infest the world, because they'll have error-correcting hardware that can correct errors reliably until the universe ends. So they will be obedient; they wouldn't be universal if they weren't. And they won't have AGIs, artificial general intelligences, in them unless someone programs them with one. That brings me to knowledge creation and all other information processing, and to the need for proper laws regulating advanced information processing: laws that enhance progress rather than strangling it. All existing proposals I've seen do the latter, because they compulsively confuse AI with AGI, which are almost opposites, and information with knowledge.
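For the timing point referenced above, here is a back-of-the-envelope model with assumptions entirely mine (unit replication time, perfectly parallelizable work, no coordination overhead): a single machine needs time proportional to the job size W, whereas a fleet that first doubles itself to roughly W machines spends almost all its time replicating rather than working.

```python
# Toy model: time to finish W units of perfectly parallelizable work.
# Assumptions (mine, for illustration): one time unit per replication,
# one time unit per unit of work, no coordination overhead.

import math

def single_machine(W: int) -> float:
    return W  # no copies: strictly serial

def replicating_fleet(W: int) -> float:
    # Double the fleet until there are at least W machines, then work in parallel.
    doublings = math.ceil(math.log2(W))
    fleet = 2 ** doublings
    return doublings + W / fleet

for W in (10**3, 10**6, 10**9):
    print(f"W={W:>10}: serial={single_machine(W):>12}, "
          f"fleet={replicating_fleet(W):6.2f}")
```

However one reads "essentially linear", the contrast with complexity theory's polynomial equivalence classes is the point: self-replication converts job size into fleet size.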