An interview with Gregory Chaitin

Is metabiology, in a sense, a continuation of your previous research?

Yes. Emil Post was a mathematician who did very important work at the same time, roughly, as Turing and Gödel, in the 1940s; very much along the same lines, though he’s sort of forgotten. And he used to say that what incompleteness and the unsolvability of the halting problem implied is that Mathematics is fundamentally creative, it has to be and will always be. And then, if you take a small step, and say: “well, viewed from a great distance, creativity in biology is not that different from creativity in Mathematics”, which is my position, you see that the work of Gödel, Turing and Post opens a door to evolution, to biological creativity, to a theory of evolution. That’s, at a great distance, what the idea is.

Another way to put it is that incompleteness and the unsolvability of the halting problem show that mathematics has infinite complexity and, therefore, is in a sense even more biological than biology itself, because biology is very complicated but only has finite complexity, whereas mathematics has provably infinite complexity, as is shown by the bits of the halting probability, which are irreducible information. So that’s also a door from pure mathematics into biology. I started developing this hint and now I have the beginning of a mathematical theory of evolution along those lines.

Another hint is the idea that DNA is what computer scientists call a universal programming language. There’s this notion that goes back to Turing, in 1936, of universality, which says that even though axiomatic theories are always incomplete, programming languages can be complete, universal, which means that anything that can be written in a programming language can also be written in a universal programming language. Those are the most powerful languages and, presumably, DNA is at that level, it’s a language which can express any algorithm. So that’s another clue, and another one is just a technological observation: the two most important technologies at this moment are computer programming technology and molecular biology. One is already fairly developed, the other is starting to be so. If you look at the DNA of an organism, it’s basically a very large piece of software: the human genome is of the order of a gigabyte of information. And now we have libraries where we keep the genomes of species: I once saw one, and there were about 300 species; by now, they probably have thousands and thousands. They have the entire genome, which is this very big program in a very low level language, which we don’t understand too well. And, on the other hand, we have computer programming languages. Now we have digital software in the natural world, which is biology, and in this artificial world, which is computers. And they are very similar, in a way, so the idea is to take advantage of this analogy and use it to develop a mathematical theory looking at random mutations on computer programs, which hopefully will have something to do with random mutations on DNA. This is the idea of what I’m calling “metabiology.”
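The gigabyte figure can be checked with back-of-envelope arithmetic; the 3.2 billion base-pair count below is the usual textbook figure, not a number from the interview:

```python
# Back-of-envelope check of "the human genome is of the order of a gigabyte".
# Assumption: ~3.2 billion base pairs, the standard textbook figure.
BASES = 3_200_000_000
BITS_PER_BASE = 2                 # four symbols (A, C, G, T) = log2(4) bits
size_bytes = BASES * BITS_PER_BASE // 8
print(size_bytes / 1e9)           # 0.8, i.e. roughly a gigabyte
```

So two bits per base gives about 0.8 gigabytes of raw information, which is where the "order of a gigabyte" comes from.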

How much have you advanced in this direction?

I’ve been working on this for about two years: two years ago, I had the idea that this was a good direction for research, but there was a problem, there was a barrier: I couldn’t get a reasonable mathematical theory, I couldn’t really prove serious theorems, such as the theorem that evolution works. Around late August I had a breakthrough, and now this new theory, metabiology, is connected very closely to algorithmic information theory and it has all the elegance of that theory. In particular, the start of algorithmic information theory is the halting probability Omega, a fascinating number which shows that pure math is infinitely complicated, but the key point is that algorithmic information theory, in a very natural way, assigns probability to things, such as the probability of halting. In the same way, it gives a natural assignment of probabilities to mutations. The key idea is: what is a mutation? And originally I was working with very low level mutations, what are called “point mutations” in biology, which basically means you’re changing one or more contiguous bases in the DNA. As you know, DNA is a language with four symbols, A, C, G and T, the four bases. So, originally I was working with these “point mutations”, which meant changing a number of contiguous bits, and I got some results, but mathematically it wasn’t going very well. It was ugly, but that usually happens with pioneering work. So at the end of August I realized that what I needed was to make the notion of mutation very general, and now a mutation is an arbitrary function which takes as input the organism and produces as output the new organism. It’s an arbitrary function, but it should be a simple function: it’s only going to be very probable if it’s very simple to program.
That includes inserting, changing or deleting a few contiguous bits -those are simple mutations in the new context- but it also includes transformations like flipping every bit in the program, which is not a very useful transformation but a very simple one, very probable if you’re taking mutations at random. So algorithmic information theory, as well as giving you a halting probability, also gives you a natural measure on the space of all possible mutations. And that’s what I’m using now, this very high-level notion of a mutation. Now, I’m not sure what the mutations are like in real biology: there are higher-level mutations and point mutations, both happen: one is called recombination, when you mix pieces from the father and the mother; another mutation the biologists talk about is to copy an entire gene and then make changes in one of the copies -that seems to be something that happens frequently- and these are fairly high-level transformations of DNA.
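A toy sketch of this high-level notion of mutation. The two mutations and the "description lengths" below are invented for illustration; real program-size complexities are incomputable, so these numbers are stand-ins, not Chaitin's actual construction:

```python
import random

# Toy sketch: a mutation is an arbitrary function on bit strings, and its
# probability falls off as 2**(-size of the program that computes it).
# The description lengths below are invented stand-ins.

def flip_run(bits):
    """Flip a few contiguous bits (a low-level, point-like mutation)."""
    i = random.randrange(len(bits))
    j = min(len(bits), i + random.randint(1, 3))
    middle = ''.join('1' if b == '0' else '0' for b in bits[i:j])
    return bits[:i] + middle + bits[j:]

def flip_all(bits):
    """Flip every bit: drastic, but very simple to program."""
    return ''.join('1' if b == '0' else '0' for b in bits)

MUTATIONS = [(flip_run, 12), (flip_all, 4)]   # (function, assumed size in bits)

def sample_mutation():
    """Draw a mutation with probability proportional to 2**(-size)."""
    weights = [2.0 ** -size for _, size in MUTATIONS]
    return random.choices([f for f, _ in MUTATIONS], weights=weights)[0]

organism = '10110100'
organism = sample_mutation()(organism)
print(organism)
```

Note how the simpler function (`flip_all`, 4 bits) ends up far more probable than the point-like one, exactly the "very probable if it's very simple to program" effect described above.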

Another way to talk about the models that I’m using is to go back to Dawkins. In The selfish gene, he says that organisms are a gene’s way to reproduce itself. So in my model all I have are the genes, basically, I’ve thrown out the organism. I’m doing, in a way, theoretical physics: in order to prove results about biology, I need to simplify biology to a toy model -physicists are very comfortable with it, they do it all the time. The theoretical biology that I’m proposing, that I call metabiology, is much further from real biology than theoretical physics is from physics. Theoretical physics and physics are very close: that’s because physics is very mathematical. Now, biology doesn’t look mathematical at all, it looks like a place where mathematics really doesn’t apply too much, so I have to go further from biology, to this toy model I call metabiology, in order to be able to prove things. Therefore, this theoretical biology that I’m proposing and that is starting to work is further from real biology but, hopefully, as it develops, we’ll get a better feel about whether it’s relevant to real biology or not. Initially, the toy model that I have is a model which works beautifully mathematically, but it’s far removed from real biology: one example is that I don’t care if the programs that I have in my model -which is the DNA that I have in my toy model- take a long time to run, as long as the execution time is finite. That’s unrealistic in real biology, but it makes my model work better mathematically. So initially I’m trying to see the ideal case of evolution. From a mathematical point of view maybe I found it, I’m not sure yet, but it looks encouraging. And then, one could try to do more realistic versions of the theory, closer to real biology, and I do actually have two other versions of the theory. 
One of them is lower level: in that version, where I have simpler computer programs -I don’t allow arbitrarily complex ones- I can prove they will get a more and more hierarchical structure over time, which is one step closer to biology, but still a situation where it’s easy for me to prove things. Now, it’s possible that in order to get really close to biology, initially one would have to work doing computer experiments, and I’m not sure if one would be able to prove things. The kind of thing you would want to do is make the computer programs be very fast, limit the execution time. To do that, one would need to do computer experiments, but I suspect it would be hard to prove theorems, so one would have to stick to experimental work. So it looks like there’s going to be a spectrum of possible theories in metabiology: the ones that are most beautiful mathematically will be the furthest removed from real biology. From an epistemological point of view, what this suggests is that theoretical biology is possible along these lines, but it’s going to be a little more theoretical than theoretical physics, it’s going to be further from reality.

Another thing one can discuss about this new approach -though it may be premature- is creativity, biological creativity, rather than competition, survival of the fittest, and “Nature, red in tooth and claw.” I’m mentioning this because Darwin’s theory of evolution has been used to justify a very brutal free market, and my theory, which is trying to emphasize biological creativity, which is not emphasized in the normal way of looking at Darwin’s theory of evolution, goes in a different direction. From this new point of view, I would not measure the value of a society in dollars, I wouldn’t talk about its GNP, I would rather look at the creativity of a society, at the new ideas -scientific, social, technological, artistic- that it comes up with. So, from this point of view, if you’re looking at creativity, ancient Greece is good, ancient Egypt is not good; Renaissance Italy is good, China is not good. You notice that China or ancient Egypt are much more stable than, say, the anarchic ancient Greece, where all the cities were fighting each other, or Renaissance Italy, which was also divided into many small principalities and duchies. So a very strong central control over a very big geographical area, as China has and ancient Egypt had even more so, is very good for stability, but it’s not very good for creativity; for creativity you need more anarchy, I think, and a situation where individuals can have an effect, like the small Greek city-states or Renaissance Italy. So, from this point of view, if you’re emphasizing creativity, which is my main interest, I would say that the European community seems like a bad idea: it may be a wonderful idea economically or politically, say, if you want to compete with the United States, but I suspect it will have a bad effect on creativity because, for that, it’s better to have each nation in Europe doing its own research without a central control from Brussels.
Similarly, from the point of view of creativity, it would be better to divide the United States into fifty separate states. Of course, this would have a lot of undesirable consequences and I don’t think it would ever happen, I’m basically joking!

What do you think about experiments with simulated life, in which computer programs have been made to evolve in virtual environments?

There have been a number of experiments like this: the first one, Core Wars, actually comes from a computer game of the same name; then there’s Tierra, created by Thomas S. Ray, which was the first interesting situation: it developed parasites, hyper-parasites, etc. And there have been more recent versions of this kind of stuff. My comment on that is: yes, this is very interesting work. What you are doing is trying to simulate evolution. But two comments on this: first, you will never prove any theorems. I’m trying to prove theorems, I’m trying to prove evolution will continue, indefinitely, ever onward and upward. And the other comment is that some of these models work very well for a while, but the problem is that they eventually stagnate, basically when the organisms adapt well to the environment. And it’s not clear how to get evolution to continue: you can do it endogenously or exogenously, to try to create a situation where evolution will continue, rather than stagnate. There has never been a computer experiment suggesting an evolution that will continue, they all come to an end when the organisms adapt very well to the situation they’re in. For example, the original experiment Tierra, which attracted a lot of attention, was done first in the States, and the Japanese invited the creator to do a much bigger version in Japan, where he did this enormous simulation with a lot of computing power. And it actually behaved less well than the original version! What happened was he started off with an organism and, instead of getting more complicated organisms, parasites and hyper-parasites, it turned out that a much simpler organism would reproduce itself very quickly, so the original one degenerated and, once it found this very simple, very fast way to reproduce, it got stuck there.

More work along these lines was done at Caltech by Chris Adami. They used a system called Avida, which I think is the one you’re referring to, and a new version of it appeared in New Scientist recently.

In conclusion, I think this work is very interesting and I think some work should also be done along the lines I suggested. Some experimental work would be helpful in metabiology. But you can only go so far with experimental work done only on the computer. I would like proofs. Experimental work can be very convincing -biologists are convinced that evolution works- but I would just like to try to understand it mathematically a little. I think that, if evolution is as fundamental, as basic as biologists think it is, there ought to be a basic mathematical theory. At this point, metabiology is still a candidate for such a theory. It does look promising; that could change in the future, but I think the initial work is encouraging and I’m hoping to interest some other people in working on this. I’m convinced that it’s promising at this point, but to convince people to go into a new area is difficult, because it doesn’t have journals, it doesn’t have funding… it is a problem, in the current research environment, but I think it’s actually a very promising area to work on.

It’s also different from genetic algorithms. It’s analogous, but then again, in genetic algorithms you simulate evolution in order to try and solve an engineering problem. And the evolution stagnates on the solution, on a relatively good solution. You are not proving theorems and, also, you don’t get evolution continuing, the evolution will stop when you get reasonably close to a good solution to your engineering design problem. But genetic algorithms are related to what I’m proposing. I was talking in Salvador with someone who works on genetic algorithms and she was very interested in my research, because it’s close to what she’s doing. She works on Artificial Intelligence in Brazil. I go around the world lecturing on this, hoping to interest people, but giving talks is certainly good for me, even if nobody gets interested, because people ask questions, they make suggestions, and the ideas develop that way. I’m hoping at some point some other people will get involved, but that has not happened yet.

Has your research helped you to reach any definition of life?

Yes, there is a definition of life implicit in this model. Let me say, by the way, this model does not deal at all with the origin of life, with the creation of life. Basically, in the model I have you start off with life present immediately. And what you study is how it evolves, how it increases in complexity. You study biological creativity. That’s because in my model I’m already assuming, essentially, DNA, because when I assume a universal programming language I’m essentially assuming DNA and all the related mechanisms are there already. So I already have life: what happens is life gets more sophisticated, it evolves. Now, let me contrast that a little bit with population genetics. Dawkins, in The selfish gene, calls Ronald Fisher “the greatest biologist since Darwin”, and he says that because Fisher has a mathematical theory of evolution. And this theory, called population genetics -Fisher, Wright, Haldane, many people worked on it- studies how gene frequencies change in response to selective pressures, and it’s a very nice theory, I have nothing against it, it’s a beautiful theory, but in this theory you start off with a fixed gene pool, and what you study is how frequencies change. So you don’t study where new genes come from. And that’s what interests me, biological creativity. Also, from a mathematical point of view, population genetics is very beautiful and uses very well-known mathematics, differential equations, whereas I’m using a very new kind of mathematics, starting off with Turing in 1936, computability theory and, in particular, algorithmic information theory. So this is a newer kind of mathematics, and basically what I’m working on is the idea that DNA is essentially a digital programming language, which is a metaphor that is mentioned a lot: evo-devo, evolutionary developmental biology, talks about DNA as a programming language that calculates the organism, that specifies how to create the organism through embryo development.
So that’s a very well-known metaphor and I guess what I’m adding to it is saying: “we have an opportunity here to try to create mathematical models and prove things.” Now, most of the current work that I’m aware of in modeling biology is very different from what I’m doing. Systems biology, which is the fashionable thing to do now, tries to have computer models of biological systems which are very detailed and very realistic. And this is very useful: for example, the hope would be to test the effect of a drug via computer simulations, rather than on living organisms. This is a very hot subject a lot of people are working on. But I think it’s hopeless to prove theorems with systems biology, because you have very complicated, detailed models, which is good for doing a simulation but is not good for proving things. This is sort of an epistemological issue of how pure mathematics works. I could give you some amusing quotes to make a point that theoretical physicists are well aware of: toy models are useful to understand physical systems, because the real systems are too complicated. One way to make this point is to quote Picasso: he made a remark, the English version of which goes something like “Art is a lie that helps us to see the truth”. And I modified it slightly: “theories are lies that help us to see the truth.” I can also quote John Maynard Smith, a theoretical biologist, and Jacob Schwartz, a mathematician. I had the good fortune to meet John Maynard Smith, by the way, before he died, and Jacob Schwartz was a friend of mine -he’s also dead, unfortunately. And they both have very quotable remarks. One of them, by Maynard Smith in his book The origins of life, says it’s a mistake to think that very complicated models are useful in biology: very complicated models only have the effect of confusing you. You need to work with simple models, otherwise it’s hopeless to try to understand their behavior.
I can give you another quotation, from an essay by Jacob Schwartz called The pernicious influence of mathematics on science: in it, he says that pure mathematics is not good at dealing with real situations, which are normally complicated and have several things going on at the same time. Pure mathematics works best when you’re studying a single phenomenon. Pure mathematics is, to put it one way, single-minded: it works best when you take a simple idea and elaborate its consequences, but it does not work very well in a more realistic situation where more than one thing is going on at the same time. Then pure mathematics tends to get lost in a jungle of combinatorial complexities. You can find these two quotations at the very beginning of the lecture notes on metabiology posted on my website.

The notion of creativity is central in all your work. What do you understand by it?

Well, I don’t know! I think it’s very interesting to try to understand creativity better. Part of what I understand by it is something that cannot be done mechanically, something that cannot be done automatically or routinely. So, in a way, for me it’s necessarily an incomputable function. For example, when Turing talks about the halting problem, he proves there is no general procedure, no general-purpose algorithm, that is, no mechanical way to solve that problem, which means it’s a problem that requires an unlimited amount of creativity. We could describe it using Feyerabend’s language -Paul Feyerabend never talks about these things, but he has some colorful, strong ways of referring to this- and the title of his book, Against method. That is a theorem of computer science, Turing’s 1936 theorem that there is no algorithm to solve the halting problem. So that means that you need to use different methods. There is no single method that will solve all cases. Similarly, Gödel’s incompleteness theorem shows that, even in elementary arithmetic, there is no axiomatic theory which will answer all possible questions. If there were, that would give you a mechanical procedure to answer questions in elementary number theory, because you could just run mechanically through all possible proofs in a formal axiomatic theory. So both of these theorems state that there are no general-purpose methods in pure mathematics, that mathematics is rich. My version of incompleteness, the way I put it, is that pure mathematics has infinite complexity, in a sense that algorithmic information theory defines more precisely. Essentially, I can prove that pure mathematics is infinitely complicated. Any formal axiomatic theory only has finite complexity and, therefore, is incomplete, it cannot encompass all of pure mathematics. Another way to put it is that solving the halting problem is infinitely complicated: no single algorithm of finite complexity will work.

You can take these two results by Turing and Gödel pessimistically and say they are a slap in the face of pure mathematics and even of pure thought. But I think the right way to take them is optimistically, the way Emil Post initially took them, and say these results are opening a door in pure mathematics to the very important issue of creativity. They’re saying creativity is essential in fundamental mathematics, it plays a fundamental role, and they’re starting to give us hints about how to understand creativity. Turing has a paper where he talks about oracles, using an oracle for the halting problem. Using an oracle is a little bit like divine inspiration: getting a yes/no answer from an oracle is like one bit of creativity, because the oracle can answer questions that you cannot answer mechanically. I’ve always been fascinated by this question, though I wasn’t working directly on it. But now, with this theory of evolution, I’m actually using incompleteness and incomputability in order to force evolution to go on forever. I need to face my organisms with a challenge that requires an infinite amount of creativity, and pure mathematics, with the work of Gödel, Turing and my own, gives us a mathematical problem that, if we challenge the organisms with it, makes evolution go on indefinitely. It’s a first step, but creativity is a very deep, important question. Certainly, if you look at mathematicians like Euler or Ramanujan, whose creativity seems really breathtaking -especially Euler- it’s hard to think of a rational explanation. Somehow, Euler seems to go straight to the source of new ideas. This may sound a little mystical, but creativity is mysterious. Some mathematicians say that thinking about things which are incomputable is mysticism, but I don’t think so. I disagree: I think you need to go beyond incompleteness and think about creativity. And prove whatever we can prove.

So, in a way, what you’re studying is the emergence of creativity through biological evolution.

Yes, that’s what I’m studying. I’m forcing my organisms to be creative by the fitness measure I use in my metabiological model, which is very simple: it’s just a random walk in software space, a hill-climbing random walk in fitness space. Another key intellectual issue if you want to come up with a theoretical biology is: what is the space of organisms? What kind of mathematics should we use for the space of all possible organisms? And I think the only space which is rich enough would be the space of all possible algorithms, of all possible computer programs. In population genetics, all we’re looking at is a fixed gene pool and the frequency of each gene in the population, and that’s not a very rich space of possibilities to model evolution and creativity. It models in a very detailed way some aspects of evolution that are very interesting, but it does not deal with creativity and where new genes come from. You have a finite set of genes in that model.
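A minimal sketch of such a hill-climbing random walk. In the real model the organisms are arbitrary programs and comparing fitnesses requires an oracle for the halting problem; in this toy version the "programs" are just bit strings read as integers, so everything trivially halts:

```python
import random

random.seed(0)  # reproducible toy run

def fitness(program_bits):
    """Toy fitness: the number the 'program' computes (its integer value)."""
    return int(program_bits, 2)

def mutate(program_bits):
    """Flip one randomly chosen bit (a simple random mutation)."""
    i = random.randrange(len(program_bits))
    flipped = '1' if program_bits[i] == '0' else '0'
    return program_bits[:i] + flipped + program_bits[i + 1:]

organism = '00000000'
for _ in range(1000):
    candidate = mutate(organism)
    if fitness(candidate) > fitness(organism):  # hill-climbing: keep only improvements
        organism = candidate

print(organism, fitness(organism))
```

The walk steadily climbs toward fitter bit strings; the point of the toy is only the shape of the process -- mutate at random, keep what is fitter -- not the trivial fitness function.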

You are one of the proponents of Digital Philosophy. What does it imply?

Digital Philosophy is a kind of neo-Pythagorean view. It’s an ontology, it’s a metaphysics. And it’s sort of Pythagorean. Pythagoras said “all is number”, God is a mathematician, the Universe is built of one, two, three, four, five. And the new version of this Pythagorean idea, this new view which is Digital Philosophy, says the world is discrete, it’s built out of zeros and ones, out of bits, and God is a computer programmer. So the new version is “all is algorithm” instead of “all is number.” And a number of us are sort of enthusiastic about an approach of this kind, and each of us has a slightly different version of it, but we’re basically doing pre-Socratic philosophy. We’re saying “it would be nice if the world were this way.” But the world doesn’t have to be this way, just because we find it more beautiful intellectually. So we’re kind of doing the stuff that pre-Socratics did, saying “the world is fire”, “the world is number”, “the world is one” or “everything is change”: they were doing ontology. They were trying to understand the world by pure thought, pretty much. So that’s essentially what Digital Philosophy is, but there are some encouraging aspects in the real world, on the empirical side: there is of course computer technology, which is digital and discrete; there’s also the modern approach to Quantum Mechanics, called Quantum Information and Quantum Computation. Another thing I left out, which is part of Digital Philosophy, is the idea that the world is a giant computer program, or the Universe is a giant computer, which is calculating its future state from its present state all the time, it’s a giant calculation.
And there is a physical theory which is very fashionable now and doing very well, which is called Quantum Information and Quantum Computation, and that’s a very exciting field which is uniting Quantum Mechanics, which is theoretical physics, with theoretical computer science, which is pure mathematics, and this is sort of along the lines of Digital Philosophy. There’s also more tentative work, dealing with General Relativity, connected to work on black holes by people like Stephen Hawking and Jacob Bekenstein, which has to do with Quantum Gravity, the attempts to unify Quantum Mechanics and General Relativity. String theory is one attempt to solve this problem; there are other people who work in a more phenomenological way, such as the work on the thermodynamics of black holes, by people like Stephen Hawking. Jacob Bekenstein, using this work on the thermodynamics of black holes, tentatively arrived at the conclusion of what has now been generalized into something called “the holographic principle”, by people like Gerard ’t Hooft, a Nobel prize winner in Physics, and Leonard Susskind, at Stanford University. These are fine physicists, all of these people. And the general idea of the holographic principle is that every physical system can only contain a finite amount of information. And that comes, originally, from the thermodynamics of black holes, but you can generalize it to other physical systems. There’s a formula to calculate the number of bits, and the important thing is that this number grows with the surface area of the system, not with its volume. That’s why it’s called the holographic principle: it suggests that, in some funny way, the physical world is actually 2-dimensional, not 3-dimensional.
That part interests physicists a lot, but what interests me -because I’m a pure mathematician, so I’m further away from the physics- is just the fact that this suggests that every physical system only contains a finite number of bits of information and therefore, in a sense, that the physical universe is discrete.

This would imply that the Universe is computable.

Right. If information is finite and discrete, then these models of the world as a computation work better, because computers are discrete and they work better with finite numbers of bits, not with real numbers or field theory. In classical physics and field theory, quantum field theory, an arbitrarily small piece of space-time contains an infinite amount of information. And, as Feynman says in his little book The Character of Physical Law, that’s a little implausible. So he says, based on pure thought, that a checkerboard model of the physical universe would avoid that problem. It would create other problems, like anisotropy -there might be preferred directions- but it would solve the problem of the infinite amount of information that you need for an arbitrarily small space-time cube. This new phenomenological work on Quantum Gravity, connected with the thermodynamics of black holes, already suggests that there’s only a finite amount of information, which I view as encouraging. For example, algorithmic information theory works better if everything is discrete, it doesn’t work so well for continuous systems, because the computer program is the basis in which information is measured: it’s the size of the smallest program to calculate something, the size in bits, that’s the fundamental measure of algorithmic information. So you can apply that to physical systems if physical systems are really discrete. Using algorithmic information theory, which looks at the size in bits of a computer program, you can’t talk about the complexity of a physical system if you need continuous mathematics, because you need an infinite number of bits to calculate a continuous system.
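A crude way to see the "size of the smallest program" idea in practice: the true algorithmic information is incomputable, but any compressor gives a computable upper bound. The sketch below uses zlib purely as a stand-in compressor, which is my illustration, not part of algorithmic information theory proper:

```python
import random
import zlib

# Compressed size as a computable upper bound on algorithmic information:
# a patterned string has a short "program" (the compressor finds it),
# while a random string does not.

def approx_bits(data: bytes) -> int:
    """Upper bound, in bits, via zlib at maximum compression level."""
    return 8 * len(zlib.compress(data, 9))

patterned = b'01' * 5000                        # 10,000 bytes, highly regular
random.seed(1)
noisy = bytes(random.getrandbits(8) for _ in range(10000))  # 10,000 random bytes

print(approx_bits(patterned), approx_bits(noisy))
```

The patterned string compresses to a tiny fraction of its length, while the random one barely compresses at all: same length, wildly different (upper bounds on) algorithmic information.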

That’s the reason I like Digital Philosophy, but the question is: does God like Digital Philosophy? I mean, is the physical universe that way? Just because I would prefer it, it doesn’t mean it’s that way. But there are some encouraging signs from Quantum Gravity, thermodynamics of black holes and the holographic principle. Also, I would say that Quantum Information Theory is encouraging to a certain extent. So that’s real physics.

On the other hand, from the side of technology, the most important technology of our time is computer technology, which is completely digital and discrete, and DNA, which is clearly discrete software. That’s encouraging also, as a metaphor. So I think there’s a lot of things encouraging people to think about Digital Philosophy. There are a few of us who would like to think in more specific terms: Stephen Wolfram -the author of A new kind of science- is certainly an important man to mention; Edward Fredkin has been working on this for many years and has a website, apart from many papers; and I have my book Meta Math!; so in a way what we’re doing is pre-Socratic philosophy, although part of the reason we’re doing it is because there are some encouraging signs from physics. But it’s sort of trying to learn about the world by pure thought… which is also theoretical physics, as I’ve read somewhere. Every good theoretical physicist, in a way, is doing metaphysics. A good theoretical physicist, as Einstein says in one of his essays, is basically a reformed metaphysicist: a metaphysicist is someone who thinks he can understand the world by pure thought, which is a little extreme; theoretical physicists have that in their blood, but they know that you have to look at experiments also. They’re sort of reformed metaphysicists. You won’t come up with a new physical theory unless you’re willing to take a leap into the unknown, based on pure thought. The experiments don’t force you to come up with a new theory. They do in some routine cases, but the big jumps require a big leap of imagination.

In a computable universe, is there any room for free will?

In classical physics, there's also no free will, because it's deterministic. Although, as chaos theory shows, classical physics, even though deterministic, is so unstable, so sensitive to initial conditions, that in practice you can't know the initial conditions well enough to predict the future. After four weeks the weather isn't predictable, not even in theory, because a butterfly flapping its wings in India could affect the weather in New York. That's called "the butterfly effect." Now, when you go to Quantum Physics, supposedly there is room for free will: you can use quantum randomness and say that gives you free will, but that's not terribly useful for you. Would you like to have a friend who tosses a coin to make the major decisions in his life? So it's sort of dangerous to use randomness in order to get free will. Schopenhauer has this beautiful line: "you can do what you want, but you can't want what you want", so there's an illusion of free will. Einstein, I think, refers to this somewhere.
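The sensitivity to initial conditions described above can be sketched with the logistic map, a standard textbook example of chaos (it is not mentioned in the interview; the parameter, starting point, and perturbation size below are arbitrary illustrative choices):

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1 - x) at r = 4, a standard chaotic regime.
# The starting point 0.2 and the 1e-10 perturbation are arbitrary.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.2 + 1e-10, 50)  # a "butterfly"-sized perturbation

diffs = [abs(x - y) for x, y in zip(a, b)]
print(diffs[5])         # still tiny: the two runs agree early on
print(max(diffs[40:]))  # of order one: the runs have fully decorrelated
```

The perturbation roughly doubles each step, so after a few dozen iterations the two trajectories are unrelated, which is the toy version of "the weather isn't predictable after four weeks."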

Now, Stephen Wolfram has taken up the issue of free will, and I think he has an important idea. A New Kind of Science basically says the world is deterministic and can be run on a computer, so what is free will? Well, it can be run on a computer, but most of the time, and this is the main thesis of Wolfram's book, there are no shortcuts; that is, most of the time the only way to see what a physical system will do is to run it. Most of the time the only way to see what a computer program will do is to run it: that is a new statement of Turing's result. Stephen Wolfram calls this principle "computational irreducibility." So, even though in theory every physical system is predictable, if you had an infinitely powerful computer that you ran outside our universe, in practice the fastest way to know what it will do is to run the system itself; that is, if you want to know what will happen, do the experiment and see. So in a way what he says is that there is no free will, but it looks like there's free will because, in general, there's no fast computation that will enable you to predict what something will do. So, in that case, you may think of a physical system as either random or exercising free will. He also goes on in his book and somewhere says: the world looks like it's random, but it could all be pseudo-randomness, and we couldn't really tell the difference. The world could be deterministic and look like there was a lot of randomness, but it could actually be pseudo-randomness, like the digits of pi. If you don't know them, the digits of pi look very random, and Stephen Wolfram believes all the randomness in the physical world is like that. That's certainly possible, it's a possible world; I don't know if it's our world. So you can regard the work on Digital Philosophy, on Digital Physics, as theoretical physics of possible worlds, not necessarily of our world.
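Wolfram's canonical example of deterministic pseudo-randomness in A New Kind of Science is the rule 30 cellular automaton, whose center column looks statistically random even though every bit is computed by a fixed rule. A minimal sketch (the step count and grid width here are arbitrary illustrative choices):

```python
# Rule 30 cellular automaton, Wolfram's standard example of
# pseudo-randomness: a deterministic rule whose center column
# nevertheless looks statistically balanced.

def rule30_center_column(steps):
    """Run rule 30 from a single black cell; return the center column bits."""
    width = 2 * steps + 3            # wide enough that the edges never matter
    cells = [0] * width
    center = width // 2
    cells[center] = 1                # initial condition: one black cell
    bits = [cells[center]]
    for _ in range(steps):
        nxt = [0] * width
        for i in range(1, width - 1):
            left, c, right = cells[i - 1], cells[i], cells[i + 1]
            # Rule 30: new cell = left XOR (center OR right)
            nxt[i] = left ^ (c | right)
        cells = nxt
        bits.append(cells[center])
    return bits

bits = rule30_center_column(1000)
# Completely deterministic, yet the 0/1 frequencies come out near balanced.
print(sum(bits) / len(bits))
```

Run twice, the function produces identical bits, which is exactly the point: to an observer who doesn't know the rule, the column looks random; to one who does, it's pseudo-random, like the digits of pi.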
And that may sound crazy, but string theorists, for different reasons, are doing that: they talk about the multiverse, and part of the reason is that there isn't one string theory, there are 10 to the 500 possible string theories, and they don't know which one is our universe; they think one of them is. So talking about possible worlds has become more acceptable in theoretical physics. You may take it as a bad sign; you may say theoretical physicists are talking more like philosophers because there isn't any convincing empirical data to suggest new theories, or maybe because theoretical physicists haven't come up with the right new theory. That's possible, but there are people like Max Tegmark who really have some very convincing arguments in favor of multiverses and possible worlds, actually based on astronomical data, cosmological experimental data. One of the things he said is: "the physics of our particular universe is not really intellectually interesting: it may be practically important, but it's just our postal address in the multiverse." You have the space of every possible universe, of every possible set of laws, and this ensemble is more interesting than any particular universe. This is one of the philosophical arguments they make; Max Tegmark has cosmological and astrophysical arguments based on actual astronomical data. I remember hearing a talk by him which got more and more convincing as he went on. The general drift of what he was saying was this: cosmology used to be a very philosophical area of theoretical physics, because there was very little data, so it was wild speculation, whereas now there's an awful lot of data and cosmology has become a rather hard subject, because you have many models that account very precisely for the data. When I was a student, fundamental physics was very solid and cosmology wasn't, because particle physics had a lot of data and cosmology didn't.
Now it’s the other way around: in fundamental physics you have string theory, where they talk about the landscape of all possible string theories, which some people call possible worlds, some others call it the multiverse -there are different versions of these ideas, for different reasons, which reminds me of Leibniz of course, “the best of all possible worlds”- so, in a way, theoretical physicists are talking now more like philosophers. And philosophers mostly now are against metaphysics, especially if you’re an analytic philosopher, but strangely enough, metaphysics is alive and well in theoretical physics. Also, theoretical physicists are not scared of ontology, whereas that’s not fashionable in philosophy, where what’s fashionable is epistemology. Ontology belongs with the pre-Socratics and now is considered as something we will never know, the real nature of reality. But physicists try. So ontological speculations are alive in theoretical physics, even though philosophers consider them ridiculous. That’s the fashionable view if you follow analytic philosophy, which I don’t believe in, as you can guess from these remarks.

What is the meaning of life?

I don’t know. In the metabiological model, the meaning of life is trying to be creative. That’s the best answer I can offer at this moment.
