
 

   This essay - in an abridged and edited format - originally formed part of a double paper I submitted for discussion at the 3rd Impossible Conference, organised by Unit and held in The Gateway Exchange, Edinburgh, in January 2005. Members of various scientific and magickal disciplines compared, debated and evaluated the results of research in their respective fields over a two-day period in workshops which culminated in a brief seminar for the whole assembly on the last day of the conference. This essay (admittedly in its shorter, simplified form) was read out in its entirety. I was one of five people who had initially been asked to submit papers for discussion on topics of our own choice for use at the conference. ‘The Miserable Mess Of Magick’ was the companion piece submitted to the conference. This is the full, unabridged text.

THE NATURE OF COMPLEXITY

   The theory of evolution propagated by Charles Darwin has (unfortunately) been adopted by many biologists as ipsissima verba: to advance beyond the initially necessary but deficient theory of this 19th century eccentric is virtually anathema to them. To me, this is faintly bizarre. The significance of self-organisational processes is now acknowledged even by chemists (who have caught up with the physicists at last), yet in biology, where organised complexity is at its most conspicuous, there remains scepticism. Why, I ask, might the nature of complexity be a cure for the curse of Darwin's evolution theory, which has held so much of modern anthropology in the grip of 19th century Britain for so long?

   The answer is located not in biology, anthropology or the study of evolution itself but in mathematics and physics. A previously unacknowledged source of order prevalent in our universe was indicated by the research undertaken by the Hungarian-American mathematician John von Neumann with his self-replicating machines; astronomers such as Fred Hoyle, Carl Sagan and others began to formulate questions, not the least of which was this: since the universe began at least 1.5 × 10⁹ years ago, there now exists a stupendous variety of forms into which matter has been sculpted - how did all this arise out of an inchoate, formless genesis?

   Clearly there is ample evidence to suggest that our universe is a self-organisational system. This treats Darwin's eccentric, if elegant, theory with the scepticism it has for so long deserved. The Santa Fe Institute was founded in New Mexico to investigate, through a correlation of over a dozen different disciplines, complex adaptive systems, the relation of complexity to simplicity, turbulence, non-linear dynamics and the mathematical responses (via computer programmes) to examples provided by nature. Since intelligent people realise that the concept of ‘God’ is both idiotic and impossible, the true perspective of creation can be witnessed in far greater glory than was previously the case. A complex adaptive system, such as pre-biotic chemical matter, preceded the biological revolution, which in turn preceded ecology. Both the biological revolution and the ecological system are examples of complex adaptive systems. These propagated individual thought and learning, mammalian immune systems and so on. In fact, in humanity, cultural evolution has progressed so rapidly that it has far surpassed our biological evolution.

   A complex system is, for instance, a communication network, a mammalian brain, an economic model, a weather pattern or a snowflake. The questions the Institute asked were initially these: do these examples share any common features? Are there laws of complexity just as there are laws of motion and of gravity? In physics, each atom moves under the duress of the purposeless forces of its neighbours, oblivious of any governing plan. In biology, individual components co-operate under a benign regime of purposeful, mutual support.

   However, such an impression of purpose may be illusory. Despite the conceptual gulf between the attitudes of physics and biology, particularly in their respective approaches to their disciplines, complexity may ultimately provide a common theoretical basis for both these branches of the tree of science. Here, as you have no doubt realised, a problem arises: science tends to deal with generalities, yet complex systems are by their very nature unique, so how can we arrive at a coherent study of complexity when the properties of which it is comprised are not amenable to abstraction? After all, the intention of science is to simplify the vast ocean of data that assaults us in order that we may understand it with at least some degree of comprehension.

   The physical world is not a jumble of unrelated objects; its complexity is organised. Proof of this is provided by the most cursory glance around us: an ant colony, a whirlpool, the migratory patterns of birds, a living organism. Evidently the components of complex systems co-operate to produce coherence and concord and are able to adapt to their environment. The difficulty is that the qualities of organisation and adaptation are awkward to define in the necessarily rigorous terminology of science.

  There is one ally in this battle of incongruous concepts: the theory of computation. Complex programmes generate complex results, yet those results often follow from simple systematic procedures known as algorithms. From this example we see that complexity derives from simplicity - rather akin to the manner in which cells divide in an organism. This idea can be applied to physical systems. The compression of a message is imperative when we are required to describe a system to a distant correspondent, for instance, so it is the length of the message which matters. A definition based on raw message length alone is subjective, however, since it depends upon the language used, and such subjectivity must be avoided. For example, we could refer to the system by a nickname, but that would require agreement prior to the broadcast of the message. We must also avoid ostensive definitions, such as physically pointing at the system or indicating a detail in some similar manner. The programme is thus defined as the shortest possible message, in a previously agreed language, necessary to describe a system to a certain level of detail to a distant observer. Its length is known as the algorithmic information content.

   Should a system possess a high degree of complexity for its size then it is said to be random, that is, it is incompressible since it is not possible to locate a message shorter than the system itself to describe it. The problem inherent in the use of the shortest message (computer programme) that describes the system in question is that random and chaotic systems are indistinguishable from equally complex organised, ordered systems. We need to isolate the organisational aspect of complexity.
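
   The distinction can be illustrated with a crude experiment. In the sketch below (plain Python, offered only as an illustration), zlib compression stands in for the true, uncomputable algorithmic information content: a deeply regular string compresses to almost nothing, while a random one barely compresses at all - yet compression alone cannot tell organised depth from mere noise, which is precisely the problem identified above.

import os
import zlib

def compressed_size(data: bytes) -> int:
    # Length of the zlib-compressed message: a rough stand-in for
    # algorithmic information content (which cannot be computed exactly).
    return len(zlib.compress(data, 9))

ordered = b"AB" * 5000            # generated by a very short rule
random_like = os.urandom(10000)   # no rule shorter than the data itself

print("ordered:    ", compressed_size(ordered), "bytes")      # a few dozen bytes
print("random-like:", compressed_size(random_like), "bytes")  # close to 10,000 bytes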

   A complex system possesses, by implication, a history of greater magnitude than itself and could not exist without that history. This computational description may be applied to a wristwatch: its complexity is measured by the shortest of all the plausible evolutionary histories which could have resulted in the existence of the watch. Thus we quantify the complexity of a system by an acknowledgement of its evolutionary history. So in complex biological systems of a highly organised nature, we can see that they arrive as a result of a sequence of genetic mutations, each of which can be regarded as a form of information processing, which of course means that complexity is measured not merely by the length of the programme but by the duration of its execution.

   In computer language, what is the peak processing rate of the average human brain expressed in bits per second? Using visual stimuli as an example from which to work: to the unaided eye, the moon appears to be about 0.5° across, and human eyes on average can see about 12 pixels across that disc, so

eye resolution  =  0.5° / 12  ≈  0.04°.

The instantaneous field of view of human eyes is about 2° (at any given time a picture may be seen by both eyes), so a single frame contains

(2° / 0.04°)²  =  50²  =  2,500 pixels.

   To characterise light intensity, movement and colour et cetera takes about 20 bits per pixel, therefore a computer description of the human eye reference frame requires 2,500 × 20 = 50,000 bits. To scan the picture takes about 10 seconds, which gives us

50,000 / 10  =  5,000 bits per second.
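
   The arithmetic above can be reproduced in a few lines of Python; the figures are, of course, only the rough estimates used in this essay:

moon_angle = 0.5                # degrees subtended by the moon
pixels_across_moon = 12         # detail resolvable across that disc
resolution = round(moon_angle / pixels_across_moon, 2)   # ~0.04 degrees per pixel

field_of_view = 2.0             # degrees taken in at a glance
pixels_per_frame = (field_of_view / resolution) ** 2     # 50 x 50 = 2,500 pixels

bits_per_pixel = 20             # intensity, movement, colour...
bits_per_frame = pixels_per_frame * bits_per_pixel       # 50,000 bits

scan_time = 10                  # seconds to take in the picture
print(round(bits_per_frame / scan_time), "bits per second")   # 5000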

 

   How difficult it is to construct a complex system, and what resources are required for its assembly or for it to perform a given task, are measured in terms of what is known as thermodynamic depth. This is equivalent to the amount of information (free energy) necessary to assemble the system, where we are concerned not purely with the amount of energy but with the quality of that energy - informational energy, in fact. This information is not random. Therefore the thermodynamic depth of a system is not the same as its algorithmic complexity.

   This can be clearly adumbrated by analysis of the work of different artists, if we use this criterion to illustrate the concept. With our computer programme analogy, let us assume I am required to convey the exact properties and forms of a minimalist painting to a distant correspondent who cannot see, nor has ever seen, the painting in question. It will take scarcely any time or effort to relay the precise information necessary for that distant correspondent to reconstruct a replica of the painting (although why anyone should actually want to reproduce such garbage is beyond me) since such ‘art’ possesses low thermodynamic depth and low algorithmic complexity. Camera pan to the warehouse where dozens of ‘original’ Reinhardt squares and Kelly triangles run off the production line destined for sale at $3,000 a piece...

   Now, however, since the public no longer exhibit an interest in the mass production of black squares on black canvasses, my agent has asked me to relay over the telephone the precise properties and forms of a drip painting by Jackson Pollock. It is still expressionist rubbish, of course, but to describe it within these parameters will present a formidable proposition! Clearly it possesses truly enormous algorithmic content yet low thermodynamic depth: the plan for the execution of the painting is hardly more difficult than the description of it. All systems which are very nearly random tend to possess low thermodynamic depth. Now my agent has changed his mind again and wants me to turn to the faithful reproduction of genuine art such as a canvas by Hieronymus Bosch. Here the technical requirements for the execution of the system far exceed the description of it, for consummate skill is required on the part of the executor - the painting has both extreme thermodynamic depth and high algorithmic complexity.

   Every definition so far considered has suggested that the universe is self-organisational: that is, it grows from shallow to deep complexity in both algorithmic and thermodynamic terms. Physicists will object that this appears to contravene the second law of thermodynamics which, simply expressed, states that there is a natural tendency for physical systems to become more disordered as a function of time. Eggs are easily broken but difficult to make. Sand castles are washed away by the tide but never washed into existence. Buildings collapse, cars rust, plants and animals die. Localised systems such as crystal growth may appear to refute this law, but generally any increase in order is compensated by a greater increase in disorder elsewhere.

   In our universe, in accordance with this law, the total entropy rises with an apparent advance of chaos. Yet alongside this pervasive entropy there is the advance of depth. Like the Hindu gods Shiva (the second law of thermodynamics) and Brahma (the advance of depth), the degenerative and the progressive march ever onwards, each in apparent defiance of the other.

   At this point we must consider the matter of equilibrium. Shortly after the theoretical (and by no means proved) ‘creation’ of our universe (referred to by its extremely technical epithet as ‘the big bang’), the universe was at thermal equilibrium, but the rapid expansion that then commenced (for which nobody has yet been able to offer a satisfactory explanation) destroyed this state of equilibrium, and for this reason: the potential entropy grew in proportion to the growth of the universe, but the actual entropy, though it increased, lagged ever further behind - this is what allows energy to flow; we live off the proceeds of the entropy gap. Whenever energy flows, there is always a certain amount that is dissipated, as in the internal combustion engine where petrol energy is lost through heat derived from friction.

   It is this dissipation that leads to creativity: the complex processes in our world are due to simple causes. In other words, a flow of energy creates complexity. If matter is driven away from thermal equilibrium, it spontaneously rearranges itself into complex systems so that, in fact, energy dissipation has replaced the function of what was once called ‘God’. Scientists have traditionally studied both physical processes and chemical reactions by concentrating upon systems that are close to equilibrium, that is, those in which disturbances from a steady state are gentle and changes occur slowly. Examples of this are the streamlined flow of air across an aeroplane wing and the formation of rust on metal.

If the experimenter drives the reaction with greater rapidity, the system moves into a regime known, in the typically imaginative terminology of so many scientists, as ‘far from equilibrium’. Here the behaviour of the system may adopt a radically new form, such as the occurrence of chaos, turbulent air flow or the movements of the stock market during a crisis. In other situations there is a different effect: the spontaneous growth of order and/or complexity.

   A fine example of this latter effect is the Belousov-Zhabotinsky reaction, a chemical process discovered in the early 1960s. Here a featureless chemical mixture spontaneously grows spiral shapes or pulsates between different colours. Other examples of this effect are fluid flows and convection currents. When the study of non-equilibrium processes commenced (as in linear systems such as heat flow and thermal diffusion), there were simple relations between flow and force, as was to be expected, but at a certain critical distance from equilibrium a new factor arises: the system may enter a coherent state. One is reminded how incompatible the laws of biology appear to be with those of chemistry and physics: biological molecules are so improbable, so unlikely to discover the correct sequences, yet they still achieve this. Evidently a state of non-equilibrium leads to molecular order. These far from equilibrium phenomena capture the essence of biological self-organisation, at least at the molecular level.
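
   The flavour of such behaviour can be seen in the Brusselator, a textbook toy model of an oscillating chemical reaction (it is not the Belousov-Zhabotinsky mechanism itself, and the parameter values below are simply standard illustrative choices). Driven far from equilibrium, the concentrations refuse to settle to a steady state and instead cycle indefinitely:

# Brusselator model:  dx/dt = a - (b + 1)x + x^2 y,   dy/dt = bx - x^2 y
a, b = 1.0, 3.0          # with b > 1 + a^2, the steady state is unstable
x, y = 1.0, 1.0
dt, steps = 0.001, 40000

trace = []
for i in range(steps):
    dx = a - (b + 1) * x + x * x * y
    dy = b * x - x * x * y
    x, y = x + dt * dx, y + dt * dy
    if i % 4000 == 0:
        trace.append(round(x, 3))

print(trace)   # x rises and falls periodically instead of settling down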

   Recent research into evolution, ecology, social and economic systems has revealed that large numbers of interacting components form networks. By use of computer studies of randomly constructed networks, Dr Stuart Kauffman attempted to model gene switching in developmental biology. Self-organisation was found to be one of the most fundamental properties in the development of cell differentiation: the ordered patterns did not require initial selection. Dr Kauffman proposed a theoretical model in which a system of 10,000 genes was composed of active and inactive components.

   This is of course a binary idealisation, for in fact no gene is simply ‘on’ or ‘off’. Each gene was regulated by only a few other genes. The action or inaction of the genes was indicated by a system of light bulbs, each one wired up in the same manner as the hypothetical genes: each bulb had input wires from only a few other bulbs. Again, there would be no real wires, of course, for cells are controlled by molecular activity, their behaviour mediated by specific small molecules. He assumed that nothing else was known about this system, so that the only method available for an exploration of its properties was to examine all possible networks. Bear in mind that each gene is regulated by only a small number of others, so that each possible network is subject to this constraint.

   His questions were these: what would be the typical behaviour of such a model and what would be its properties? Contrary to what many of you may expect, random chance would not be the end result, for 10,000 light bulbs linked up to the same number of genes constitute a finite system which can therefore exist in only a finite number of states. Given enough time, the pattern of twinkling lights must eventually return to a state it has already visited. The system is thus not random but will settle into state cycles, which can be of immense duration. Dr Kauffman then took this further by postulating that we build a system of 10,000 light bulbs subject to the constraint that each bulb is affected by only 2 others. Here the lengths of the state cycles and the number of state cycles are small - equivalent to the square root of the number of light bulbs, in fact. 10,000 light bulbs would have 2^10,000 possible states, yet the length of the cycle into which the system settles is about the square root of 10,000, which is 100.

   Therefore, rather than the gene activity proceeding in a random manner, it adheres to localised loops of behaviour that use 100 states out of a possible 2^10,000. This is a manifestation of spontaneous order on an extremely large scale. Expressed in terms of biology, we interpret a state cycle as a cell type, so that SC1 is a liver cell, SC2 is a kidney cell, SC3 is a neuron and so on. The number of different state cycles is the square root of n, so that if you had 10,000 model genes you would have 100 different model cell types, each with 100 patterns of gene activity through which it cycles. The practical observation is that bacteria have about 2 cell types while human beings have about 250 cell types and about 100,000 genes. The square root of 100,000 is of the order of 316, so here is an abstract model which takes us close to the actual number of cell types in the human body. It predicts scale relationships that suggest the number of cell types should increase as a square root function of the number of genes.

   Very approximately, the number of cell types in organisms does increase as a square root function of the number of genes. The state cycle length is about the square root of n. The time taken to turn a gene on or off in a bacterium is approximately 1 minute and in a human, about 5 minutes. One obvious cyclic process of cells, as I have already indicated, is division, so if we identify the recurrent cycles of genes turning on and off with cell division, we arrive at the prediction that cell division time should vary as a square root function of the number of genes and ought to be of the correct magnitude. It so happens that both claims are true. In humans there are about 100,000 genes, the square root of which is approximately 316; if it takes about 5 minutes to turn a gene on or off then a full state cycle represents about 1,600 minutes, which is roughly 1 day - and 1 day is about the time cells take to divide!
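
   The claim is easy to test on a smaller scale. The sketch below (an illustrative Python experiment; the network size, trial count and random seed are arbitrary choices, not figures from Kauffman's work) builds random Boolean networks in which each of n elements is driven by 2 others, runs each from a random state until some state recurs, and reports the median cycle length, which hovers around the square root of n:

import random
from statistics import median

def attractor_length(n, rng):
    # Each element gets 2 random inputs and a random Boolean function of them.
    inputs = [rng.sample(range(n), 2) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(4)] for _ in range(n)]
    # Iterate from a random state until some state repeats; return the cycle length.
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = tuple(tables[i][2 * state[a] + state[b]]
                      for i, (a, b) in enumerate(inputs))
        t += 1
    return t - seen[state]

rng = random.Random(1)
n = 100
lengths = [attractor_length(n, rng) for _ in range(50)]
print(f"n = {n}, sqrt(n) = {n ** 0.5:.0f}, median cycle length = {median(lengths)}")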

   It is apparent that Darwinian selection is clearly not the sole source of biological order if innate networks possess the tendency to organise spontaneously. The laws of physics thus bestow upon matter an intrinsic creativity which at least in part accounts for the progressive evolution of our universe. The initial condition of our universe, at the time of the hypothetical ‘big bang’, its dynamics, the unified field theory of all its elementary particles and all its forces, could actually have determined the fundamental laws that apparently prevail today. Our external reality largely depends upon dynamics (Newton's laws of motion and gravity) but also upon the initial conditions of our universe, for these give the correlations that permit the evolutionary processes which result in the structure we see around us now. In other words, the initial conditions of the universe of which we are aware endow it with the propensity to achieve depth. Complex adaptive systems occur only where enough classicity, regularity and predictability exist for such systems to learn from their environment. If our universe were a random mess of thermal fluctuations or quantum accidents with no degree of classical regularity (as exemplified by self-organisational processes) then complex adaptive systems would simply never arise.

   We are as yet unable to measure the set of initial conditions of our universe in order to say how probable are those which lead to the kind of universe in which complex adaptive systems can occur, particularly since the universe may always have existed and be infinite in size! The history of our universe can be called an interaction of chance and necessity. Chance still partly determines the nature of the world but from the study of complexity it seems that complex systems can organise themselves given suitable conditions which may depend crucially upon the state of our universe during the first moments of the ‘big bang’ or, even more critically, should there never have even been a ‘moment of creation’ at all.

   The relationship between the laws of physics and the initial cosmological conditions is akin to that between a computer programme and its input data; we can therefore conceive the present state of our universe as the output of this celestial programme which, if the analogy is studied closely enough, could indicate an actual link between computation and the physical processes in nature. Our physical universe conforms to certain mathematical principles and fundamental laws; Galileo Galilei once observed that ‘the book of nature is written in the language of mathematics’. Mathematics is of course a product of the human brain, itself a complex physical system. How has the brain generated a product which so profoundly captures the fundamental properties of nature?

   The study of complexity has already revealed the importance of the computer paradigm with which to describe the physical world. Nature may not even be merely a manifestation of mathematics but of a gigantic computational process. Our universe may itself be a gargantuan computer, say certain scientists from both the California and the Massachusetts Institutes of Technology. Thus God is no longer a mathematician but a cosmic software designer.

   Mathematics can be used to describe atomic structures, how objects fall and move, the machines and devices constructed by industry; the corollary of this is that we can construct devices whose operation simulates the performance of mathematics. This is possible because addition, subtraction and trigonometric operations are computable functions; if this were not so, then machines whose operations simulated those mathematical functions could not be fabricated. It appears that mathematics describes physical laws while the laws of physics tell us what is computable. Is this consonance between computability and physical law inevitable or does it indicate some subtle aspect of the nature of our universe?

   The logical basis of computers was established in 1936 by Alan Turing, who formulated the proposition that no finite system can generate all true mathematical statements by blind obedience to an established procedure or algorithm. There would always exist mathematical facts that could not be proved within a system, regardless of the level of its elaboration, so even the most superior of computers would possess finite limitations. The existence of non-computable functions introduces random factors, which in turn implies that mathematics is restricted by fundamental limitations and is therefore incomplete. If the physical world is both the source and the manifestation of mathematics then it must also reflect these limitations. This is no eccentric whim of philosophy, as a closer scrutiny of the humble motor car engine will illustrate.

  Scientists who work on the problem of maximum efficiency in various machines, but particularly the engines of motor cars, have discovered that the small microprocessors associated with the ignition systems are constrained by basic rules of logic. To attain maximum efficiency, such engines must develop perfect codes for the information they use. Such engines pass information around constantly, from the timing chain to the distributor cap through to the spark plug. This information is processed via a microprocessor to adjust the mixture for maximum possible efficiency. The limitation appears in the theorem which states that certain quantities are non-computable: maximum efficiency is itself a non-computable function and thus can never be attained by any engine. Where previously the world was regarded as a collection of material objects that moved around in accordance with the laws of dynamics, the new science of complexity propagates an alternative paradigm: the currency of our universe is not matter but information. This notion is virtually a commonplace in biology, where heredity and the genetic code specify essential information transmission and storage, but to physics the idea is novel.

   What happens when we apply computational and information dynamics to a biological example, for instance, the manner in which proteins fold? Inside an organic cell, a protein molecule consists of a long chain of atoms, but its biological function is performed only when it folds into a complex three-dimensional structure of a very specific shape. So how does the protein molecule discover the correct conformation in which to shape itself? The time required to solve a search problem with so many possible variables should grow as an exponential function of the size of the protein. This is not because the formation of a fold is difficult but because there are so many possible conformations available.
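
   A back-of-the-envelope count shows how quickly the search explodes. The figures below are assumed purely for illustration (roughly 3 conformations per bond in a 150-bond chain, sampled at 10^13 conformations per second), in the spirit of Levinthal's famous estimate:

bonds = 150
conformations = 3 ** bonds            # about 10^71 possible shapes
samples_per_second = 1e13
seconds_per_year = 3.15e7

years = conformations / samples_per_second / seconds_per_year
print(f"{conformations:.2e} conformations, ~{years:.1e} years to try them all")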

   If we use the new paradigm, we can conceive the protein molecule to be a device that processes information and is confronted with a search problem of immense complexity. It is likely that the protein molecule has developed a particular shape that allows it to evade this otherwise astronomical search. If my supposition is an accurate comprehension of the process, could these amazing molecules be harnessed for use as part of a computer? The utilisation of protein molecules is itself a formidable task; less fearsome would be a proposition that suggests the use of another biological tool, the DNA molecule. Why should this be so? Because we know we can take a fraction of RNA and search the DNA library by chemical means in order to extract the piece of DNA that most closely matches it.
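
   Viewed as information processing, that chemical extraction is simply a best-match search. The toy sketch below (invented sequences, with base-pairing chemistry reduced to plain symbol matching) makes the point:

dna = "ATGGCGTACCGATTACGGATCCGTTAGCAA"   # the 'library'
probe = "TTACGA"                         # an RNA-like probe, deliberately one base off

def score(a, b):
    # Number of positions at which the two sequences agree.
    return sum(x == y for x, y in zip(a, b))

best = max(range(len(dna) - len(probe) + 1),
           key=lambda i: score(dna[i:i + len(probe)], probe))
print(best, dna[best:best + len(probe)], score(dna[best:best + len(probe)], probe))
# -> position 12, 'TTACGG', 5 of 6 bases matched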

   In a sense, biosystems are computers; they may not be able to compete with silicon but powerful computations could be achieved with the use of DNA. You can clearly appreciate that it matters not what material comprises the construction of any particular computer. We can simulate computers and use almost any physical process for a computation. “If a computer achieved a certain level of capability, it could do any work of any other computer, if it possessed sufficient memory.” (Alan Turing). From a microprocessor that governs the ignition system in a motor car to a super-computer for the calculation of weather fluctuations, these machines are basically of the same origin. However, in all industrial uses, computers possess a level of memory that is finite, regardless of the purpose for which the computer is allocated. A general purpose computer does not require a complex design. Its job is basically to move pieces of information around and alter them; nature (not only in DNA) performs precisely this function.

   From the selected specifications given about computers above, I am compelled to ask: is nature therefore a computational process? Initially, I feel obliged to reply ‘no’ for one reason: all known computers are irreversible in their operation. That is to say, one cannot take the output of a computation, run it backwards through the computer and thus recover the input. In nature, however, atoms adhere to fully reversible laws of motion. All the early computers, such as the famous PDP-11, possessed a ‘clear memory’ function for the erasure of previous information that was no longer required. It has recently been discovered that circuits for a reversible computer can be constructed, with provision for all the unwanted material to be recycled as data rather than erased. As a consequence, I can perhaps amend my response to a tentative ‘yes’. Could such a computer then simulate any physical process, regardless of its complexity?
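
   Reversible logic is easier to picture with a concrete gate. The sketch below (ordinary Python standing in for hardware) uses the Toffoli gate, a classic reversible gate: running it a second time undoes it exactly, so no information is ever erased, unlike an ordinary AND gate, which cannot be run backwards:

def toffoli(a, b, c):
    # Controlled-controlled-NOT: flips c only when both a and b are 1.
    return a, b, c ^ (a & b)

x = (1, 0, 1)
y = toffoli(*x)
assert toffoli(*y) == x      # applying the gate twice restores the input exactly

# With c = 0 the third output carries AND(a, b), computed without discarding
# the inputs - whereas from AND(a, b) alone the inputs cannot be recovered.
print(toffoli(1, 1, 0))      # (1, 1, 1)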

   Complex systems are not always generated by complex processes and indeed, forms of complexity rather akin to those found in life can in fact be generated by use of the simplest computational rules. For instance, take the Game Of Life, invented by the mathematician John Conway: a chessboard is used upon which coloured spots are placed - this all appears on a computer screen. Simple rules are used; the game commences with a pattern. The rules are applied to each square; the pattern alters, repeats and evolves. This is actually a toy universe where the chessboard rules provide a substitute for the role of the laws of physics and material objects are represented by the coloured patterns that move, interact, reproduce and perform computations of their own. These properties are so convincing that versions of such games have been used to simulate air turbulence.
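
   The entire ‘physics’ of Conway's toy universe fits in a few lines. The sketch below (a minimal Python rendering on a small wrap-around board) seeds a glider, one of the self-propelling patterns alluded to above, and steps it forward:

import numpy as np

def step(grid):
    # Count the eight neighbours of every cell by summing shifted copies of the board.
    neighbours = sum(np.roll(np.roll(grid, dx, 0), dy, 1)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

grid = np.zeros((10, 10), dtype=np.uint8)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # a glider
    grid[r, c] = 1

for generation in range(4):
    print(f"generation {generation}:\n{grid}\n")
    grid = step(grid)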

   The Game Of Life is one of many such chessboard simulations known collectively as cellular automata. An advanced cellular automaton was built at the Massachusetts Institute of Technology. Basically, a cellular automaton is a mathematical game that possesses similarities to what we understand to be the rules of our universe. Some of these rules were inserted into the machine, upon which a variety of idealised universes were run. The only common factor was that each idealised universe obeyed these rules. Later, the rules were slightly altered in order to obtain different possible universes. A window on various possible alternative universes is thus provided. These universes may be studied in real time as the operators disturb them, perform miracles, change the initial conditions, stop them for analysis then resume the simulations.
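
   Changing the rule really does change the universe. In the sketch below (an elementary one-dimensional automaton; the rule numbers follow Wolfram's standard 0-255 coding and the board size is arbitrary), rule 30 produces a chaotic history while rule 90 produces an orderly, fractal one, from exactly the same starting pattern:

def run(rule, width=64, steps=24):
    cells = [0] * width
    cells[width // 2] = 1                       # a single live cell to begin with
    for _ in range(steps):
        print(''.join('#' if c else '.' for c in cells))
        # Each cell's next value is the rule's bit for its (left, self, right) neighbourhood.
        cells = [(rule >> (4 * cells[(i - 1) % width]
                           + 2 * cells[i]
                           + cells[(i + 1) % width])) & 1
                 for i in range(width)]

run(30)   # a chaotic toy universe
run(90)   # an ordered, self-similar one - same game, different law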

   This particular cellular automaton programme is referred to as ‘the god game’ for obvious reasons. Scientists at M.I.T. believe it should be possible, as a workable hypothesis, to reconstruct the programmer's blueprint for our universe as a computer; they have already commenced work on this project. Our thought processes could be models of those of our universe. For instance, take the self-simulation ability of any computer: we may not physically be ‘here’ but merely be data in a computer simulation, or the data of a meta-simulation being simulated by another computer which is itself being simulated, in a situation of infinite regress.

   What happens when such a programme terminates as a result of a bug or virus? Sentient organisms that inhabit a toy universe may seem a most capriciously whimsical notion, but the pioneer of computing, John von Neumann, proved that patterns with reproductive abilities can exist in cellular automata. The step from a process of self-reproduction to consciousness is small, at least according to those who believe the construction of thinking computers to be a viable possibility. As the genetic information we contain is gradually decoded, it becomes evident that where once it was assumed that life was intrinsically associated with organic chemicals that could only be produced by living creatures (until chemists actually synthesised organic chemicals), it is now equally plausible that life could derive from some random initial state in a computer network of sufficiently gigantic size to permit such a level of complexity.

   There will probably come a time when a computer is created that will possess the ability to think for itself; soon afterwards there will be a computer that can think perhaps 1,000 times faster than that first machine. Every few years, subtle advances in computer design could result in a computer better equipped to design superior versions of itself. After all, what machine is better adapted to an adequate comprehension of the task of the creation of artificial intelligence than an artificially intelligent machine? This production process could even continue until the capabilities of the computers were exhausted. One can imagine a conversation between two computers: in the time it takes for a programmer to say ‘What are you doing?’ these computers could discuss every sentence ever spoken by every human being who ever lived while they solve Hermitian scalar field problems to multiple powers between microchip breaths. Perhaps one of these mighty machines might best respond to the programmer's question with the polite reply ‘Oh, you know, things in general.’

   Perhaps this has already occurred elsewhere in the universe? However, you may object to this possibility since, if it had occurred, surely we would be aware of it? Well, consider this: in what would a vastly superior computer of astronomical mental ability be interested - us? To be stoic for a moment, we must remember that we have tended to extrapolate our latest scientific advance to the point where we believe the whole world is entirely subservient to it, with the whole universe in dutiful obedience to our computer model version of all that exists. 2,500 years ago, the Greeks discovered the power of numbers and geometry. Before you could say ‘logarithm’, they had a whole cosmology based upon it.

   In the late 17th century, Isaac Newton formulated the laws of mechanics that were soon so influential that our universe was declared to be a giant machine. Today, in a paradigm dominated by computers, our universe has become a glorified Commodore PET with ideas above its station. Is this not merely the exuberant fantasy of zealous computer scientists?

   There is one reason why this could in fact be so: if the laws of physics are aspects of a cosmic computer programme then, by definition, we would be denied the means by which to discover facts that pertain to that computer's hardware, such as the very nature of computation; its power derives from the computer being a universal machine. If we are only part of a programme then the programme cannot obtain information about the machine upon which it is being run. There would be a fundamental form of physics responsible for this computer which would not be responsible, however, for the programme, therefore we would never be able to discover the nature of that fundamental physics. This is quite unacceptable, if only on philosophical grounds. From the perspective of a physicist, it represents a dead end created by ourselves. Is it of crucial importance from what matter the computer is constructed? Most certainly, for it is the aim of physics to reveal the truth about our universe; it is therefore erroneous to assume the existence of fundamental limits to the amount of truth we can discover until we are absolutely forced to do so.

   There is of course another way to look at this. Could we not indicate the atoms of which our universe is comprised and claim those atoms to be the hardware of this hypothetical cosmic computer? To do that, however, is to say that there are computers in this universe and that the physical processes we observe around us are computations. Suppose we reject the notion that a computer (of the Turing model) could ever capture the quality of consciousness: a Turing machine (upon which all computers are based) follows a simple mechanical procedure, an algorithm. Certain processes (such as artistic, musical or mathematical inspiration) involve fundamentally non-algorithmic routes to knowledge.

   To support this contention we can appeal to a crucial limitation inherent within the logical foundations of mathematics: if we propose an algorithmic procedure for deciding the truth or otherwise of propositions, then we can construct from that procedure a further proposition which we can see is true but which the procedure itself cannot prove. In other words, mathematicians can perceive the truth inherent in statements that could never be demonstrated by any mechanical procedure.

   There exist innate truths available to us that cannot be mechanised. This implies that the human mind employs a means to truth that no computer could ever utilise. If the same areas of physics apply to us as to computers then how can we gain access to truths that are unavailable to our mechanical brethren?

   It could be that a computer does not use the same particular aspects of the laws of physics as the human brain. These would have to involve quantum ideas (in the brain), although this probably goes well beyond our present knowledge of quantum theory. What would happen if we programmed a computer to discover just how the human brain functioned and what properties existed as criteria for it to function? In our arrogance we could imagine the furious flash of lights while banks of technical hardware emitted reams of ticker-tape and belched huge billows of smoke as The Big Computer tumbled to an ignominious halt in a shower of silver sparks due to its inability to deal with such a problem...but somehow I seriously doubt the validity of this picturesque scenario.

   It is to quantum theory I must now turn. Quantum systems (that deal with atomic and subatomic processes) are subject to intrinsic uncertainties; this renders such systems indeterministic. Any analysis of the relationships between computation and physics, until recently, tended to disregard quantum effects but if the human brain exploits quantum processes then it may possess different capabilities from all known computers. Is there a method to check the validity of this proposition?

   To achieve this we need to study how a hypothetical computer, specifically constructed to harness quantum effects, would operate. Research into this area was in fact undertaken because it was believed that quantum effects would impose a limit on computability in practice, since classical models of computers (Turing machines) possess properties apparently incompatible with quantum theory. For example, the state of such a machine has to be sharply defined with no uncertainty, but in quantum mechanics there exists the fundamental uncertainty principle. After much theoretical research, it was realised that the use of quantum theory places no limitations upon classical computation, so that Turing-type machines could be constructed with the arbitrary accuracy necessary for the incorporation of quantum physics. The unexpected result of this work was that not only were there no restrictions on classical computation but there were actually modes of computation available to a quantum computer that are not available to a classical computer. Once this was both recognised and understood, it was hoped the quantum computer could compute functions that were beyond the reach of classical computers.

   However, it transpired that the functions a quantum computer can compute are the same as those of a classical computer, but the method of execution is different, in that the steps taken by a quantum computer cannot necessarily be mapped or predicted. As a consequence, the complexity theory of quantum computers is slightly different from that which pertains to classical computers. There are computational tasks that can be performed with exponentially greater rapidity by a quantum computer, thus the physical notion of computability in quantum mechanics is at variance with that of classical mechanics. As a result, the proposition that quantum physics could be related to or associated with consciousness remains pure conjecture. Furthermore, until we understand how the human brain actually functions, the proposition that the universe is a computer must also remain in the realm of sheer onus probandi.
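
   The simplest textbook illustration of a distinctly quantum mode of computation is Deutsch's algorithm, which decides whether a one-bit function is constant or balanced with a single query to the function, where a classical computer needs two. The sketch below simulates it with a small state vector in numpy (a classical simulation only, of course - the point is the interference pattern, not speed):

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def oracle(f):
    # The unitary U_f |x, y> -> |x, y XOR f(x)> as a 4x4 permutation matrix.
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.zeros(4)
    state[1] = 1                          # start in |0>|1>
    state = np.kron(H, H) @ state         # put both qubits in superposition
    state = oracle(f) @ state             # one single query to f
    state = np.kron(H, I) @ state         # interference reveals the answer
    p0 = abs(state[0]) ** 2 + abs(state[1]) ** 2
    return 'constant' if p0 > 0.5 else 'balanced'

print(deutsch(lambda x: 0))       # constant
print(deutsch(lambda x: x))       # balanced
print(deutsch(lambda x: 1 - x))   # balanced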

   If we could somehow prove that the human brain was a computer, would that explain why mathematics (which is, after all, a product of human thought) describes nature with such remarkable accuracy? I must remind you that the reason the world appears so mathematical could be that it is only the mathematical aspects of it we are able to recognise. Should the world possess deep, non-computable aspects intrinsic to its nature, they may not be apparent nor amenable to our scientific investigation.

   If our brains and the manner in which we think are a consequence of the evolutionary process then they have evolved in relation to certain aspects of reality: they survived because they are effective at the acquisition of sense from their perception of reality and also in their ability to organise it in a way that enables us to respond to it, exploit it, avoid dangers and so forth. If this range of experience necessary for our survival does not require the inclusion of concern for non-computable processes then our brains need not have ever derived an appreciation for such processes nor be able to engage in or understand such non-computable aspects of nature. We are thus imprisoned within minds that are a part of the reality we attempt to comprehend. We may even project onto external reality the inner mathematical functions of our brains.

   There remains a fundamental flaw in this particular argument: our mathematics is so universal in its applicability. It is no surprise that we discover computable functions in the world, for that would necessarily be true: whatever the laws of physics had been, we would have called computable those functions which physical systems can compute. What is of special interest is not that functions are computed by our universe, for that would have been true in any case, but that quite different physical systems compute the same functions. This is why we can appreciate reality - our brains are computational machines, and because other machines generate the same functions when they are regarded as computers, it is possible for us to invent laws that concern them. There appears to exist a computational resonance between the human brain and the physical processes that occur in the environment.

   This conspires to produce a fortuitous coincidence in our universe that any physicist would, naturally, regard as an opportunity for the proposition of a new law of physics. As a result, this computational resonance between our brains and the physical processes of nature is not merely an intriguing regularity but a previously unrecognised law of physics that defines the computable properties of physical systems. Should such a new law be ratified, the study of complexity will have revealed a new law that links human thought intimately with the physical processes of the cosmos. It appears that the connections between mathematics, consciousness and physical complexity might all depend upon the special nature of the laws of physics.

Andy Martin (c) 2005.

 

 

 
