Last year it was announced that quantum vibrations had been found in microtubules. Microtubules are hollow structures in the cytoplasm of neurons, the cell substance between the cell membrane and the nucleus. This is extraordinary, as such quantum effects are thought to require very cold temperatures, and biological systems have been considered far too warm for such things to occur. The finding also lends some support to a controversial quantum theory of consciousness, now some twenty years old, proposed by Sir Roger Penrose and Stuart Hameroff. You can read about it here.

So what does this all imply? I decided to read over Penrose’s books ‘The Emperor’s New Mind’ and ‘Shadows of the Mind’ to find out.

Neurons are the basic functional units of the brain. The conventional view is that they transmit information using electrical signals called action potentials. A neuron has a membrane that serves as a barrier separating the inside of the cell from the outside, and its membrane voltage is dictated by the difference in electrical potential across that barrier. Ion pumps and ion channels in the membrane move ions, which carry electrical charge, into and out of the cell, so neurons are constantly exchanging ions with their extracellular surroundings. In doing so they can not only maintain a resting potential but also propagate action potentials by depolarising the membrane beyond a critical threshold. Action potentials are transmitted between neurons, allowing them to communicate.
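To make the algorithmic nature of this picture concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard textbook simplification of the membrane dynamics just described. The parameter values are illustrative round numbers, not physiological measurements.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: the membrane voltage decays
# toward a resting potential, is driven by an injected current, and
# emits a spike (action potential) whenever it crosses a threshold.
V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0   # mV (illustrative)
TAU_M, DT = 10.0, 0.1                              # ms

def simulate(input_current, n_steps=1000):
    v = V_REST
    spikes = []
    for t in range(n_steps):
        # Leak toward rest plus injected current (Euler integration).
        dv = (-(v - V_REST) + input_current) / TAU_M
        v += dv * DT
        if v >= V_THRESH:            # depolarisation past threshold
            spikes.append(t * DT)    # record spike time in ms
            v = V_RESET              # reset after the action potential
    return spikes

print(simulate(input_current=20.0)[:5])  # first few spike times
```

Nothing here goes beyond arithmetic and a threshold test, which is precisely why this level of description is straightforwardly computable.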

This functionality can be encoded in an algorithm, which means that the conventional biological model of the brain can be simulated on a computer. In his books Roger Penrose critiques Artificial Intelligence research by claiming that human understanding is essentially non-algorithmic and therefore non-computational. The argument is derived from the Church-Turing Thesis and Gödel’s Incompleteness Theorem, which Penrose (along with some, but not all, others) regards as equivalent to each other.

Penrose’s argument goes something like this: suppose there is an algorithm for deciding whether a mathematical proposition is true. This algorithm must be consistent, otherwise its verdicts cannot be trusted. However, according to Church-Turing and Gödel, a consistent algorithm cannot, by definition, be applied to itself to establish its own consistency. The implication for AI is that either we cannot know whether something is really true, or the method used to ascertain truth cannot itself be known or validated as correct. Penrose believes that our ability to know mathematical, and indeed all, truths is unassailable, because such truths, particularly mathematical ones, are ideal. In turn he suggests that we know our understanding is correct, and therefore we know something that cannot be known algorithmically.
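The formal backbone here is Gödel’s two incompleteness theorems. Stated compactly (this is the standard textbook formulation, not Penrose’s own notation), for any consistent, recursively axiomatizable formal system F strong enough to express arithmetic:

```latex
% First incompleteness theorem: some sentence G_F is undecidable in F.
\exists\, G_F:\quad F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F
% Second incompleteness theorem: F cannot prove its own consistency.
F \nvdash \mathrm{Con}(F)
```

Penrose’s move is to observe that, standing outside F, we can see that G_F is true (given that F is consistent), and to conclude that whatever faculty lets us see this cannot itself be the algorithm F.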

This leaves us in a number of positions. Either (A) our understanding is algorithmic but we can never understand how it works, (B) our method of understanding is algorithmic but not consistent, or (C) our understanding is non-algorithmic and therefore requires more than our current conventional biological understanding of the brain. Regarding (A), I believe it is possible to develop complex computational systems whose inner workings, due to their innate complexity, cannot be fully known, yet which we can still use to solve problems. Liquid-State Machines are a good example of such a methodology currently being employed. Hence, I don’t think it is necessary to fully understand our method of deriving understanding in order to create AI. Regarding (B), I think Penrose’s attachment to the ideality of mathematical truths, i.e. their timeless and absolute nature, makes him feel that the ability to grasp them is somehow special and unassailable. I would regard this as a fallacy. A large part of what brains do is statistical pattern recognition. Our ability to understand fuzzy concepts such as a ‘chair’ may rest on a similar mechanism to the one we use to understand non-fuzzy things like mathematical truths. The reason the latter is so much more precise is not that a different cognitive system is applied to it, but that the subject matter itself is so much more precise. Hence, I doubt that our understanding is consistent in a Church-Turing/Gödel sense. It is just that we do a damn good job when the subject matter is amenable.
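As an illustration of position (A), here is a minimal reservoir-computing sketch in the spirit of a liquid-state machine (strictly speaking an echo state network, its rate-based cousin): a fixed random recurrent ‘liquid’ whose detailed dynamics we never analyse, plus a simple trained readout. All sizes, constants and the toy task are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                   # reservoir ("liquid") size

# Fixed random recurrent weights, scaled so the dynamics neither die
# out nor explode (spectral radius below 1: the echo-state property).
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=N)

def run_reservoir(u):
    """Drive the reservoir with a 1-D input signal, collect its states."""
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy task: recover the input delayed by 5 steps from the reservoir state.
u = rng.uniform(-1, 1, 500)
X, y = run_reservoir(u)[5:], u[:-5]

# Only the linear readout is trained (least squares); the reservoir's
# inner workings stay an unanalysed black box, yet the system works.
w_out, *_ = np.linalg.lstsq(X, y, rcond=None)
print("readout error:", np.mean((X @ w_out - y) ** 2))
```

The point is not the task but the stance: we exploit the system’s dynamics without ever having a transparent account of them, which is all that position (A) requires.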

Whilst I think what I have argued for (A) and (B) may discount Penrose’s cognitive requirement for (C), I don’t think it should all be discarded just yet. Penrose argues that quantum mechanisms are non-algorithmic and super-computational, and therefore, if tapped into, may provide a mechanism for understanding. Although I don’t feel this is necessary, I would agree with Penrose’s critique of strong AI, the position that consciousness emerges from algorithmic complexity alone. Algorithms can be implemented in many mediums, even using cogs and pulleys. It does seem ridiculous that a system of cogs and pulleys, if complex enough, would become conscious. Therefore, one may conclude that algorithmic complexity alone is not enough. I would suggest instead that such complexity, if instantiated in a particular medium (e.g. biological brains), gives rise to consciousness. However, our current understanding of biology and classical physics does not encompass anything that can explain the phenomenon of consciousness. Perhaps an interaction between complex biological systems and quantum mechanics, with all its strange phenomena such as entanglement, may open the door to our understanding of consciousness.

Primary consciousness is the holy grail of neuroscience. Unlike higher-level consciousness, which concerns aspects such as the notion of self and consciousness of consciousness, primary consciousness is concerned with phenomenal qualia. In brief: why do we have a subjective phenomenal experience of something, such as the redness of something red? This refers to the ‘what it is like’ of experiencing something. The philosophical problem of zombies is often used to illustrate the issues involved. Is it possible to have a zombie-like creature that can respond and behave in exactly the same way as we do but that does not have a subjective experience? If so, then why do we have one? Further to this, why should a mechanistic device such as our brain produce experience while the zombie, or even a thermostat, does not? How can something material produce something phenomenal?

The problem is a big question, as we do not really understand at any level how phenomenal experience can arise. We can, however, deduce some properties that a system must satisfy to enable it, and we can identify neural correlates of consciousness. Both of these still leave an explanatory gap: why is the activation of this group of neurons accompanied by an experience of red?

William James noted that consciousness is a process, and although being a process undeniably forms an aspect of what primary consciousness is, I have trouble with people who use such a claim as an answer to the problem. A process is an abstract concept that only acquires meaning for an intelligent observer of a situation; as such, its ontological status is vaguer than that of, for example, a chair. Although hard to define, I don’t feel that primary consciousness suffers from this ontological ambiguity. This is probably because consciousness is the closest thing to us; it is us. As a result, although it is difficult to tie down conceptually due to its subjective nature, its being is direct, immediate and definitely not ambiguous or even interpretational. This may highlight a difference in ontological category between process and consciousness that has to be clarified.

Pursuing the process aspect of consciousness, Gerald Edelman and Giulio Tononi present the dynamic core hypothesis as an explanation:

‘First, conscious experience appears to be associated with neural activity that is distributed simultaneously across neuronal groups in many different regions of the brain. Consciousness is therefore not the prerogative of any one brain area; instead, its neural substrates are widely dispersed throughout the so-called thalamocortical system and associated regions. Secondly, to support conscious experience, a large number of groups of neurons must interact rapidly and reciprocally through the process called reentry.’

The dynamic core relies upon the notion of complexity in a neural system. A neural system is highly integrated if its constituent clusters are well connected, so that functionally their behaviour can synchronise. A highly integrated system, although able to bind information in different parts, cannot contain much information, as everything ends up doing the same thing and the number of possible states is limited. A differentiated system is the opposite: there is little communication to bind the parts, but the number of possible states is large. Complexity is defined as a balance between integration and differentiation, in which many states are possible and disparate parts can still communicate and bind (a toy numerical sketch of this trade-off follows the two conditions below). Given this, consciousness through the dynamic core is defined as follows:

1. A group of neurons can contribute directly to conscious experience only if it is part of a distributed functional cluster that, through reentrant interactions in the thalamocortical system, achieves high integration in hundreds of milliseconds.

2. To sustain conscious experience, it is essential that this functional cluster be highly differentiated, as indicated by high levels of complexity.
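To make the integration/differentiation trade-off concrete, here is a toy calculation on linear Gaussian systems. This is my own illustration, not Edelman and Tononi’s actual measure: integration is approximated as the sum of marginal entropies minus the joint entropy, and the joint entropy itself stands in for the size of the state repertoire. A fully correlated system scores high on integration but has a small repertoire, an independent one the reverse, and a clustered system sits between.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a multivariate Gaussian (nats)."""
    n = cov.shape[0]
    return 0.5 * (n * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def integration(cov):
    """Sum of marginal entropies minus joint entropy (multi-information)."""
    marginal = sum(gaussian_entropy(cov[i:i+1, i:i+1]) for i in range(len(cov)))
    return marginal - gaussian_entropy(cov)

n = 8
independent = np.eye(n)                                  # fully differentiated
correlated = 0.99 * np.ones((n, n)) + 0.01 * np.eye(n)   # fully integrated
clustered = np.eye(n)                                    # two coupled clusters
for i in range(n):
    for j in range(n):
        if i != j and i // 4 == j // 4:
            clustered[i, j] = 0.6

for name, cov in [("independent", independent),
                  ("clustered", clustered),
                  ("correlated", correlated)]:
    print(f"{name:11s} integration={integration(cov):6.2f} "
          f"repertoire(joint entropy)={gaussian_entropy(cov):6.2f}")
```

On this toy measure only the clustered system scores respectably on both columns at once, which is the balance the dynamic core is claimed to require.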

A curious question this model raises stems from the fact that different neuronal groups can be members of the dynamic core at different times, allowing the possibility that at two different moments the dynamic core may be constituted from totally different members. If this is the case, what binds the continuity of consciousness? Is it just the process, and if so, how does this evade the problem of ontological status mentioned above?

For more on the dynamic core hypothesis, read ‘A Universe of Consciousness’ by Gerald Edelman and Giulio Tononi.