29 Consciousness

The Economist

IN AN IDEAL world, science would work by making unambiguous predictions based on a theory, and then testing those predictions in ways that leave no wiggle room about which are right and which wrong. In practice, it rarely happens quite like that, especially in biology. But, the coronavirus always permitting, a group of neuroscientists plan to apply this method over the course of the coming year to the most mysterious biological phenomenon of all: human consciousness. They are organising what is known as an “adversarial collaboration competition” between two hypotheses about how consciousness is generated in brains.

The contestants are Giulio Tononi’s integrated information theory (IIT) and Stanislas Dehaene’s global workspace theory (GWT). The competition was dreamed up at the Allen Institute for Brain Science, in Seattle, and is being paid for by the Templeton World Charity Foundation. The practical side of things is being led by Lucia Melloni of the Max Planck Institute for Empirical Aesthetics, in Frankfurt.

Dr Tononi, of the University of Wisconsin, Madison, thinks consciousness is a direct consequence of the interconnectedness of neurons within brains. IIT argues that the more the neurons in a being’s brain interact with one another, and the more complex the resulting network is, the more the being in question feels itself to be conscious. Because the parts of a human brain where neuronal connectivity is most complex are the sensory-processing areas (in particular, the visual cortex) at the back of the organ, these, IIT predicts, are where human consciousness will be seated.

Dr Dehaene, who works at the Collège de France, in Paris, reckons by contrast that the action, when it comes to consciousness, involves a network of brain areas—particularly the prefrontal cortex. This part of the brain receives sensory information from elsewhere in the organ, evaluates and edits it, and then sends the edited version out to other areas, to be acted on. It is the activity of evaluating, editing and broadcasting which, according to GWT, generates feelings of consciousness.

One difference between IIT and GWT, accordingly, is that the former is a “bottom up” explanation, whereas the latter is “top down”. Supporters of IIT think consciousness is an emergent property of neural complexity that can exist to different degrees, and could, in principle, be measured as a number (for which they use the Greek letter phi). GWT-type consciousness, by contrast, is more of an all-or-nothing affair. Distinguishing between the two would be a big step forward for science. It would also have implications for how easy it might be to build a computer that was conscious.
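
IIT’s φ is notoriously hard to compute for anything as large as a brain, but the underlying intuition – that “integration” is a graded quantity you can put a number on – can be shown with a toy calculation. The Python sketch below is a rough illustration only, not IIT’s actual φ; the network, its parameters and the measure used (the “total correlation” of a simulated linear system) are all invented for the example. It asks how much information a small network carries as a whole, beyond what its units carry independently, and that number rises as the units become more interconnected.

import numpy as np

def total_correlation(cov):
    # Information the whole system carries beyond its parts taken
    # independently (total correlation of a zero-mean Gaussian system):
    # zero when the units are independent, larger as they grow interdependent.
    individual = np.sum(np.log(np.diag(cov)))
    joint = np.linalg.slogdet(cov)[1]
    return 0.5 * (individual - joint)

def network_covariance(coupling, n_units=8, n_steps=20000, seed=0):
    # Simulate a small linear network, x[t+1] = coupling * W x[t] + noise,
    # and return the covariance of its activity.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_units, n_units)) / np.sqrt(n_units)
    x = np.zeros(n_units)
    samples = []
    for _ in range(n_steps):
        x = coupling * (W @ x) + rng.standard_normal(n_units)
        samples.append(x.copy())
    return np.cov(np.asarray(samples).T)

for c in (0.0, 0.3, 0.6):  # from no interconnection to strong interconnection
    print(f"coupling {c:.1f}: integration = {total_correlation(network_covariance(c)):.3f}")

The numbers themselves mean nothing; the point is only that “how integrated” can, in principle, be expressed on a continuous scale – which is what allows IIT to speak of degrees of consciousness, and what GWT’s all-or-nothing picture does not require.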

The competition’s experiments will be conducted on 500 volunteers at six sites in America, Britain, China and the Netherlands. Three techniques will be used: functional magnetic-resonance imaging (fMRI), magnetoencephalography (MEG) and electrocorticography (ECoG). fMRI measures blood flow, which in turn relates to the level of activity in the part of the brain being examined (the more blood that is flowing through an area, the more active it is). MEG records fluctuating magnetic fields produced by electrical activity in the brain. Neither of these is invasive. ECoG, however, records electrical activity directly from the surface of the cerebral cortex. This part of the project will therefore rely on volunteers who are undergoing brain surgery for reasons, such as the treatment of epilepsy, that require the patient to remain conscious throughout the procedure. Half the data collected will be analysed immediately, by researchers independent of the protagonists, who have no axe to grind for either side. The other half will be locked away for future reference, in case confirmatory analyses need to be done.

In the spirit of adversarial collaboration, the two sides have hammered out a set of tests that both agree should produce different results, depending on which theory is correct. These depend on the fact that GWT predicts brain activity only when attention is actively being paid to something, whereas mere conscious awareness of something is enough for IIT to predict activity. The tests’ details vary (some involve stationary letters, objects or faces on a screen while others have shapes moving across the screen). In all of them, though, the distinction between attention and awareness is clear—and so, therefore, are the predictions.

Whatever emerges from the experiment will not be anywhere near a definitive explanation of consciousness. In particular, it will not address the “hard” problem of the phenomenon: the “feeling of what it is like to be something” that was raised in 1974 by Thomas Nagel, an American philosopher, in an essay titled “What is it like to be a bat?” It will, however, by providing what are known as neural correlates of conscious experience, point to directions in which future investigations might usefully travel.

Geoffrey Carr, The Economist (Nov 17th 2020)

Memo Burkeman

The feeling of being inside your head, looking out, or of having a soul.

One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness, by which he meant the feeling of being inside your head, looking out – or, to use the kind of language that might give a neuroscientist an aneurysm, of having a soul. Though he didn’t realise it at the time, the young Australian academic was about to ignite a war between philosophers and scientists, by drawing attention to a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it.

Chalmers called it the Hard Problem of Consciousness – and it’s this: why on earth should all those complicated brain processes feel like anything from the inside?

Why aren’t we just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life?

Philosophers had pondered the so-called “mind-body problem” for centuries.

What the hell is this that we’re dealing with here?

Attempts to solve it have driven serious thinkers to conclude that conscious sensations, such as pain, don’t really exist, no matter what I felt as I hopped in anguish around the kitchen; or, alternatively, that plants and trees must also be conscious.

In recent years, a handful of neuroscientists have come to believe that it may finally be about to be solved – but only if we are willing to accept the profoundly unsettling conclusion that computers or the internet might soon become conscious, too.

Cartesian dualism

Science had been vigorously attempting to ignore the problem of consciousness for a long time. The source of the animosity dates back to the 1600s, when René Descartes identified the dilemma that would tie scholars in knots for years to come. On the one hand, Descartes realised, nothing is more obvious and undeniable than the fact that you’re conscious. In theory, everything else you think you know about the world could be an elaborate illusion cooked up to deceive you – at this point, present-day writers invariably invoke The Matrix – but your consciousness itself can’t be illusory. On the other hand, this most certain and familiar of phenomena obeys none of the usual rules of science. It doesn’t seem to be physical. It can’t be observed, except from within, by the conscious person. It can’t even really be described. The mind, Descartes concluded, must be made of some special, immaterial stuff that didn’t abide by the laws of nature; it had been bequeathed to us by God.

Cartesian dualism remained the governing assumption into the 18th century and the early days of modern brain study. But it was always bound to grow unacceptable to an increasingly secular scientific establishment that took physicalism – the position that only physical things exist – as its most basic principle. And yet, even as neuroscience gathered pace in the 20th century, no convincing alternative explanation was forthcoming. So little by little, the topic became taboo.

Few people doubted that the brain and mind were very closely linked: if you question this, try stabbing your brain repeatedly with a kitchen knife, and see what happens to your consciousness. But how they were linked – or if they were somehow exactly the same thing – seemed a mystery best left to philosophers in their armchairs. As late as 1989, writing in the International Dictionary of Psychology, the British psychologist Stuart Sutherland could irascibly declare of consciousness that “it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.”

It was only in 1990 that Francis Crick, the joint discoverer of the double helix, used his position of eminence to break ranks. Neuroscience was far enough along by now, he declared in a slightly tetchy paper co-written with Christof Koch, that consciousness could no longer be ignored. “It is remarkable,” they began, “that most of the work in both cognitive science and the neurosciences makes no reference to consciousness” – partly, they suspected, “because most workers in these areas cannot see any useful way of approaching the problem”. They presented their own “sketch of a theory”, arguing that certain neurons, firing at certain frequencies, might somehow be the cause of our inner awareness – though it was not clear how.

Consciousness can’t just be made of ordinary physical atoms. So consciousness must, somehow, be something extra – an additional ingredient in nature.

It may be true that most of us, in our daily lives, think of consciousness as something over and above our physical being – as if your mind were “a chauffeur inside your own body”, to quote the spiritual author Alan Watts. But to accept this as a scientific principle would mean rewriting the laws of physics. Everything we know about the universe tells us that reality consists only of physical things.

If this non-physical mental stuff did exist, how could it cause physical things to happen – as when the feeling of pain causes me to jerk my fingers away from the saucepan’s edge?

Science has dropped tantalising hints that this spooky extra ingredient might be real.

Daniel Dennett, the high-profile atheist and professor at Tufts University outside Boston, argues that consciousness, as we think of it, is an illusion: there just isn’t anything in addition to the spongy stuff of the brain, and that spongy stuff doesn’t actually give rise to something called consciousness. Common sense may tell us there’s a subjective world of inner experience – but then common sense told us that the sun orbits the Earth, and that the world was flat. Consciousness, according to Dennett’s theory, is like a conjuring trick: the normal functioning of the brain just makes it look as if there is something non-physical going on. To look for a real, substantive thing called consciousness, Dennett argues, is as silly as insisting that characters in novels, such as Sherlock Holmes or Harry Potter, must be made up of a peculiar substance named “fictoplasm”; the idea is absurd and unnecessary, since the characters do not exist to begin with. To Dennett’s opponents, he is simply denying the existence of something everyone knows for certain.

For Dennett, the Hard Problem is nonsense, kept alive by philosophers who fear that science might be about to eliminate one of the puzzles that has kept them gainfully employed for years.

Just as biologists no longer need a special “life force” to explain living things – life is just the label we give to certain kinds of objects that can grow and reproduce – so, on this view, neuroscience will eventually show that consciousness is just brain states.

Solutions have regularly been floated: the literature is awash in references to “global workspace theory”, “ego tunnels”, “microtubules”, and speculation that quantum theory may provide a way forward. But the intractability of the arguments has caused some thinkers, such as Colin McGinn, to raise an intriguing if ultimately defeatist possibility: what if we’re just constitutionally incapable of ever solving the Hard Problem? After all, our brains evolved to help us solve down-to-earth problems of survival and reproduction; there is no particular reason to assume they should be capable of cracking every big philosophical puzzle.

On this view, there’s actually no mystery to why consciousness hasn’t been explained: it’s that humans aren’t up to the job.

Panpsychism

“Panpsychism” is the dizzying notion that everything in the universe might be conscious, or at least potentially conscious, or conscious when put into certain configurations.

If humans have it, and apes have it, and dogs and pigs probably have it, and maybe birds, too – well, where does it stop?

Physicists have no problem accepting that certain fundamental aspects of reality – such as space, mass, or electrical charge – just do exist. They can’t be explained as being the result of anything else. Explanations have to stop somewhere. The panpsychist hunch is that consciousness could be like that, too – and that if it is, there is no particular reason to assume that it only occurs in certain kinds of matter.

Anything at all could be conscious, providing that the information it contains is sufficiently interconnected and organised. The human brain certainly fits the bill; so do the brains of cats and dogs, though their consciousness probably doesn’t resemble ours. But in principle the same might apply to the internet, or a smartphone, or a thermostat.

Integrated information theory

Unlike the vast majority of musings on the Hard Problem, moreover, Tononi and Koch’s “integrated information theory” has actually been tested. A team of researchers led by Tononi has designed a device that stimulates the brain with electrical voltage, to measure how interconnected and organised – how “integrated” – its neural circuits are. Sure enough, when people fall into a deep sleep, or receive an injection of anaesthetic, the device demonstrates that their brain integration declines as they slip into unconsciousness. Among patients suffering “locked-in syndrome” – who are as conscious as the rest of us – levels of brain integration remain high; among patients in a coma – who aren’t – they don’t. Gather enough of this kind of evidence, Koch argues, and in theory you could take any device, measure the complexity of the information contained in it, then deduce whether or not it was conscious.
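
The measure Tononi’s collaborators have published for this purpose – the “perturbational complexity index” – reportedly boils down to perturbing the cortex, binarising the recorded response, and asking how hard the result is to compress. The Python fragment below is a minimal, illustrative stand-in for that last step only: a simple Lempel-Ziv-style phrase count, with made-up “flat” and “varied” inputs standing in for unconscious-like and awake-like responses. It is not the published index, but it shows the logic – a monotonous response compresses easily and scores low, a differentiated one does not.

import random

def lz_phrase_count(bits: str) -> int:
    # Count the phrases in an LZ78-style incremental parsing of a binary
    # string: each phrase is the shortest chunk not recorded as a phrase
    # before.  More phrases means harder to compress, i.e. more complex.
    phrases = set()
    i, k = 0, 1
    while i + k <= len(bits):
        chunk = bits[i:i + k]
        if chunk in phrases:
            k += 1               # seen before: extend the current chunk
        else:
            phrases.add(chunk)   # new phrase: record it and start the next
            i += k
            k = 1
    return len(phrases) + (1 if i < len(bits) else 0)

random.seed(0)
flat = "0" * 256                                            # monotonous response
varied = "".join(random.choice("01") for _ in range(256))   # differentiated response
print(lz_phrase_count(flat), lz_phrase_count(varied))       # low versus high

In the real measurements the same logic is applied to the brain’s recorded response to the stimulation rather than to made-up strings; the attraction is that it yields a single number that behaves exactly as described above, falling in deep sleep and anaesthesia and staying high in locked-in patients.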

But even if one were willing to accept the perplexing claim that a smartphone could be conscious, could you ever know that it was true? Surely only the smartphone itself could ever know that? Koch shrugged. “It’s like black holes,” he said. “I’ve never been in a black hole. Personally, I have no experience of black holes. But the theory [that predicts black holes] seems always to be true, so I tend to accept it.”

It would be poetic – albeit deeply frustrating – were it ultimately to prove that the one thing the human mind is incapable of comprehending is itself.

Oliver Burkeman, “Why can’t the world’s greatest minds solve the mystery of consciousness?”, The Guardian (2015)