Feelings and minds are the same

In debates and popular accounts of artificial intelligence (AI) and the direction in which it is supposedly developing, computers and software are often set in analogy to the human brain and consciousness, because it sounds so vivid and plausible. The mind, the human soul, is merely software installed on the brain, that natural computing platform. Alongside hardware and software, one even speaks of "wetware". One thesis born of this metaphor runs: at some point computers will be as intelligent as humans, and then more intelligent. They will also gain consciousness. David Gelernter is one of the scientists who oppose this.

Would the first ultra-intelligent machine relieve humanity of any further invention?

The consciousness thesis is justified by the claim that computers can calculate like brains, only much faster, and that their software is being refined ever further, so that computers will first learn like schoolchildren and eventually educate themselves. Computers could soon make themselves smarter and smarter. If knowledge is power, then knowledge will eventually become computing power. And that power will be more powerful than we are. Something like that.

In this overwhelming dystopia, a term from systems theory appears regularly: one speaks of the singularity, meaning that moment at which AI will have trained itself to such an extent that humans become slaves of this synthetic mind, which they no longer create but to which they are simply beholden.

The singularity thesis is not all that new. It emerged at the latest with the advent of computer science after the end of the Second World War. As early as 1958, the mathematician John von Neumann suspected that progress "could result in a decisive singularity in the history of mankind, after which living conditions as we know them could not continue". The statistician Irving John Good, who helped decipher the German rotor cipher machine Enigma during World War II, was more explicit in 1965: "An ultra-intelligent machine could build even better machines; there would undoubtedly be an explosive development of intelligence, and human intelligence would lag far behind. The first ultra-intelligent machine is the last invention that man need ever make."

Since the late 1980s, Vernor Vinge and Ray Kurzweil have never tired of arguing that "within 30 years we will have the technological means to create superhuman intelligence. A little later, the human era will come to an end", says Vinge; Kurzweil speaks of "technological change" that is "so rapid and all-encompassing that it represents a rupture in the fabric of human history". And when a superbrain like Stephen Hawking says that the apocalypse will not be atomic but technological, that we may be destroyed not by war but by computers, shouldn't we be scared?

In any case, the AI metaphor has become so powerful that science fiction writers and special-effects people have taken a liking to it. They now let the horror of an enslaved humanity, in the form of super-smart super-robots with a metallic sheen, stomp through the moving images of cinema and computer games. But, regardless of the suggestive power of the images, is any of it actually true? Will machines one day be built that have attained mind and intelligence of their own and now act as autonomously as they were designed to? And even worse: will it then have to be conceded that these wise tin heads must be granted human rights, because they do have consciousness, even a soul?

David Gelernter has written a book that argues cleverly and with rhetorical skill against these fantasies of extinction: "Tides of the Spirit: The Measurement of Our Consciousness" (as the German edition is titled). But only at first glance does it raise an objection to the singularity debate outlined above. For Gelernter does not argue against the computer, but in favor of the human mind. There is a difference. His reasoning reverses that of the singularists, so to speak. He does not ask: what does the development of technical intelligence mean for humans? Rather: what does human intelligence have, what do people have, that machines simply do not have and can never have? His answer is strikingly plain: themselves. "Only we know what is going on within ourselves; that cannot be measured from the outside."

Gelernter's discussion of what constitutes consciousness is more phenomenological than technological, more philosophy than engineering. And that is why the German translators were ill-advised to give the book the subtitle "The Measurement of Our Consciousness". The American original reads: "Uncovering the Spectrum of Consciousness". After all, Gelernter is not concerned with measurement, that is, with translating consciousness into the terminology of technology, but with discovery, and with the spectrum. That consciousness represents a spectrum and not a state makes, for him, the difference between the use of language and its understanding, between signs and what they designate, between text and message.

Consciousness is not a state, writes David Gelernter, but a spectrum

Now one could say: a learned representative of the humanities and social sciences is attempting the kind of interpretation of technology that is typical of those faculties. We have seen it all before. But Gelernter is not a humanities scholar; he is a computer scientist, a professor of computer science at Yale, a man who was involved in patent litigation with Apple exceeding half a billion dollars in claims. He is a contentious author and columnist on issues such as education. And he is a victim, seriously injured in 1993 by a letter bomb from the Unabomber. Yet Gelernter is neither a philosopher nor a literary scholar, even if "Tides of Mind" roams freely through philosophy and literature and draws on them as if they were laboratory reports. So how does David Gelernter argue, with Freud, Shakespeare, J. M. Coetzee, Wordsworth and John Searle as his sources?

First: he rehabilitates psychoanalysis. And introspection. For Gelernter, mind can only be thought of as a "spectrum" that ranges from highly focused concentration through daydreaming to the unspecific dreaming of sleep. In this respect, consciousness extends from perception of the outside world and intentional action to introspection and the images of dreams. All cognitive and mental processes, concentrated thoughts, feelings, reason, dreams and memories, settle within an individual spectrum and depend on one's physical and psychological state, and on one's daily form. In this way, "mind" becomes a function in and with time: it does not simply "exist", but varies along its spectrum, whose states Gelernter arranges vertically: "above" he places logic and reason, "below" memories and dreams. This, including its seemingly random correlates, constitutes for Gelernter the human mind as a whole, which cannot be reduced to the logically rational functioning of the brain.

As William Wordsworth wrote, man should become "nature's prophet" again

"Feeling," he writes, "is an immensely powerful medium for summaries, because it can summarize everything: people, places, novels, historical epochs. Far more important, one can also use emotions to uncover the common essence of any two things, no matter how fundamentally different they seem. We can find the common essence of an aircraft carrier and a bowl of oatmeal, provided it exists."

The perspective under which David Gelernter brings the mind back into view develops its dynamic in a discourse that, as the book progresses, moves ever closer to hermeneutics, neurology and psychology, that is, toward the territory of the humanities. It pushes away from mere counter-speech and the simple negation of strong AI.

Gelernter knows this, and that is why, in the end, despite his own reservations about blind adherence to Freud, he also pleads for a return to depth psychology, so that man, with William Wordsworth, can again become a "prophet of nature" and show "how the mind of man becomes a thousand times more beautiful than this earth on which he dwells". As long as there is no computer that understands this Wordsworth plea, one is prepared, until further notice, to follow David Gelernter's rejection of silicon intelligence.