On June 22, 2010, David Gelernter presented his thoughts on Artificial Intelligence - the capability of computers to exhibit intelligent behaviour - in a talk given at the invitation of the American Academy and the FAZ in Berlin.
The title "Dream Logic, Software Minds, and the Poetry of Human Thought" gave a hint at what to expect. He went deep into his rather personal understanding of intelligence and consciousness.
Gelernter attempted a definition of 'thinking' (as opposed to the simulation of thinking) by deep introspection and analysis of his own thought processes. The result was a rather romantic, very anthropocentric praise of creativity, dreaming and intuition - qualities tightly connected to feelings, emotion and unpredictability, a collection of elements a computer arguably does not have. A thinking computer, he inferred, should 'know' or 'feel' that it is thinking - thereby connecting thinking to consciousness.
But is this the right approach?
David Gelernter rejects anything that smells like solipsism. "If I see an animal with a head and eyes, I simply assume that what is going on in my head is also going on in its head", he states in an interview with Berlin's "Der Tagesspiegel". His proof: common sense. Although this might be satisfactory for a contemporary proponent of a romantic universal poetry, we do in fact lack a definitive test for consciousness and always end up with cozy attributes like feelings, emotions, and awareness.
(See also: Der Tagesspiegel, "Selbstbewußtsein ist ein Fluch" ["Self-consciousness is a curse"], 27 June 2010)
The title "Dream Logic, Software Minds, and the Poetry of Human Thought" gave a hint at what to expect. He went deep into his rather personal understanding of intelligence and consciousness.
Gelernter attempted a definition of 'thinking' (as opposed to the simulation of thinking) by deep introspection and analysis of his thought-processes. The result was a rather romantic, very anthropocentric praise of creativity, dreaming and intuition. Something tightly connected to feelings, emotion and unpredictability - a collection of elements a computer does arguably not have. A thinking computer, he inferred, should 'know' or 'feel' that he is thinking - thereby connecting thinking to consciousness.
But is this the right approach?
David Gelernter rejects anything that smells like solipsism. "if I see an animal with a head and eyes, I simply assume that what is going on in my head is also going on in it's head", he states in an interview with Berlin's "Der Tagesspiegel". His proof is: common sense. Although this might be satisfactory for a contemporary proponent of a romantic universal poetry, we actually do lack the ultimate test for consciousness and always end up with cozy attributes like feelings, emotions, awareness.
(see also: Der Tagesspiegel "Selbstbewußtsein ist ein Fluch", 27.6.2010)
Comments
Anyway, as to the subject of artificial consciousness, I believe that any machine to which we would concede a kind of consciousness would at the same time stop being a machine.
Of course, intelligence is intimately linked to meaning, and meaning in turn is linked to many things, such as an action-oriented context and social interaction. But consciousness comes before intelligence; how would we test for consciousness? I propose that the most basic prerequisite for consciousness is twofold: (1) the "thing" in question shows behaviour, i.e. it lets us assume that it has its own purposes, and (2) this behaviour is to a certain degree unpredictable and recursive (i.e. the thing acts in diverse ways; it "learns" and "evolves"). A toy sketch below tries to make this twofold criterion concrete.
Of course, these concepts leave an enormous space for interpretation. But that is in the nature of the subject. Intelligence is to a large degree a normative and not just a descriptive attribute.
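As a purely illustrative aid, here is a minimal Python sketch of the twofold prerequisite proposed above. The class ToyAgent and all of its parameters are hypothetical inventions for this comment, not anything from Gelernter's talk: an object with its own "purposes" (goal weights it acts on) whose behaviour is stochastic and feeds back into itself, so that it drifts and "learns" over time.

```python
import random

class ToyAgent:
    """Hypothetical illustration of the twofold prerequisite:
    (1) behaviour that suggests the agent has its own purposes, and
    (2) behaviour that is unpredictable and recursive (it learns/evolves)."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        # (1) The agent's own "purposes": goal weights it acts on.
        self.preferences = {"explore": 1.0, "exploit": 1.0}

    def act(self):
        # (2a) Unpredictable: the action is drawn stochastically,
        # weighted by the agent's current preferences.
        actions = list(self.preferences)
        weights = list(self.preferences.values())
        action = self.rng.choices(actions, weights=weights)[0]
        # (2b) Recursive: each action feeds back into future behaviour.
        self.preferences[action] += 0.2
        return action

agent = ToyAgent()
# Diverse behaviour that drifts as the agent "learns" from its own actions.
print([agent.act() for _ in range(12)])
```

Nothing here is conscious, of course; the sketch only shows how loosely the two conditions constrain the underlying mechanism - which is exactly the interpretive space mentioned above.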