Artificial Intelligence is a misnomer.
Does anybody have an idea of what human intelligence (the non-artificial one) actually is? Over here in Germany, Intelligenz is absolutely not the same thing as the intelligence represented by the capital I in CIA, for example. That CIA kind of intelligence was most probably the one meant to be artificially emulated in the 50s, 60s, 70s... And quite successfully so.
Today, AI is widely perceived to be an imitation, a parallel realization, or even a substitute of the human intelligence that supposedly sets us apart from every other rock. Remember the times when your science-inclined parents explained to you how the behavior of snails is nothing more than the output of a hardwired machine? Same for the canary, the turtle under your bed, and the dog. (It was hard to believe that the malevolent fits of my cat were hardwired and not real, genuine evil. But they were.) Intelligence was nicely reserved for us humans. That now appears to be getting questioned. The definition of intelligence was usually a list of things that anything lacking conversational equipment would not be able to do. This nicely excluded everything that didn't talk. Fish were out. Insects too, obviously! But on closer inspection it didn't really hold. And it becomes ever more clear that a real definition of intelligence - the human one with a 'z' in German - is sorely lacking.
Let's briefly dip into the hotly debated problem of bias in Large Language Models - which blew us laypeople away when they popped out of nowhere earlier last year. We all played with the most popular AI apps, and they are truly impressive (some users lamented the lack of accuracy when answering content-related questions, but that is not what those systems are built for anyway).
Sure enough, there is a widely documented problem when using those tools to sift through CVs or when asking for a suggestion of what to look for in a perfect candidate for an engineering position. The results unsurprisingly perpetuate clichés and stereotypes, as the underlying data is a Western-centered pile from a male-dominated past (past, we hope), filled with the everyday patterns of prejudice.
It is important to remain aware of this, but it is not at all surprising, let alone malevolent. The AI we amateurs tinker with is a system trained to find - and repeat - patterns. It is probably best compared to the 'intuition' or, yes, bias we as humans are endowed with. The pattern of driving a car is - prejudice, bias... somehow. We repeat patterns we were trained to recognize without using our reasoning brain, which is too slow and too close to consciousness. You realize that best when you apply this competence in the wrong environment. I remember the headaches after the first day of driving on streets in the UK. The trained patterns have to be overridden by reason, by intellect - maybe 'intelligence'. Or, to call Daniel Kahneman to help: the software we today know as AI (or large language models) emulates quite nicely the fast part of thinking, not the slow part. So while we are 'thinking fast and slow', we have to stay aware of the fact that the AI apps of 2023 are pattern-reproducing devices, bias-repeaters; they are thinking fast, but there is no reasoning.
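To make that concrete for fellow tinkerers: here is a deliberately silly little sketch in Python, with made-up data - nothing to do with how a real LLM actually works - of a pure pattern-repeater. It just counts which words co-occurred with a role in its skewed 'training' notes and hands back the majority pattern. No judgment, no reasoning, just repetition of whatever bias is baked into the data.

```python
# Toy illustration only: a "pattern repeater" over deliberately skewed,
# made-up data. Given a job title, it suggests whatever word most often
# accompanied that title in its notes - the majority pattern, nothing more.
from collections import Counter

# Hypothetical, intentionally biased "training data": past hiring notes.
training_notes = [
    ("engineer", "he built the prototype"),
    ("engineer", "he led the team"),
    ("engineer", "she reviewed the design"),
    ("nurse", "she cared for patients"),
    ("nurse", "she ran the night shift"),
]

def suggest(role: str) -> str:
    """Return the pronoun most often seen with this role in the notes."""
    pronouns = Counter(
        note.split()[0] for r, note in training_notes if r == role
    )
    return pronouns.most_common(1)[0][0]

print(suggest("engineer"))  # -> "he"  (the majority pattern, not a judgment)
print(suggest("nurse"))     # -> "she"
```

Scale that caricature up by a few billion parameters and a mountain of text and you get, very roughly, the fast-thinking pattern machine described above.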
Reasoning, the slow part of thinking, is left to us.
For now.