Repeating the mistakes of AI past
With the current news cycle dominated by all things ChatGPT and AI, I thought it would be a good idea to revisit a book I first read in 1997 or 1998, around the time it was released.
The book is “HAL’s Legacy: 2001’s Computer as Dream and Reality”, published by MIT Press in 1997 and edited by David G. Stork. You may find a secondhand hard copy, but no ebook exists as far as I can tell.
It contains a foreword by one of my favourite science fiction authors, Arthur C. Clarke, who, as you will no doubt know, wrote the original novel 2001: A Space Odyssey.
This paragraph in Chapter 1 struck me:
Marvin Minsky (who, incidentally, nearly lost his life consulting on 2001) argues (in chapter 2) that the field made such good progress in its early days that researchers became overconfident and moved on prematurely to more immediate or practical problems—for example, chess and speech recognition. They left undone the central work of understanding the general computational principles—learning, reasoning, and creativity—that underlie intelligence. Without these, he believes, we will end up with a growing collection of dumb experts and will never achieve AI.
I can’t help thinking about the parallels with this new rush of AI deployment; it feels as though we are falling into the same trap once again. The various ChatX machine learning systems impress on one level yet are simultaneously brutally stupid and overconfident on another. The risks are many, some benign and others too frightening to ponder.
What is it that we say about the past and the future?
“Those who cannot remember the past are condemned to repeat it.” - George Santayana
6 April 2023 — French West Indies