The Science-Fantasy of Artificial Intelligence, part I

by Jennifer Rezny
on 19 February 2016

Over a decade ago, as a high school student, I wrote a paper entitled "Technological Singularity and the Tomorrow-land of Today."

Cheesy, I know.


While my high school was renowned for its focus on essay-writing and I have always had an aptitude for writing, I can pinpoint this essay as one of the first I really committed to. Academic papers in high school are often an obligation rather than a truly personal piece of writing, but technological singularity tickled me. Within a week of it being assigned, I was on my second full draft, and I turned it in a full week early. I was rewarded with a ludicrous 110%, the paper was declared to be university-level, and my media studies teacher arranged its publication. At the time, I was fiercely proud. Now, that reaction seems as exaggerated as the comparison between technological singularity and the proliferation of continental American marsupials. (And, as I would find out some years later, university-level still doesn't necessarily mean good.)

I will spare you any droll quotes and summarize my argument thus:

Despite being science fiction, the ideas about robot intelligence and autonomy presented in the films I, Robot (groan) and The Matrix (double groan) reflect a potential future wherein artificial intelligence undergoes a technological singularity. This singularity would be the point at which human intelligence creates a machine that surpasses its creator in intelligence and function, and that machine then goes on to do the same, setting off a chain of ever-improving machines that would slowly eradicate their progenitors.

Where do marsupials fit into this? In 2000, Hans Moravec published a book entitled Robot: Mere Machine to Transcendent Mind. In this book, in a passage called "The Short Run", Moravec describes how North American placental species eliminated South American marsupial species over a period of only a few thousand years, millions of years ago. Moravec compares this to the possibility of robots overtaking, incorporating, or eliminating humans.

A decade ago, this idea thrilled and terrified me; it is one of the clearest recollections I have of feeling real frisson. But I now work in software development, a world where everything is supposed to come down to cool machine logic. I no longer believe in artificial intelligence, and I am profoundly certain that technological singularities are a fiction. (That said, I have been certain before.)

The glaring problem I see with this now is that technology is made by humans, and it is not nearly so self-determined as the animal kingdom. Technology does not just breed the way Tasmanian tigers once did: it takes deliberate human choices to reproduce it and "evolve" it, and while from a distance that might look like an evolutionary tree, up close it is still humans doing the selecting. Evolutionary traits in the natural world are a matter of luck and survival. What impetus does a machine have to "evolve" without human-set priorities and parameters? What imperfect machine can correct its own problems?

So I come to this point, as a QA analyst looking at the programs I am assigned to test, where I can’t fathom machines surviving their own flaws, let alone surviving long enough to overcome them. In a simple application designed to book conference rooms within a professional campus, I could log hundreds of bugs over various iterations of the program, and each round of bug-squashing would incrementally create a better application. A handful of developers pour their intelligence into creating this thing, and I pour my intelligence into refining it, but when does the software become intelligent?
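To make that concrete, here is a minimal sketch of the kind of flaw a room-booking program can ship with. The RoomBooker class and its overlap check are my own invention for illustration, not the actual application I tested: the check only rejects requests that fall entirely inside an existing booking, so a request that merely overlaps one slips through. Nothing in the program can notice this on its own; a human has to find the bug, and a human has to fix it.

from datetime import datetime

class RoomBooker:
    """Toy conference-room booker, invented purely for illustration."""

    def __init__(self):
        self.bookings = []  # list of (start, end) tuples for a single room

    def is_free(self, start, end):
        for booked_start, booked_end in self.bookings:
            # BUG: this only rejects requests contained entirely within an
            # existing booking; a request that merely overlaps one is accepted.
            if booked_start <= start and end <= booked_end:
                return False
        return True

    def book(self, start, end):
        if self.is_free(start, end):
            self.bookings.append((start, end))
            return True
        return False

booker = RoomBooker()
booker.book(datetime(2016, 2, 19, 9, 0), datetime(2016, 2, 19, 10, 0))

# Overlaps the 9:00-10:00 meeting, but the faulty check happily accepts it.
print(booker.book(datetime(2016, 2, 19, 9, 30), datetime(2016, 2, 19, 10, 30)))  # True

The fix is a one-line change, but someone has to decide that it is a fix: the software has no notion that a double-booked room is wrong until a person encodes that.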

I can’t fathom the start of an artificial intelligence, or where it would even begin. What is the scope of such a project? How does one summarize the functions of the human brain for programming when, both philosophically and scientifically, what constitutes intelligence is still the subject of fierce debate? Thousands of years in the future, will we have an understanding of the human brain that could allow us to imagine what building an artificial intelligence might require, or what functions it might serve? And if we do, would it be possible to begin right away, or would it still be largely theoretical?

When I present artificial intelligence as something still very much consigned to science fiction and science fantasy, many people point out some of the steps forward we have already made, such as Cleverbot or the now-defunct SmarterChild.

Here is a recent conversation with Cleverbot:

Me: I taught my parrot to whistle. Do you think the parrot knows what it is doing?

Cleverbot: I don't know that there's a point to anything.

Me: So you don't see the value in teaching parrots tricks?

Cleverbot: No, I do not have the stone. Was it a murder weapon?

This is the best we have. It has been almost two decades since Cleverbot's first release, and it still sounds only approximately human. It would be difficult, nigh impossible, for anyone with any experience or familiarity with programming to mistake this for anything remotely approaching a replication of human intelligence, even if it can replicate human conversation decently.

It’s a start, but it is not intelligence, and it certainly doesn’t address bugs. Next week, in Part II, we will tackle indoor navigation and the machine's perspective.