The Science-Fantasy of Artificial Intelligence, part III

by Jennifer Rezny
on 03 March 2016

Last time on the blog, we poked around indoor navigation -- a subject near and dear to my expertise, given that I once spent eight months straight poring over all the ways one could navigate various luxury malls in Australia, the United States and the U.K. (Next time you're in a major mall and encounter a touch kiosk that will create paths for you, you may think of this blog.)

When testing text directions, I often find little snags for the developers to smooth over. A slight veer in the path might prompt the text directions to instruct a "turn", so a new rule is added to account for degree variations in path directions. A ride up an elevator might attempt to contextualize you when your end destination is right in front of you, so a new rule removes a contextual direction if the end destination is within a certain distance of a people mover. This kind of refinement represents hundreds of cumulative hours across development, testing, sales and client support, and it ultimately creates a stronger product. It looks quite clever, but it is still the product of human intelligence -- light-years away from a standalone artificial intelligence.
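To make the "new rule" concrete: the turn-threshold fix described above might look something like the sketch below. This is a hypothetical minimal version, not our actual product code; the threshold value and function names are invented for illustration.

```python
import math

TURN_THRESHOLD_DEG = 30  # hypothetical cutoff: veers smaller than this are not "turns"

def heading(a, b):
    """Heading in degrees from point a to point b (x-east, y-north)."""
    return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

def directions(path):
    """Emit a turn instruction only when the path bends sharply enough.

    Without the threshold rule, every slight veer in the path geometry
    would produce a spurious "turn" instruction -- the exact snag the
    paragraph above describes.
    """
    steps = []
    for prev, cur, nxt in zip(path, path[1:], path[2:]):
        delta = heading(cur, nxt) - heading(prev, cur)
        delta = (delta + 180) % 360 - 180  # normalize to [-180, 180)
        if abs(delta) >= TURN_THRESHOLD_DEG:
            steps.append("turn left" if delta > 0 else "turn right")
        else:
            steps.append("continue straight")
    return steps
```

A path that drifts one metre over ten yields "continue straight", while a right-angle bend still yields a proper turn instruction -- one small rule, one class of snag smoothed over.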

I use this to illustrate why artificial intelligence becomes something of a castle in the air: artificial intelligence is not even in its infancy. It seems clever until you've poked around the wiring and the rules yourself and seen how adding rules to code creates something that merely seems to consider every angle. From here on the ground, genuine artificial intelligence doesn't seem particularly probable. Yes, Leonardo da Vinci dreamed of flying machines five hundred years ago, and today we can fly from one side of the world to the other in little more than half a day, but that is incredibly simple compared to a machine capable of individual intelligent thought -- enough to involve sentience, critical thinking and spatial awareness. Just shy of 3000 years ago, Homer wrote about Hephaestus's self-built golden handmaidens who helped him at his forge [Iliad, 18.417], and that idea of a robot has still only manifested in two ways: automated factory plants (far from human in both form and function, and still requiring plenty of human direction) and experimental humanoid robots, which barely scrape the surface of the human form. (Uncanny valley, anyone?)

While we at least have something tangible over the Greeks, 3000 years is a long gap in which to have accomplished only the most rudimentary aspects of what Homer envisioned. Yes, a technological singularity would be a feedback loop in which progress accelerates and development speeds grow exponentially, but I propose we aren't nearly as far along as we believe we are. It's hubristic to believe we are at the tipping point of artificial intelligence rather than just barely dipping our toes in the water. Hephaestus, after all, was a god; with magic and imagination on his side, of course he had sentient golden maidens.

That belief also overlooks the true roadblock to bringing any technological singularity to life: most imaginings of artificial intelligence, from Hephaestus to the modern day, notably exclude testing.

I've written about testing before, but in summary: testing is the process by which we develop trustworthy, reliable products that are free of defects and coding errors that cause undesirable behaviors. If testing does not occur, then products undoubtedly ship with bugs that render them nigh unusable at one point or another. Testing leads to greater quality, broader user acceptance, higher user satisfaction and, all around, better products.
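In practice, much of this looks humbler than it sounds: small automated checks that pin down a rule so it can never silently regress. As a hedged illustration (the function, its name and the five-metre threshold are all invented for this sketch, echoing the elevator snag from earlier), a regression test might look like this:

```python
def suppress_arrival_context(steps, dist_to_destination_m, threshold_m=5.0):
    """Hypothetical rule: drop the final contextual direction when the
    destination is already within threshold_m metres of the traveller."""
    if dist_to_destination_m <= threshold_m and steps and steps[-1].startswith("you will see"):
        return steps[:-1]
    return steps

def test_context_suppressed_near_destination():
    # The destination is right in front of you; no need to be told you'll see it.
    steps = ["take the elevator to level 2", "you will see the food court"]
    assert suppress_arrival_context(steps, dist_to_destination_m=3.0) == steps[:-1]

def test_context_kept_when_far():
    # Far from the destination, the contextual direction stays.
    steps = ["take the elevator to level 2", "you will see the food court"]
    assert suppress_arrival_context(steps, dist_to_destination_m=40.0) == steps
```

Once a test like this exists, the rule is locked in: any future change that reintroduces the redundant direction fails the suite before it ever ships.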

And yet we never see testing in sci-fi media. In my mind, Will Smith's Del Spooner of I, Robot (2004) didn't actually need to catch a robot in the act of malfunctioning to prove that robots had the potential to kill humans, whether deliberately or through negligence; he probably could have just broken into the USR research and development lab and shaken down a few QA analysts who knew about the risks but hadn't won out over the sales department or the ship date. After all, in the real world, Toyota's Prius has been recalled numerous times due to brake software (and hardware) issues, and a software issue in a car's brakes is simple compared to a bipedal humanoid robot with some form of artificial intelligence. The idea that any company could create a functioning AI without testing to get it to that point is absurd precisely because advancement is not possible without testing.

If humans were to create an artificial intelligence that rivals their own, it would have to be tested rigorously to ensure that it is actually functional and not utterly riddled with bugs -- and it will have snags regardless, obviously, if the best and brightest of the tech industry still cannot prevent tragedies such as the Space Shuttle Challenger disaster, despite NASA having four testers to every developer. Introducing the AI of science fiction to the general population could have much larger consequences (and potential tragedies) than sending a dozen people into space. As such, it would require our best and brightest to manually test an artificial intelligence that rivals human intelligence -- and this is still supposing we have finally found a singular, non-transitive but still representative concept of human intelligence to replicate. And since we could not possibly manually test such a thing, largely because of the limits of time and labour and the sheer monstrous scope of the project, we would also need to develop testing tools to automate a degree of this testing. Automation would come with the caveat that the testing tools (which must be sophisticated enough to test artificial intelligence) must also be reasonably artificially intelligent themselves (while still being tools), and must also be bug-free.

Bugs are caused largely by human error, and amplified considerably by scale and complexity. We are then tasked with creating perfection, arguably while having an imperfect and ever-changing understanding of ourselves and of what intelligence is.

You see the problem: to cause a technological singularity, humankind would need not only to invent numerous forms of artificial intelligence, but also to create one that negates human error almost completely -- an artificial intelligence that could recognize error and develop a means to correct it.

In the end, I'm not opposed to artificial intelligence, or even to attempts to create a facsimile of it. I suppose I just see it more like ghosts than an inevitability of techno-evolution: a fascinating subject worth exploring in fiction and philosophical works, but one whose potential existence is impossible to believe in without compelling evidence. We can dream, and we can aspire towards AI, but the technological singularity is most likely squarely in the realm of science-fantasy.

That, and I struggle to operate an unfamiliar toaster without burning my toast. (I've developed a Pavlovian dislike of toast because of it.) I have no interest in fending off B1-66ER or Sonny when they decide that a reset/reboot constitutes killing them and thus requires defense with lethal force. (Would they understand that my soft flesh can't be put back together like theirs?)

Next week: Neil deGrasse Tyson and how science-fiction warps our perspective of AI.