At one point in my career, I stood at a crossroads, pondering which direction to take. Should I pursue a scholarly life as a computer scientist? Or should I venture into the corporate world of business, money, and endless meetings?
I seriously considered both options, and ultimately decided to get a Master of Business Administration in IT and a nine-to-five job as a software engineer (which, by the way, I no longer do). Sometimes I still wonder what my life would be like had I chosen the other route. I probably would have gone to graduate school in computer science with a focus on human-computer interaction. My thesis would have been, “Can computers make art?”
That would have been a rhetorical question, of course. Computers can make art, and my research would have shown exactly how.
You see, I spent most of my undergraduate days taking computer science and math classes while learning to program computers. On the side, I would take elective courses on ancient history and sneak off to the art studios to paint for hours at a time. For me, technology went hand in hand with history, and logic was somehow related to creativity.
Back then, almost twenty years ago, the future of computing seemed to be in human-computer interaction, grid computing, and neural networks. And that has remained true, though those fields have morphed somewhat into what we know today as user experience design (UX design), cloud computing, and artificial intelligence (AI).
At this point, UX design and cloud computing are mature technologies with clear applications. AI, however, is only now coming into its own.
But what exactly is AI? Where does it fit in the grand scheme of technology? I don’t think we really understand what AI is, or its place, just yet. In fact, I think AI is overrated.
***
When I was a teenager back in the 1990s, I used to love going to the mall. At that time, the internet was a brand-new technology outside of the military and educational institutions. Almost nobody had a cellphone, and social media was nonexistent. The closest thing to social media was the mall.
At the mall, you could see demonstrations of the latest in consumer technology. You had arcades showcasing the most advanced computer graphics. You had toy stores with high-tech robotic toys. Holograms were all the rage. And there were virtual reality kiosks, where you could pay $15 to wear a virtual reality headset and explore a pixelated, blocky virtual world for five minutes.
In the ’90s, we all thought that by the year 2020 everyone would be spending hours each day in a virtual reality headset, exploring other worlds (when they weren’t outside riding their hoverboards). In the ’90s, virtual reality was overrated.
***
As I get older, I realize more and more that most hyped things are overrated. We overrate things by either extrapolating (if this trend continues…) or thinking of possibilities that never materialize (with this technology, I can finally…). And sometimes the technology shows great potential, but underwhelms when it finally arrives (virtual reality headsets, I’m looking at you). I’m afraid AI may fall into one or more of these traps.
The expectations for AI are simply too high. The culprit may be in the name. “Artificial Intelligence” implies too much. When you think of “Artificial Intelligence,” you think of an artificial brain that works just like a human brain, but much faster. You think of a sentient computer that has feelings and never makes mistakes and comes up with optimal ideas. You assume that it’s “intelligent” because that word is in the name.
But look under the hood of current AI systems and you see pretty much the same thing you see in any computer program. There’s step-by-step code implementing algorithms to tell the computer what to do. There’s nothing special about the processors and other hardware running the programs. The developers of the AI don’t need any special equipment to create the AI software. In fact, if you have a computer and know how to program it, you can create your own AI.
That leads back to the all-important question: what is AI, anyway?
***
If you ask a computer scientist “What is AI?”, you might get a response so complex you walk away convinced the machines will destroy us eventually. But take a big step back and look at the big picture: an AI is just a software program, like Microsoft Windows, Adobe Acrobat, or Angry Birds on your phone.
The difference between a regular app and an AI app is that the AI takes training data as input and outputs synthesized data in a new form. For example, I can feed an AI the entire text of Moby Dick as training input, and it might output a new story based on what it “learned” from processing the text of Moby Dick.
Looking at AI through this lens, I can say that I wrote an AI application back in 1998. It was a Windows program called NameGen. I would “train” it with a text file of names as input. It would break the names down to “learn” their style, then generate new names (you can read about it and try it out here).
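The core idea fits in a few lines. Here’s a minimal sketch in Python of the same technique (the training names, the two-letter window, and the function names are all illustrative, not NameGen’s actual source): it records which letter tends to follow each pair of letters in the training names, then walks those statistics to spin up new names.

```python
import random
from collections import defaultdict

def train(names, order=2):
    """Record which letter follows each `order`-letter chunk in the training names."""
    transitions = defaultdict(list)
    for name in names:
        padded = "^" * order + name.lower() + "$"  # ^ marks the start, $ the end
        for i in range(len(padded) - order):
            transitions[padded[i:i + order]].append(padded[i + order])
    return transitions

def generate(transitions, order=2, max_len=12):
    """Walk the learned transitions to synthesize a new name in the same style."""
    prefix, letters = "^" * order, []
    while len(letters) < max_len:
        nxt = random.choice(transitions[prefix])
        if nxt == "$":  # the model decided the name is finished
            break
        letters.append(nxt)
        prefix = prefix[1:] + nxt
    return "".join(letters).capitalize()

model = train(["Aldric", "Brunhild", "Cedric", "Derwin", "Elspeth"])
print(generate(model))  # e.g. "Celdric": a new name in the old style
```

Tiny as it is, it has the same shape as any generative model: training data in, synthesized data out.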
Is NameGen an AI? I think so. It fits the simplest, most basic definition. But it’s probably not what you would think of as an AI.
How about something a bit more famous, something more people would agree is an AI? Let’s talk about IBM’s chess-playing AI, Deep Blue.
IBM’s Deep Blue grew out of a chess-computing project begun in the 1980s, and its builders fed it hundreds of thousands of grandmaster chess games as training data. On February 10, 1996, Deep Blue became the first machine to win a game against a reigning world champion, Garry Kasparov; Kasparov went on to win that match, but Deep Blue won the rematch in May 1997. Whenever Garry made a move, Deep Blue’s operators would enter the board position into the computer, and the computer would output its next move. Note that Deep Blue behaved just like any other computer program: an operator gives it input, it processes its data, and it outputs a result. At the end of the match, Deep Blue felt no emotions and had no thoughts about becoming the new world chess champion. Even though Deep Blue was an artificial intelligence, it was not a sentient being. It was just another computer program written by programmers. In fact, Garry Kasparov once said that Deep Blue was “as intelligent as your alarm clock.” And he was right.
For AI to meet expectations, it would have to show intelligence higher than that of an alarm clock. To my mind, that means crossing three thresholds:

1. An AI cannot require a human operator to “operate” it.
2. An AI cannot require humans to design and create new AIs.
3. A true AI should not need human-created training data; it should be able to train itself on data created by other AIs.
It’s only once AI can cross these three thresholds that it will separate itself from mundane software applications. We are far from crossing these thresholds. In fact, we may never cross them. That’s why I predict AI will underperform expectations.
As I mentioned earlier, it might be the name “artificial intelligence” that creates these unrealistic expectations. What if AI were instead called “logic-enhanced decision making” or “rational predictive processes?” Those names don’t sound as cool. If we had called AI technology one of those names, we probably wouldn’t have such high expectations of it.
***
If I had chosen to pursue a PhD in computer science all those years ago, I might be researching and developing artificial intelligence today. My thesis, “Can computers make art?”, would have been rooted in something I still believe strongly today: {randomness * constraints = art}.
It is a fallacy to believe that art is created out of freedom of expression. Yes, perhaps bad art is created without rules, but if you want to create good art, you need to follow rules. Every artistic discipline has rules. In visual art, there is composition, color relationships, balance, and a host of other rules. Writers have the three-act structure, the hero’s journey, tropes, and many others. To create good art, you need to follow the time-tested rules of your art. It’s only after you master the rules that you know how to break them.
After constraints, you need a touch of randomness. Strictly following constraints without randomness yields boring, predictable results. But add a pinch of randomness, and you get something more interesting… something that might be called artistic. It’s like finding a yellow flower growing in the middle of a freshly paved asphalt road. The yellow flower doesn’t make any sense; it’s so random, and so creative.
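To make the formula concrete, here’s a toy sketch in Python (the scale, the stepwise rule, and the leap_chance parameter are stand-ins I invented for illustration, not any real compositional law): the constraints keep every note in C major and moving by step, and the randomness occasionally throws in a leap, the musical equivalent of the yellow flower.

```python
import random

SCALE = ["C", "D", "E", "F", "G", "A", "B"]  # constraint: stay in C major

def melody(length=16, leap_chance=0.15):
    idx, notes = 0, []
    for _ in range(length):
        notes.append(SCALE[idx % len(SCALE)])
        if random.random() < leap_chance:
            idx += random.choice([-4, 4])  # randomness: the occasional leap
        else:
            idx += random.choice([-1, 1])  # constraint: mostly stepwise motion
    return notes

print(" ".join(melody()))
```

Set leap_chance to zero and you get a dull scale exercise; crank it up and you get noise. The interesting results live in between.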
Computers are expected to be logical. If you build an artificial intelligence that is purely logical, it will feel like a computer. But add some randomness, some things that don’t quite make sense, and you might start to think it’s sentient.
In one of the games Garry Kasparov played against Deep Blue, a bug in the software produced a seemingly random move, the kind of desperate move only a human would make. The move perplexed Kasparov and threw him off his game for the rest of the match. He attributed it to “superior intelligence” on Deep Blue’s part, when in fact it was randomness. Deep Blue’s mistake made it seem human to Kasparov, and as the saying goes, mistakes are what make us human.
Perhaps by introducing some randomness, illogical thinking, and “mistakes” into our AI systems, we can make them seem human. I don’t believe computers will ever be sentient. The thresholds required for that may well be unattainable. But by adding some randomness and constraints, we can create the illusion of sentience and creativity.
Artificial intelligence will never live up to its expectations, but that doesn’t mean it won’t be immensely useful to us. Like all technological innovations, AI will be another tool for us to use. AI will help us make difficult decisions. AI will make easy decisions for us when we don’t have the time to be bothered with them. AI will prompt us with random ideas to spark our own human creativity. AI will assist us in our work. AI will talk to us. Artificial Intelligence will do all of these things and more, but it will never replace Human Intelligence. Not by a long shot.