
Notes from the Executive Director – March 2018

ar•ti•fi•cial : humanly contrived often on a natural model : MAN-MADE

in•tel•li•gence : (1): the ability to learn or understand or to deal with new or trying situations; (2): the ability to apply knowledge as measured by objective criteria

When you think about artificial anything, what do you usually think about? Artificial sweetener, with its slightly metallic taste instead of pure, sweet sugar? Artificial flowers instead of plants with scent and beauty? Artificial respiration instead of breathing on one’s own? In any of these cases, would you argue that artificial is superior to the real thing? Yet we consistently seem to think that artificial intelligence (AI) and artificial reality are either far better than the real thing or vastly more powerful and potentially dangerous. The former view is often assumed by technological optimists who believe we are on the cusp of a golden age because of machine learning and computers that think. The latter is more darkly uttered by technological pessimists who warn about a coming robopocalypse and the disappearance of jobs and work. Ben Johnson, adult services manager at Council Bluffs Public Library, voices both the optimistic and the pessimistic views in his piece, “Libraries in the Age of Artificial Intelligence”, published in the January/February 2018 issue of Computers in Libraries.

Often, as Johnson does, people use “AI” interchangeably with “machine learning”. Both terms are flawed. By using words such as “intelligence” and “learning”, we imply that machines have human characteristics. We tack on “artificial” and “machine” to modify the nouns that follow, but we still think our computing machines are imbued with the very abilities that make us human. In actuality, we are using a metaphor to describe what happens in the black boxes that are modern computers. The machine appears to be doing something intelligent when we ask Siri for directions to the closest Starbucks or how to translate “Hoe gaat het met jou?” from Dutch to English, so we tend to think the computer is smart or intelligent. In the meantime, we have all but forgotten that we are using a metaphor and that there is no real intelligence within the black box. It’s just an algorithm reading a very large dataset very fast. Impressive? Yes. Intelligent? No.

In a delightful passage in his book, In Our Own Image, George Zarkadakis traces the ways that humans have always given inanimate objects human capabilities. It started with the very earliest humans and continued through hunter-gatherer societies, when the entire world seemed alive. Idols and totems were ascribed abilities to control nature, win wars, and conquer disease. Today we do the same thing; we just call our totems “computers”.

But computers are different. Aren’t they?

In his article, Johnson says, “The algorithms that give rise to machine learning are mostly kept secret, and the code that results from machine learning is often so complex that even the human developers don’t understand exactly how their code works.” Perhaps. But it should be noted that today’s AI stems from a 32-year-old paper by Geoffrey Hinton, David Rumelhart, and Ronald Williams. The paper described a technique called backpropagation. Backprop, as it’s known, is “what all of deep learning is based on – literally everything.”[1]

So rather than something brand new and mysterious, AI is really built on a conceptual model developed in the 1980s. It just took a long time for computers to become powerful enough, and data sets big enough, for backprop to work. This is not to negate the sometimes amazing things that we can do now with our computers. But to suggest, as Johnson does, that computers actually think or can offer expert opinions or answers to thorny questions like “What was the cause of the Civil War?” is just wrong.
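
To make “backprop” a little less mysterious, here is a minimal sketch of the idea: a tiny two-layer network learning the XOR function by repeatedly pushing its error backward through the chain rule and nudging its weights downhill. Everything in it (the toy data, the layer sizes, the learning rate) is my own illustrative choice, not anything taken from the 1986 paper or from Somers’ article.

    # A minimal, illustrative sketch of backpropagation on a toy problem (XOR).
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset: XOR inputs and targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer of 4 units, randomly initialized.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(20000):
        # Forward pass: compute the network's prediction.
        h = sigmoid(X @ W1 + b1)      # hidden activations
        y_hat = sigmoid(h @ W2 + b2)  # output

        # Backward pass: push the error back through each layer with the
        # chain rule and accumulate a gradient for every weight. That is backprop.
        d_out = (y_hat - y) * y_hat * (1 - y_hat)        # error through output sigmoid
        grad_W2, grad_b2 = h.T @ d_out, d_out.sum(axis=0)
        d_hidden = (d_out @ W2.T) * h * (1 - h)          # error through hidden sigmoid
        grad_W1, grad_b1 = X.T @ d_hidden, d_hidden.sum(axis=0)

        # Gradient-descent update: nudge every weight downhill.
        W2 -= lr * grad_W2; b2 -= lr * grad_b2
        W1 -= lr * grad_W1; b1 -= lr * grad_b1

    # Predictions should approach [0, 1, 1, 0].
    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))

No secret sauce, no understanding of “exclusive or” – just arithmetic, a gradient, and many repetitions.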

By now, you see my skepticism about AI and machine learning. I doubt we’ll see anything close to the technological advances promised by the optimists, and I doubt we’ll see the end of humanity forecast by the pessimists.[2] In all likelihood, we’ll see continued steady progress in AI and some interesting uses for artificial reality. But AI taking over our thinking for us or gunning us down like Skynet did in Terminator? Highly unlikely. If you want genuine scariness, look at climate science and the forecasts for the next few decades. We face far more likely threats than AI taking over.

So what does all of this have to do with libraries anyway? I’ll explore that question next month.

 

[1] James Somers, “Is AI Riding a One-Trick Pony?”, MIT Technology Review, Vol. 120, No. 6, p. 29. Somers provides a good, non-technical description of backpropagation. He comments, “[O]nce you understand the story of backprop, you’ll start to understand the current moment in AI, and in particular the fact that maybe we’re not actually at the beginning of a revolution. Maybe we’re at the end of one.”

[2] Lest you take my forecasts too seriously: in another time and context, I wrote an article for Small Computers in Libraries in October 1991 called “No Windows Here!” In no uncertain terms, I explained why I would never use Windows on my computer. It would be DOS forever. When I’m inclined to make some pronouncement or another about the future, I remind myself of that article. It keeps me humble. But it doesn’t stop me.