Notes from the Executive Director – April 2018

Last month, I argued that predictions and fears that artificial intelligence (AI) will develop anything resembling real human intelligence are wildly over the top. AI today, and into the foreseeable future, is highly unlikely “to understand the physical world well enough to make predictions about basic aspects of it – to observe one thing and then use background knowledge to figure out what other things must also be true. Another way of saying that machines don’t have common sense.”[1]

What we know today as AI is very far from intelligent, which makes most of the forecasts in articles like “Libraries in the Age of Artificial Intelligence” by Ben Johnson, which appeared in the January/February issue of Computers in Libraries, off-base. But Johnson is not alone. Many very smart people lose their heads when they write about AI. They worry about self-aware AI, an alien intelligence that could turn on humanity. Think Skynet in the Terminator movies or the Cylons in Battlestar Galactica. My view of all of that is close to that of economist Andrew McAfee, who said in a 2013 TED talk, “There is no shortage of dystopian visions about what happens when our machines become self-aware, and they decide to rise up and coordinate attacks against us. I’m going to start worrying about those the day my computer becomes aware of my printer.”

There is a second thesis about AI that contains more than a kernel of truth. In this view, AI will obviate the need for much of the work that humans do today, leading to massive unemployment. One highly cited report from the Oxford Martin Programme on the Impacts of Future Technology estimated that 47 percent of US jobs were in danger of disappearing through advanced uses of computers and technology. That’s a scary proposition. It, too, may be overblown, but the kernel of truth is that automation will have a huge impact on libraries. Indeed, automation has already dramatically affected libraries.

Libraries today are far different than they were just a few decades ago. The invention of machine-readable cataloging changed the game for technical services and led to decreases in the number of catalogers. At the Library of Congress, there were almost 50 percent fewer catalogers on staff in 2007 than in the early 1990s. In addition to MARC, the Internet also contributed to the decline.

In the second decade of the 21st century, we may be seeing a similar impact on reference and public services, as AI techniques make it easier to do the things long associated with those departments. Readers’ advisory was a staple of public libraries for decades; in the age of Amazon and Goodreads, it is very easy to find good recommendations for the next book. If I want a science fiction author similar to Charles Stross, I have many online venues for finding one without asking a librarian. What we used to call “ready reference” is long gone. Fielding calls about factual information, such as “Can you tell me the capital of Rwanda?”, has been replaced by Google, Siri, and Cortana.

What’s left is providing the help that only humans can do. At least for now. The long-term impact on the number of public services librarians is not yet clear, but I’m not sanguine about employment opportunities for these kinds of jobs.

All of this notwithstanding, I don’t believe libraries or librarians are endangered. Johnson put his finger on it when he wrote, “Libraries are a social space. We espouse the virtues of community and creativity.” Amen to that. And for us to thrive as digital automation pushes ever deeper into our lives, we must seize the opportunity to become more engaged, more embedded, more enmeshed in our communities. The future does not belong to those who sit behind reference desks or in the director’s office, waiting for the world to come to them. It is up to us to seize the moment now, to create a deep dialogue to determine what our communities want, and to find ways to give it to them. Creativity and community. They go hand in hand.

[1] Bergstein, Brian. “The Great AI Paradox.” MIT Technology Review, vol. 121, no. 1, p. 76.