Attracting more than a million users within days of its release in November, ChatGPT—the new artificial-intelligence application from the U.S. research lab OpenAI—has provoked intensive global media coverage in the month and a half since. Responses to the technology—which can generate sophisticated written replies to complex questions; produce essays, poems, and jokes; and even imitate the prose styles of famous authors—have ranged from bemused astonishment to real anxiety: The New York Times tech columnist Kevin Roose wrote that “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public”—while Elon Musk tweeted, “ChatGPT is scary good. We are not far from dangerously strong AI.” Critics worry about the technology’s potential to replace human work, enable student cheating, and disrupt the world in potentially countless other ways. What to make of it?

Sarah Myers West is the managing director of the AI Now Institute, which studies the social implications of artificial intelligence. As West explains, ChatGPT is built on more than 60 years of chatbot innovation, starting with the ELIZA program, created at the Massachusetts Institute of Technology in the mid-1960s, and including the popular SmarterChild bot on AOL Instant Messenger in the 2000s. West expects this latest version to affect certain industries significantly—but, she says, as uncanny as ChatGPT’s emulation of human writing is, it’s still capable of only a very small subset of functions previously requiring human intelligence. This isn’t to say it poses no meaningful problems, but to West, its future—and the future of artificial intelligence altogether—will be shaped less by the technology itself and more by what humanity does to determine its use.

Graham Vyse: How do you see ChatGPT in the history of artificial intelligence as a whole?

Sarah Myers West: Artificial intelligence has been around as a field for almost 80 years now, but its meaning has changed a lot along the way.

In its early days, AI focused on what we’d call “expert systems”—technologies that would replicate human intelligence in certain ways. Now, what we refer to as AI is very different—largely, an array of data-centric technologies that, in order to work effectively, rely on a couple of things that didn’t really exist before.
