Attracting more than a million users within days of its release in November, ChatGPT—the new artificial-intelligence application from the U.S. research lab OpenAI—has provoked intensive global media coverage in the month and a half since. Responses to the technology—which can generate sophisticated written replies to complex questions; produce essays, poems, and jokes; and even imitate the prose styles of famous authors—have ranged from bemused astonishment to real anxiety: The New York Times tech columnist Kevin Roose wrote that “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public”—while Elon Musk tweeted, “ChatGPT is scary good. We are not far from dangerously strong AI.” Critics worry about the technology’s potential to replace human work, enable student cheating, and disrupt the world in potentially countless other ways. What to make of it?
Sarah Myers West is the managing director of the AI Now Institute, which studies the social implications of artificial intelligence. As West explains, ChatGPT is built on nearly 60 years of chatbot innovation, starting with the ELIZA program, created at the Massachusetts Institute of Technology in the mid-1960s, and including the popular SmarterChild bot on AOL Instant Messenger in the 2000s. West expects this latest version to affect certain industries significantly—but, she says, as uncanny as ChatGPT’s emulation of human writing is, it’s still capable of only a very small subset of functions previously requiring human intelligence. This isn’t to say it poses no meaningful problems, but to West, its future—and the future of artificial intelligence altogether—will be shaped less by the technology itself and more by what humanity does to determine its use.
Graham Vyse: How do you see ChatGPT in the history of artificial intelligence as a whole?
Sarah Myers West: Artificial intelligence has been around as a field for almost 70 years now, but its meaning has changed a lot along the way.
In its early days, AI focused on what we’d call “expert systems”—technologies that would replicate human intelligence in certain ways. Now, what we refer to as AI is very different—largely, an array of data-centric technologies that, in order to work effectively, rely on a couple of things that didn’t really exist before.
The first is massive amounts of data.
This was enabled by the internet boom of the 2010s, when tech companies developed the capabilities to leverage the production of data on a huge scale—that is, to develop systems that could look for patterns in extremely large data sets. In this sense, when we talk about AI today, it’s essentially what people were talking about as big data starting in the 1990s.
The second thing these systems rely on is massive amounts of computational power to process all this data.

Overall, what this means is that AI, as a field, has become increasingly dependent on the resources of a small number of big tech companies that have built or acquired these two things: huge data sets and huge computational power.
What it doesn’t really mean, though, is any close replication of human intelligence. So although it’s very effective at doing a small subset of tasks, what we refer to as AI is very different from what humans are able to do.
Vyse: What’s new about ChatGPT in this history, then?
West: ChatGPT is more effective than anything before it at producing text responses that closely mimic human writing. But even there, I’d understand it as the newest version of older technology.
If you go back to 1966, for example, Joseph Weizenbaum developed a natural-language processing system called ELIZA, an early chatbot. ELIZA was designed specifically to mimic what it was like to be in a therapy session. So if you were to say, I had a really hard day at work, ELIZA would know to say, Tell me more about that.
When people interacted with ELIZA in 1966, they had much the same reaction as people are having to ChatGPT today—that it was this remarkable technology. In fact, Weizenbaum’s secretary—because her conversations with ELIZA felt so intimate—would ask him to close the door when she was speaking to it, even though she knew exactly how the system worked.
Another example that’s a little more recent: I remember, back in the 2000s, playing around with a chatbot called SmarterChild on AOL Instant Messenger that would offer essentially the same kind of interaction—you’d talk to the system about your day, and it would feel like a very intimate experience.
ChatGPT builds on those precursors, but it does so using huge amounts of data, largely culled from the internet.
Vyse: And how is it doing that?
West: It’s an example of something called a large language model. This is a broad category of systems that look at patterns in text and then try to reflect back similar patterns in their output.
So when you tell ChatGPT, for example, Tell me about math homework in the form of a haiku, it draws on this huge data set of text—scraped from websites like Wikipedia or Reddit—and looks for patterns.
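To make the pattern idea concrete, here is a minimal sketch of the same principle at toy scale: a bigram model that counts which words follow which in its training text, then generates new text by sampling from those counts. This illustrates the principle only; ChatGPT uses a neural network with billions of parameters, not a frequency table.

```python
# Toy sketch of the pattern-matching idea behind large language models:
# learn which words tend to follow which, then generate text by sampling
# from those learned patterns. Illustrative only; real systems use neural
# networks trained on vastly more data.
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict[str, list[str]]:
    """Record, for each word, the words that follow it in the training text."""
    patterns: dict[str, list[str]] = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        patterns[current].append(nxt)
    return patterns

def generate(patterns: dict[str, list[str]], start: str, length: int = 8) -> str:
    """Emit text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in patterns:
            break
        word = random.choice(patterns[word])
        output.append(word)
    return " ".join(output)

patterns = train_bigrams(
    "the model looks for patterns and the model reflects those patterns back"
)
print(generate(patterns, "the"))
```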

When ChatGPT was trained, the process involved human coders who would assess its output and say, This text that ChatGPT just turned out looks more like human text to me; this one looks more fake; this one looks glitchy. And so on. It took thousands of hours of human training to build a level of sophistication into the technology such that it now knows pretty well what patterns we like and what patterns we dislike.
It still relies fundamentally on huge amounts of data, but it took a lot of human training to get its pattern recognition to where it is.
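As a rough illustration of that assessment step, here is a minimal sketch of collecting a single human preference judgment. The prompt and candidate texts are hypothetical, hard-coded stand-ins for model output; real systems feed pairs like these into a learned reward model rather than using them directly.

```python
# Simplified sketch of the human-feedback step: a rater compares two
# candidate outputs, and the resulting (preferred, rejected) pair becomes
# training data for judging what "looks human." The candidates below are
# hard-coded stand-ins for model output.
def collect_preference(prompt: str, candidate_a: str, candidate_b: str) -> tuple[str, str]:
    """Ask a human rater which candidate reads more like human text."""
    print(f"Prompt: {prompt}")
    print(f"A: {candidate_a}")
    print(f"B: {candidate_b}")
    choice = input("Which looks more human? (a/b): ").strip().lower()
    return (candidate_a, candidate_b) if choice == "a" else (candidate_b, candidate_a)

preferred, rejected = collect_preference(
    "Tell me about math homework in the form of a haiku",
    "Numbers on the page / fractions multiply like rain / pencil finds the sum",
    "math homework haiku haiku math the homework math",  # the "glitchy" kind of output
)
# At scale, thousands of pairs like this teach the system which patterns we like.
```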
Vyse: Where would you see a technology like ChatGPT potentially replacing human beings—or having any sustainable competitive advantage over them?
West: It’s important to remember that ChatGPT can’t know anything that isn’t already put out there on the internet by humans—and it can’t parse meaning in any deep way.
So it’s especially effective at churning out shallow responses in particular forms. For example, in internet searches, it can help improve the quality of the predictive text that guesses what we’re looking for. Or in online customer service, where a lot of chatbots are already in use, it could improve the quality of the user experience.
But I don’t see ChatGPT being used effectively in tasks that rely on deeper levels of intelligence.
I do think, though, that we’re likely to see it accelerate a general, preexisting trend that’s been devaluing certain categories of work—or making their human elements more mundane. For instance, where someone used to write text themselves, work that required real creativity and ingenuity, they might now be asked just to edit AI-generated text and make it incrementally more meaningful.
Vyse: Could you give a few examples of what that might look like?
West: If we look at the customer-service context, you might see an increasing role for ChatGPT generating predictive text, and then a human would fact-check and approve that text before it goes out.
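In code, that human-in-the-loop workflow might look something like the sketch below, where draft_reply() is a hypothetical stand-in for a call to a text-generation service and the review step is the human gate West describes.

```python
# Minimal sketch of the workflow described above: an AI system drafts a
# customer-service reply, and a human agent must approve or correct it
# before it reaches the customer. draft_reply() is a hypothetical
# stand-in for a call to a text-generation service.
def draft_reply(ticket: str) -> str:
    return f"Thanks for reaching out about: {ticket}. We're looking into it now."

def human_review(draft: str) -> str:
    """Show the AI draft to a human agent for fact-checking and approval."""
    print(f"AI draft: {draft}")
    decision = input("Approve as-is? (y/n): ").strip().lower()
    return draft if decision == "y" else input("Enter corrected reply: ")

reply = human_review(draft_reply("my order arrived damaged"))
print(f"Sent to customer: {reply}")
```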
Now, customer service represents a very important function in society; it’s something that should be highly valued. But with the increasing incorporation of AI, we’re apt to see a continuing trend toward marginalizing the human component of that work, demanding greater speed, and paying lower rates.
So, as in this example, I’d see ChatGPT as representing less the wholesale replacement of certain categories of labor and more the progressive devaluation of the human role in them.

Vyse: In the areas where ChatGPT may have a kind of competitive advantage over human labor, what implications do you see the technology having for people as citizens, consumers, or users generally?
West: At the moment, our experience with ChatGPT is through an API—an application programming interface—that’s been opened up for everyone to use. So a lot of people are having fun with it. This is an extraordinarily effective advertisement for OpenAI, and it enables the company to garner a lot of insights from the ways people are playing around with ChatGPT. But it’s also an extraordinarily expensive experiment. Sam Altman, the CEO of OpenAI, has called the daily computational cost of operating it “eye-watering.”
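For readers curious what programmatic access of this kind looked like, here is a minimal sketch assuming the pre-1.0 openai Python client of the period; the model name and parameters are illustrative.

```python
# Minimal sketch of programmatic access to an OpenAI text model of this
# period, using the pre-1.0 openai Python client. The model name and
# parameters are illustrative; an API key is read from the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3.5-era completion model
    prompt="Tell me about math homework in the form of a haiku.",
    max_tokens=60,
)
print(response["choices"][0]["text"].strip())
```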
Ultimately, I don’t think we’re really the end users of this technology. I think the real concern about how it affects us, as citizens or consumers, is that it’s a technology that will end up being controlled and utilized for profit by one or two, or maybe a handful of, companies that are able to afford the cost of its use. Which points to the importance of bringing forward broader social and political conversations—including about frameworks for regulation that might guide the way the technology is used and mitigate some of its potential harms.
There’ve already been extensive concerns about how ChatGPT can be used for cheating in education systems, manipulation in the media, new ways to con people into giving over their information, or fraud generally—all of which are legitimate, and all of which OpenAI has acknowledged. But there’s also a deeper concern about the technology’s use by a very few companies, which is that they’ll develop its use without any clear avenue for broader public input as to how it will affect people and the public interest.
Vyse: Which very few companies are these likely to be—and how specifically do you see their likely deployment of the technology having widespread effects?
West: As to which companies, we already know the first name: Microsoft made what was initially described as a $1 billion investment in OpenAI to produce this technology. The latest reporting shows it was actually a $3 billion investment. And they’re now considering an $11 billion stake in order to use and license it.
So at the moment, it appears Microsoft is the focus for ChatGPT. But other companies, certainly Google, are training similar systems. And there aren’t many more companies in existence with the capacity to develop and use these kinds of systems at all—for now.

As to how widespread the effects will be, I honestly don’t know—and I don’t think that even the companies behind the technology necessarily know yet. They certainly haven’t discussed publicly how they intend to make it profitable. Which I think is the critical issue at this juncture: There just isn’t very much transparency from those who’re developing this technology, and those who’ll be deploying it, about where it’s heading.
Vyse: What do you see as the next frontier for artificial intelligence, beyond this technology?
West: Forecasting is perilous in any field, not least AI. What I would say, though—and I think this needs to be a constant refrain in this space—is that the future of AI is still something we can very much determine. There’s a powerful strain of discourse about inevitability in AI—that there’s this train running, whether we like it or not, so we’d better all get on it. But what we’ve actually seen, what history’s actually borne out, is that there’s a lot of scope to shape what that future looks like.
In the United States, for example, we’ve seen this with facial-recognition technology and the passage of bans on certain uses in cities across the country. Looking across the Atlantic, there’s a lot of momentum behind AI regulation in the European Union. So the trajectory of where artificial intelligence is going is still taking shape—and will continue to take shape—not just through innovation but through the kinds of public and regulatory conversations that are happening around it.