With anxiety rising globally over the rapid spread of increasingly sophisticated artificial-intelligence technologies, governments in America and Europe are making their first attempts to set rules for them. In October, the U.S. administration issued a broad set of regulations and guidelines. Soon after, in December, European Union officials agreed on the final version of the EU’s AI Act.

The U.S. initiative—an executive order from the White House—requires companies working on AI models that affect national security to share the results of safety tests with the government. It also directs federal agencies to develop standardized testing for both safety and performance, suggests watermarks for AI-generated content, and recommends independent oversight of the technology’s potentially catastrophic risks.

The EU rules ban AI models that create scoring systems for individual people based on recorded behaviors—systems used pervasively in China—and establish transparency and oversight requirements for AI applications classified as high-risk.

Still, the U.S. regulations are mostly a framework of recommendations and suggestions, without the force of law; and the EU legislation won’t take effect until next year—an eternity in the world of AI development. Meanwhile, Microsoft, Amazon, and Google have invested some $18 billion in AI start-ups, representing about two-thirds of all global venture investment in the new technology. Tech giants and their emerging competitors continue to rush out new AI applications built on the latest advances, and to sustain their intense lobbying of the U.S. and European governments. So what do these new rules mean for AI?

Daron Acemoglu is a professor of economics at MIT and a co-author of the recent book Power and Progress: Our 1,000-Year Struggle Over Technology and Prosperity. In his view, the new regulations will likely have little effect on the industry, because they don’t address the fundamental problem underlying the creation of all AI applications: the massive, global harvesting of data—including nearly all online purchases, social-media posts, photos, personal details, and copyrighted material—without the consent of, or compensation for, the people who created it.

The new U.S. and EU rules, Acemoglu says, name consumer privacy and copyrighted data as important concerns, but they place no limits on AI’s collection of personal data or creative content. Instead, they try to set guardrails against the worst abuses of the technology, while for the most part leaving the industry all the leeway it needs to pursue its own goals. It’s an approach that doesn’t touch the venture-capital model underwriting AI’s development, and so won’t change investors’ incentives to push tech firms to develop AI applications with the potential to become monopolies, or at least dominant in their markets. And that kind of dominance tends to hinge on one thing above all: having more data than your competitors.


Michael Bluhm: Back in October, you made the point that the tech industry’s focus was now on pushing out generative AI—artificial intelligence that produces text, images, and other media—as fast as possible. In the meantime, there’s been some tension between those pushing ahead and those calling for a slowdown. What’s going on in the AI industry now?
