Inside France’s Effort to Shape the Global AI Conversation
France’s AI Action Summit marks a departure from previous gatherings. Some welcome the change. Others worry safety has been sidelined.

Greetings from Paris,
I’m here for the build-up to France’s AI Action Summit—a gathering where heads of state, technology CEOs, and civil society representatives will seek to chart a course for AI’s future. Attendees expected at the Summit include OpenAI boss Sam Altman, Google chief Sundar Pichai, European Commission president Ursula von der Leyen, German Chancellor Olaf Scholz, and U.S. Vice President J.D. Vance.
Set to take place at the presidential Élysée Palace in Paris, it will be the first such gathering since the virtual Seoul AI Summit in May—and the first in-person meeting since November 2023, when world leaders descended on Bletchley Park for the U.K.’s inaugural AI Safety Summit. While other international forums on AI exist—such as the OECD or G7—the summit series is the only one, besides the U.N., where China considers itself a founding member.
This marks my second visit to Paris in recent months, during which I’ve spoken with the Summit’s coordinators, as well as experts on AI, to try to understand how France’s Summit will chart a different path from previous gatherings—and what that means for the future. The result is my latest piece for TIME.
While the U.K.’s Summit centered on mitigating catastrophic risks—such as AI aiding would-be terrorists in creating weapons of mass destruction, or future systems escaping human control—France has rebranded the event as the “AI Action Summit,” shifting the conversation toward a wider gamut of risks—including the disruption of the labor market and the technology’s environmental impact—while also keeping the opportunities front and center.
It comes at a critical moment in AI development, when the CEOs of top companies believe the technology will match human intelligence within a matter of years. If concerns about catastrophic risks are overblown, then shifting focus to immediate challenges could help prevent real harms while fostering innovation and distributing AI’s benefits globally. But if the recent leaps in AI capabilities—and emerging signs of deceptive behavior—are early warnings of more serious risks, then downplaying these concerns could leave us unprepared for crucial challenges ahead.