World leaders, technology moguls and assorted hangers-on (including yours truly) are gathering this week for the Artificial Intelligence Action Summit, a conference co-hosted by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, to discuss a wide range of AI-related issues.
Leaders of three American AI companies – Sam Altman of OpenAI, Dario Amodei of Anthropic and Demis Hassabis of Google DeepMind – are here, along with scores of prominent AI leaders, academic researchers and civil society groups. (Vice President JD Vance, who leads the U.S. delegation, is scheduled to appear on Tuesday.)
Between bites of pain au chocolat, here are some of the things I've seen so far:
Europe has regrets about regulations.
The backdrop to the summit is that Europe appears to be having second thoughts, after a decade of passing strict data privacy and social media laws and getting a head start on regulating AI with the European Union's AI Act.
Macron, who this week announced $112.5 billion in private investment in France's AI ecosystem, is particularly wary of falling behind. He has become a cheerleader for Mistral, a French AI start-up, and has opposed “punitive” regulations that could hamstring the country's tech sector.
Tech companies (and their lobbyists) appreciate the assist. But it's probably too late to stop the AI Act, which is expected to take effect in stages over the next year. And some American AI executives tell me they still consider Europe a difficult place to do business, compared with other large markets, such as India, where regulations are relatively loose.
AI doomers are losing ground.
The Paris summit is actually the third in a series of global AI summits. The first two, held in Britain in 2023 and in South Korea last year, focused on the potential risks and harms of advanced AI systems, up to and including human extinction.
But in Paris, the doomers have been sidelined in favor of a sunnier, more optimistic vision of the technology's possibilities. Panelists and speakers were invited to talk about AI's ability to accelerate progress in fields such as medicine and climate science, while dark talk about the risks of an AI takeover was relegated mostly to informal side events. And a leaked draft of the official summit statement, which some attending countries are expected to sign, was panned by AI safety groups for paying too little attention to catastrophic risks.
In part, this reflects a deliberate choice by Mr. Macron and his lieutenants. But it also reflects a broader shift within the AI industry, which seems to have found that policymakers are more easily excited about AI advances when they aren't worried about being killed by them.
DeepSeek has also energized the race.
Like every AI event of the past month, the Paris summit has been abuzz with conversations about DeepSeek, the Chinese start-up that stunned the world with a powerful reasoning model reportedly built for a fraction of the cost of the leading American models.
In addition to lighting a fire under the American AI giants, DeepSeek has given smaller AI outfits in Europe and elsewhere new hope that they haven't been counted out of the race. By building its models with more efficient training techniques and clever engineering hacks, DeepSeek has shown that keeping pace at the AI frontier may require only hundreds of millions of dollars, not billions.
“DeepSeek has shown that all countries can be part of AI, which wasn't obvious before,” said Clément Delangue, the chief executive of Hugging Face, an AI development company.
Now, Mr. Delangue said, “the whole world is catching up.”
Trump's AI policy is a question mark.
The most popular guessing game this week is what the Trump administration's stance on AI will look like.
The new administration has made a few AI-related moves so far, such as rescinding a Biden White House executive order that laid out a testing program for powerful AI models. But it has yet to lay out a full agenda for the technology.
Some people here hope that the president's top advisers, some of whom run AI companies and have expressed fears about powerful AI run amok, will convince Mr. Trump to take a more careful approach.
Others believe that the venture capitalists and so-called AI accelerationists in Mr. Trump's orbit, such as the investor Marc Andreessen, will persuade him to tear up regulations that could hamper the AI industry and slow it down.
Mr. Vance may tip the administration's hand in his summit speech on Tuesday. But no one here expects stability anytime soon. (One AI executive characterized the Trump administration as “high variance,” which is AI-speak for “chaos.”)
No one is planning for short AI timelines.
To me, the biggest surprise of the Paris summit has been that policymakers don't seem to grasp how soon powerful AI systems could arrive, or how disruptive they could be.
Mr. Hassabis of Google DeepMind said at an event at the company's Paris office on Sunday that AGI (AI systems that match or exceed human abilities across many domains) could arrive within five years. (Mr. Amodei of Anthropic and Mr. Altman of OpenAI have predicted it will arrive even sooner, possibly within the next year or two.)
Even if you apply a discount to forecasts made by tech CEOs, the conversations I heard in Paris lacked the urgency you would expect if powerful AI were really around the corner.
Much of the policy work here is pitched at a high level of abstraction, built on fuzzy concepts such as “multi-stakeholder engagement” and “innovation-enabling frameworks.” Few people are seriously thinking through what would happen if AI systems smarter than humans arrived within a few months, or asking the right follow-up questions.
What would it mean for workers if powerful AI agents capable of replacing millions of white-collar jobs were an imminent reality rather than a distant fantasy? What kinds of regulations would be needed in a world where AI systems were capable of recursive self-improvement or autonomous cyberattacks? And if you're an AGI optimist, how should you prepare for rapid improvements in areas such as scientific research and drug discovery?
I don't mean to fault policymakers, who are doing their best to keep up with AI progress. Technology moves at one speed; institutions move at another. And it's possible that industry leaders are far off in their AGI forecasts, or that new obstacles to AI improvement will emerge.
But listening to policymakers this week discuss how to govern the AI systems of several years ago, using regulations that are likely to be obsolete soon after they're written, I was struck by how mismatched these timescales are. At times, it felt like watching policymakers on horseback, struggling to install seat belts on a passing Lamborghini.
I don't know what to do about this. It's not as if industry leaders have been vague or unclear about their intention to build AGI, or their hunch that it will happen soon. But if the Paris summit is any indication, something is getting lost in translation.