For more than two years, the technology leaders at the forefront of artificial intelligence development made an unusual demand of lawmakers: They wanted Washington to regulate them.
Technology executives warned lawmakers that generative AI, which can create text and images that mimic human creations, could disrupt national security and elections, and could ultimately eliminate millions of jobs.
AI could go “very wrong,” Sam Altman, OpenAI's chief executive, testified before Congress in May 2023.
But since President Trump's election, tech leaders and their companies have changed their tune, and in some cases reversed course, mounting a powerful push to advance their products and keep bold government demands out of their way.
In recent weeks, Meta, Google, OpenAI and others have called on the Trump administration to block state AI laws and to declare that it is legal to use copyrighted material to train their AI models. They are also seeking to tap federal government data to develop the technology, and are lobbying for easier access to the energy sources needed to meet their computing demands. And they have asked for tax breaks, grants and other incentives.
The shift has been made possible by Trump, who has declared that AI is the country's most valuable weapon for outpacing China in advanced technology.
On his first day in office, Trump signed an executive order rolling back safety-testing rules for AI used by the government. Two days later, he signed another order soliciting industry proposals to create a policy to “maintain and strengthen” America's global AI dominance.
The tech companies “have been really emboldened by the Trump administration, and even issues like safety and responsible AI have completely disappeared from their concerns,” said Laura Caroli, a senior fellow at the Wadhwani AI Center at the Center for Strategic and International Studies, a nonprofit think tank. “The only focus is on establishing U.S. leadership in AI.”
Many AI policy experts worry that such unfettered growth could be accompanied by, among other potential problems, the rapid spread of political and health disinformation; discrimination by automated screeners of loan, job and housing applications; and cyberattacks.
The tech leaders' reversal is stark. In September 2023, more than a dozen of them endorsed AI regulation at a Capitol Hill summit hosted by Sen. Chuck Schumer, Democrat of New York and then the majority leader. At the meeting, Elon Musk warned of the “civilizational risks” posed by AI.
In the aftermath, the Biden administration worked with the largest AI companies to voluntarily test their systems for safety and security weaknesses, and it mandated safety standards for the government. States like California introduced legislation to regulate the technology with safety standards. And publishers, authors and actors sued tech companies over their use of copyrighted material to train AI models.
(The New York Times has sued OpenAI and its partner Microsoft, accusing them of copyright infringement regarding news content related to AI systems. OpenAI and Microsoft have denied those claims.)
After Trump won the election in November, tech companies and their leaders quickly stepped up their lobbying. Google, Meta and Microsoft each donated $1 million to Trump's inauguration. Meta's Mark Zuckerberg threw an inauguration party and has met with Trump several times. Musk, who owns his own AI company, xAI, has spent nearly every day at the president's side.
In turn, Trump has welcomed AI announcements, including a plan by OpenAI, Oracle and SoftBank to invest $100 billion in AI data centers, the huge buildings full of servers that provide computing power.
“We have to lean into the AI future with optimism and hope,” Vice President JD Vance told government officials and tech leaders last week.
At an AI summit in Paris last month, Vance also called for “pro-growth” AI policies, warning world leaders against “excessive regulation” that could strangle transformative industries just as they are taking off.
Now, tech companies and others affected by AI are submitting responses to the president's second AI executive order, “Removing Barriers to American Leadership in Artificial Intelligence,” which mandated the development of a pro-growth AI policy within 180 days. Hundreds have filed comments with the National Science Foundation and the Office of Science and Technology Policy, which will shape that policy.
OpenAI submitted 15 pages of comments, asking the federal government to preempt states from creating AI laws. The San Francisco-based company also invoked DeepSeek, a Chinese chatbot created for a small fraction of the cost of U.S.-developed chatbots.
The AI race is “effectively over” if Chinese developers have unfettered access to data while American companies are left without fair use access, OpenAI said, asking the U.S. government to turn over data to feed into its systems.
Many tech companies also argued that their use of copyrighted works to train AI models is legal and that the administration should side with them. OpenAI, Google and Meta said they believe they have lawful access to copyrighted works such as books, films and art for training.
Meta, which has its own AI model, called Llama, urged the White House to issue an executive order or take other action to make clear that using data to train models qualifies as fair use.
Google, Meta, OpenAI and Microsoft have said that their use of copyrighted data is legal because the information is transformed during the model-training process and is not used to replicate rights holders' intellectual property. Actors, writers, musicians and publishers counter that the tech companies should compensate them for acquiring and using their works.
Some tech companies have also lobbied the Trump administration to support “open source” AI, which essentially makes computer code freely available to be copied, modified and reused.
Meta, which owns Facebook, Instagram and WhatsApp, has pushed hardest for policy recommendations favoring open source, which other AI companies, like Anthropic, have described as increasing vulnerability to security risks. Meta says open-source technology speeds up AI development and can help startups catch up with more established companies.
Andreessen Horowitz, a Silicon Valley venture capital firm with stakes in dozens of AI startups, also called for support of open-source models, which many of its companies rely on to create AI products.
Andreessen Horowitz made perhaps the strongest case against new regulations, arguing that existing laws on safety, consumer protection and civil rights are sufficient.
“Prohibit harm and punish bad actors, but do not require developers to jump through onerous regulatory hoops based on speculative fear,” Andreessen Horowitz said in its comments.
Others continued to warn that AI needs to be regulated. Civil rights groups called for audits of AI systems to ensure they do not discriminate against vulnerable groups in housing and employment decisions.
Artists and publishers said AI companies should disclose their use of copyrighted materials, and they asked the White House to reject the tech industry's argument that the unauthorized use of intellectual property to train models falls within the bounds of copyright law. The Center for AI Policy, a think tank and lobbying group, called for third-party audits of systems for national security vulnerabilities.
“In other industries, if a product harms consumers, the producer is held liable for the flaw, and the same standard should apply to AI,” said K.J. Bagchi, vice president of the Center for Civil Rights and Technology, which submitted one of the requests.