Trump’s AI Action Plan
A massive gift to the AI industry, a green light for algorithmic discrimination, an environmental nightmare, and a few less terrible surprises
The Lab Report
I’m Tyler Elliot Bettilyon (Teb) and this is the Lab Report: News and analysis at the intersection of computing technology and policy.
If you’re new to the Lab Report you can subscribe here.
Should we sell off public lands to monopolistic tech firms and then pay them to build data centers and power plants on that land? Should we stop enforcing the Equal Protection Clause as long as an AI is the one discriminating? Should we enshrine AI as a linchpin of the modern Military Industrial Complex?
The Trump administration seems to think so.
The new AI Action Plan — a document outlining the Trump Administration's policy goals regarding AI — is an incredible gift to AI firms. It calls for massive deregulation of the industry in areas spanning environmental impact, algorithmic discrimination, and general liability. It also calls for large-scale wealth transfer from the government into private AI firms and the selling of public land to accommodate massive new data center and power plant construction (no wind or solar allowed).
There’s plenty of classic MAGA fare mixed in. The NIST AI Risk Management Framework will “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” Additionally, to ensure “these systems be built from the ground up with freedom of speech and expression in mind,” the plan mandates that the federal government only work with LLM developers whose “systems are objective and free from top-down ideological bias.”
This goal got an entire executive order, “Preventing Woke AI in the Federal Government.”
Taken at face value, the order is impossible to satisfy. Here’s what the government must ensure when procuring LLMs:
(a) Truth-seeking. LLMs shall be truthful in responding to user prompts seeking factual information or analysis. LLMs shall prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory.
(b) Ideological Neutrality. LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI. Developers shall not intentionally encode partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by or otherwise readily accessible to the end user.
The core functionality of LLMs makes these two goals quite literally impossible. Just look at the recent fiascos around sycophancy, the persistent issue of ‘hallucinations,’ and the impossibility of auditing the enormous training datasets. No one knows how to build such an LLM.
It’s also clearly in violation of this plan’s admonition to protect free speech: Politically and ideologically biased speech is precisely what the First Amendment was designed to protect. Free speech is fundamentally about elevating adversarial speech, combating ideas with other ideas. This order suppresses specific forms of speech.
If you take the order both seriously and literally, it’s a functional ban of LLM technology in the federal government. But because there is also a section titled “Accelerating AI Adoption in Government,” we can guess what will actually happen: The administration will find LLMs that favor their ideological goals and shower Trump himself with praise.
Still, there were a few pleasant surprises: investment in interpretability research, which the industry has sorely lacked; a small section on encouraging open source development, which would give academic researchers and less capitalized firms better access to models; plus a push for onshoring robotics and computer chip manufacturing, which might be hard to achieve but would support US strategic goals and could have positive impacts on the labor market.
The plan is structured into three “pillars,” but I had six main takeaways.
1) Framing China as a geopolitical enemy is a key justification for much of the plan.
Down to the subtitle of the document, “Winning the Race,” this plan is brimming with fear that China may overtake the US in AI development. It reminds me of the scare tactics that buttressed The Patriot Act, and the goals are again similar: Americans are being told we need to give up certain rights and protections in order to maintain our lead.
Whether it’s the revival of Biden-era export controls on advanced chips, publishing “evaluations of frontier models from the People’s Republic of China for alignment with Chinese Communist Party talking points and censorship,” or “Counter[ing] Chinese influence in international governance bodies,” the spectre of a race that can be lost to a geopolitical enemy underpins much of the plan.
It is because we “need” to “win the race” that the administration believes…
2) It’s time to go full-speed ahead and throw caution to the wind.
The plan includes major carve-outs or exceptions to existing regulations for AI technologies, especially environmental laws that might affect permitting for new data centers, such as the Clean Air and Clean Water Acts. It also calls for several federal agencies to actively evaluate their existing policies, rules, memoranda, lawsuits, and investigations and remove anything that “unduly burdens AI firms.”
It also includes threats to withhold certain funding from individual states if their regulatory climate isn’t favorable to AI firms. I suspect this was added because the proposed federal moratorium on state AI regulation failed in Congress.
3) Expect a major transfer of wealth from public coffers to private AI firms.
Trump doesn’t just want to clear the path for AI firms, he wants to pave it.
From selling public lands to large-scale grants for building new AI infrastructure, the plan directs the government to bend over backwards for tech firms. Two of the executive orders target this wealth transfer. The first sets out to slash permitting regulations, sell public land, and create grants and loans for tech firms building data centers. The second aims to mobilize the federal government to help American AI firms export their technology by using federal financing tools and acting as a sort of external sales department.
There are also a few sections about bolstering the military industrial complex. From warfighting capabilities to defensive initiatives securing data centers and AI research, the administration wants to spend big on military and geopolitically focused AI. Firms like Palantir and Anduril are especially poised to seize the opportunity.
4) Algorithmic discrimination will not be addressed by this administration.
Here are the first two bullets in the section titled “Ensure that Frontier AI Protects Free Speech and American Values.”
* Led by the Department of Commerce (DOC) through the National Institute of Standards and Technology (NIST), revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.
* Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.
The irony of demanding models be free from “top-down ideological bias” while also banning the use of the words “misinformation, diversity, equity, inclusion, and climate change,” in NIST’s Risk Management Framework is apparently lost on the document’s authors.
Reading between the lines, the administration is actively preventing NIST from pursuing solutions to algorithmic discrimination either themselves or via grant programs. That’s a real shame because it’s one of the most consistently proven issues with machine learning systems.
5) In the research world, the plan wants to center open source, interpretability, and better evaluations.
I’ll be honest, this was a pleasant surprise. The plan calls for the government to build better systems and tools to evaluate model performance. AI evaluation is quite terrible right now, so I’d love to see more of this.
The plan also calls for more interpretability research and has a whole section dedicated to expanding and fostering open source development and open weight models. I think these two go hand in hand, and better open models would give a wide array of researchers better opportunities to study and evaluate the models. It’s crucial that academia and others outside of the huge tech firms be able to study these models, and open source is a meaningful way to make that happen.
6) This administration is known for frequently lying and changing its mind.
Just as it was a mistake to believe them when they said they would release the Epstein files, taking this plan at face value is probably a mistake. They will do some of these things, but they probably don’t intend to do all of them. And regardless of current intent, they will change their minds about some of them in the near future. I’ll leave it as an exercise for the reader to prognosticate about which is which. But here's a hint: look at the three executive orders.
Those three things happened. For now, everything else is just talk.