Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models



The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI). The instructions eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” from the skills expected of members and introduce a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”

The information comes as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases are hugely important because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.

The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deepfakes. It also adds emphasis on putting America first, asking one working group to develop testing tools “to expand America’s global AI position.”

“The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself,” says one researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of reprisal.

The researcher believes that ignoring these issues could harm everyday users by allowing algorithms that discriminate based on income or other demographics to go unchecked. “Unless you're a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly,” the researcher claims.

“It’s wild,” says another researcher who has worked with the AI Safety Institute in the past. “What does it even mean for humans to flourish?”

Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled “racist” and “woke.” He often cites an incident in which one of Google’s models debated whether it would be wrong to misgender someone even if doing so would prevent a nuclear apocalypse, a highly unlikely scenario. Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher who advises xAI recently developed a novel technique that could alter the political leanings of large language models, as WIRED reported.

A growing body of research shows that political bias in AI models can impact both liberals and conservatives. For example, a study of Twitter’s recommendation algorithm published in 2021 showed that users were more likely to be shown right-leaning perspectives on the platform.

Since January, Musk’s so-called Department of Government Efficiency (DOGE) has been sweeping through the US government, effectively firing civil servants, pausing spending, and creating an environment thought to be hostile to those who might oppose the Trump administration’s aims. Some government departments, such as the Department of Education, have archived and deleted documents that mention DEI. DOGE has also targeted NIST, the parent organization of AISI, in recent weeks. Dozens of employees have been fired.
