The "Wild West" of AI? Why data governance matters more than ever


While the federal government has opted to restrict state mandates and regulations on the usage of AI tools, Iron Mountain Government Solutions (IMGS) emphasizes that the importance of proper information governance and your role as the gatekeepers of this information remain unchanged.

Melissa Carson
February 2, 2026

The recent executive order has curbed the ability of individual states to regulate AI tools, creating an environment ripe for rapid innovation and deployment. But make no mistake: deregulation of the tools does not equal deregulation of the data.

As public sector leaders, we are currently navigating a regulatory paradox: A federal push for innovation and deregulation, but no clear roadmap for safety or responsibility. This places the burden of risk squarely on the shoulders of the information owners.

Data privacy regulations such as HIPAA and the FCRA, along with strict PII protections, remain in full force. If an AI tool ingests citizen data and mishandles it, the lack of an AI-specific law will not shield your agency from the consequences of a data breach or privacy violation.

Your mandate: The gatekeeper’s role

Under the 2025 AI Action Plan, high-quality data is treated as a national strategic asset. As the gatekeepers, records and data professionals will need to deploy a responsible information governance model that ensures high-quality, compliant data is available for AI tools.

Here are four immediate steps, grounded in the strategies IMGS provides to federal agencies, that you can take to ensure your agency's AI adoption is secure, compliant, and trustworthy:

1. Enforce data minimization

AI models are hungry for data, but you must put them on a diet. Only collect and ingest the data strictly necessary for the specific task. If the model doesn’t need PII to function, do not feed it PII. Minimization reduces your attack surface and limits the potential fallout of a breach.
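In practice, minimization can be enforced with an explicit allowlist at the point of ingestion. The sketch below is a minimal illustration of that idea; the field names and record shape are hypothetical, not from any specific agency schema.

```python
# Minimal data-minimization sketch: before any record reaches an AI
# pipeline, keep only an explicit allowlist of fields. Anything not on
# the list (including PII) is dropped at the door.

ALLOWED_FIELDS = {"case_type", "region", "submitted_at"}  # no PII on the list

def minimize(record: dict) -> dict:
    """Return a copy of the record containing only allowlisted fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "case_type": "benefits_inquiry",
    "region": "northeast",
    "submitted_at": "2026-01-15",
    "ssn": "123-45-6789",         # PII: must never reach the model
    "full_name": "Jane Citizen",  # PII: must never reach the model
}

clean = minimize(raw)
# clean contains only the three allowlisted fields; the PII never
# enters the model's context, shrinking the attack surface.
```

An allowlist (rather than a blocklist) is the safer default: a new PII field added upstream is excluded automatically instead of leaking through until someone remembers to block it.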

2. Implement "need-to-keep" retention policies

Data retention is often treated as a passive archive, but in the age of AI, it must be active. Define clear retention periods for training data and user interactions. If data is no longer serving a verified purpose, it should be defensibly destroyed. Hoarding data "just in case" creates unnecessary liability.
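An active "need-to-keep" policy can be as simple as a scheduled job that flags anything past its retention window for defensible destruction. This is a minimal sketch; the 365-day window and record layout are illustrative assumptions, not a recommended schedule.

```python
from datetime import date, timedelta

RETENTION = timedelta(days=365)  # illustrative window; set per records schedule

def is_expired(created: date, today: date,
               retention: timedelta = RETENTION) -> bool:
    """True if a record has outlived its retention period."""
    return today - created > retention

# Hypothetical training-data inventory entries
records = [
    {"id": 1, "created": date(2024, 6, 1)},
    {"id": 2, "created": date(2025, 12, 1)},
]

today = date(2026, 2, 2)
to_destroy = [r["id"] for r in records if is_expired(r["created"], today)]
# Expired records are queued for defensible destruction rather than
# lingering "just in case" as liability.
```

The key point is that expiry is computed, not remembered: retention becomes a property of the data itself rather than an occasional cleanup project.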

3. Demand privacy-preserving techniques

Before approving a new AI tool, ensure your technical teams examine potential partners' privacy architecture. Push for anonymization (stripping PII while keeping data utility) and differential privacy (adding statistical "noise" to data so individual identities cannot be reverse-engineered). You must ensure that the insights are accurate without compromising the individuals behind the data.
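To make the "noise" idea concrete, here is a minimal sketch of the textbook Laplace mechanism for a counting query (sensitivity 1), using only the standard library. It is an illustration of the principle, not a vetted privacy implementation; real deployments should use an audited differential-privacy library.

```python
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Return a count with Laplace noise of scale 1/epsilon added.

    Smaller epsilon = more noise = stronger privacy. The noisy answer
    is useful in aggregate, but no single individual's presence can be
    confidently inferred from it.
    """
    scale = 1.0 / epsilon
    # A Laplace sample is the difference of two i.i.d. exponentials
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# e.g. "How many citizens in this dataset filed a claim?" -> a noisy
# answer close to the truth, instead of the exact (re-identifiable) count
noisy = dp_count(100, epsilon=1.0)
```

Note the trade-off the article alludes to: epsilon tunes accuracy against privacy, and choosing it is a governance decision, not just an engineering one.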

4. Mandate "human-in-the-loop" oversight

Algorithms are powerful, but they lack context and conscience. True information governance requires more than just securing the data; it requires validating the decisions derived from it. Ensure that no high-stakes decision, especially one affecting citizen services, benefits, or legal standing, is made solely by an automated system. AI should function as a decision-support tool, not a decision-maker. By maintaining a human element in the review process, you ensure accountability, catch potential "hallucinations," and mitigate the risk of algorithmic bias.
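Structurally, human-in-the-loop oversight means the system can never emit a final answer for a high-stakes category; it can only emit a recommendation plus a review flag. A minimal sketch, with hypothetical decision categories:

```python
# Illustrative set of decision types that must never be fully automated
HIGH_STAKES = {"benefits_denial", "license_revocation"}

def route(decision_type: str, model_recommendation: str) -> dict:
    """Treat AI output as a recommendation; gate high-stakes decisions.

    For high-stakes categories the 'final' field stays None until a
    human reviewer signs off, so automation alone cannot close the case.
    """
    needs_review = decision_type in HIGH_STAKES
    return {
        "recommendation": model_recommendation,
        "final": None if needs_review else model_recommendation,
        "requires_human_review": needs_review,
    }

denial = route("benefits_denial", "deny")    # held for human review
update = route("address_update", "approve")  # low stakes, may auto-complete
```

Keeping the gate in the data model (a `final` field that only a reviewer can populate) makes the oversight auditable, rather than relying on process discipline alone.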

The bottom line

The legal landscape may be shifting, but the ethical imperative remains constant. By prioritizing robust information governance with a partner like IMGS, you do more than avoid penalties: you build the public trust necessary to unlock the true transformative potential of AI.