AI Transparency 2.0: Why Washington Must Go Beyond Deepfakes to Decision Provenance
HB 1170 is a strong start—but the Digital Government Summit made clear that Washington needs transparency not just for synthetic media, but for the AI shaping public decisions.
Washington’s HB 1170 is an important step forward. Like California’s AI Transparency Act, it focuses on labeling AI-generated and AI-altered content, embedding latent disclosures, and providing public detection tools. These measures matter. As Tom Kemp has documented, states that anchor AI policy in transparency, traceability, and consumer protection gain bipartisan traction and avoid unworkable or overbroad AI legislation.
But as I argued recently in “AI is Cheap. Trust is Expensive.”, transparency for content is only half of the equation. What residents need is trust in the systems that inform decisions about them. And today’s Washington Digital Government Summit made that clearer than ever.
AI in government is no longer primarily about generating images or text. It is augmenting decisions, routing cases, prioritizing inspections, assisting in contracting, and shaping how residents interact with the state. Deepfakes aren’t the only risk. Opaque intelligence is, too.
Washington now needs AI Transparency 2.0: a model that provides provenance not just for synthetic media, but for AI-assisted decisions.
What HB 1170 Gets Right
HB 1170 focuses on media transparency:
Clear labeling of AI-generated or AI-altered content
Latent and manifest disclosures
Publicly accessible detection tools with APIs
Limits on retention of user-submitted content
Alignment with C2PA-style provenance principles and NIST AI RMF concepts of traceability
This is the right foundation. Synthetic media harms are real. Election security, misinformation prevention, and public trust all benefit from strong provenance requirements.
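To make the detection-tool requirement concrete, here is a minimal sketch of what querying such a public detection API might look like. The endpoint URL, request shape, and response fields are all assumptions for illustration; HB 1170 specifies the requirement, not the interface.

```python
# Sketch of querying a hypothetical public AI-detection API of the kind
# HB 1170 contemplates. The endpoint and response fields are assumptions,
# not part of the bill or any existing state service.
import requests

DETECTION_API = "https://example.wa.gov/ai-detection/v1/check"  # hypothetical

def check_for_latent_disclosure(media_path: str) -> bool:
    """Ask the detection service whether a file carries an embedded
    AI-generation disclosure (e.g., a C2PA-style manifest)."""
    with open(media_path, "rb") as f:
        resp = requests.post(DETECTION_API, files={"media": f}, timeout=30)
    resp.raise_for_status()
    result = resp.json()
    # Assumed response shape: {"ai_generated": bool, "manifest_found": bool}
    return result.get("manifest_found", False)
```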
But the bill only addresses outputs that look like media. It does not address AI systems used for decision support, which is where the public sector is already moving.
Today’s Summit demonstrated that gap clearly.
What Washington’s Leaders Said Today
At the Washington Digital Government Summit, three themes emerged across the closing panel on “AI Governance and Digital Equity in Washington Government.”
Bill Kehoe, State CIO
“AI innovation must be risk-averse and transparent.”
Kehoe emphasized strong data foundations, privacy, security, and clear disclosures. He highlighted the modern wa.gov resident portal as an example of how structured data and personalization can enhance services, while noting that transparency and opt-outs are mandatory for public trust.
Jake Hammock, CISO, City of Seattle
“Seattle is adopting human-centered AI with humans in the loop — not displacement, but augmentation.”
Seattle is hiring a City AI Officer and implementing its Responsible AI plan across public safety, permitting, and customer-service operations. Hammock stressed equity, accessibility, language translation, and correct labeling of AI outputs.
Stephen Hurd, Acting CIO, King County
“Generative AI for decision-making remains tricky — human oversight is essential.”
Hurd emphasized productivity and capacity gains, but made it clear: any decision that affects residents must retain human review. King County’s upcoming AI policy is grounded in oversight, transparency, and digital equity.
Across all three leaders, one message was consistent:
Government needs innovation, but it must remain cautious, transparent, and accountable.
That requires more than content labeling.
It requires decision provenance.
The Gap in HB 1170: Transparency for Media but Not Decisions
HB 1170 does not apply to:
Case prioritization
Eligibility determination
Contract routing
Public safety triage
Fraud detection
Resource allocation
Workforce augmentation
Constituent-service recommendations
None of these produce synthetic media.
All of them influence residents’ lives.
As the National Conference of State Legislatures puts it, governments nationwide are expanding their use of AI to “improve efficiency, decision-making, and the delivery of government services.”[1] Today’s Summit speakers described the same reality in Washington.
We need transparency for more than images and content.
We need transparency for how AI contributes to decisions.
A Three-Layer Provenance Model for Washington
Drawing on both HB 1170 and the guidance of its technology leaders, Washington can adopt a forward-looking model of AI provenance:
1. Content Provenance
This is the domain of HB 1170:
Labeling, watermarking, and detection of AI-generated or altered media.
2. System Provenance
Which model generated the output?
What version?
What training, tuning, and guardrails?
What data quality and risks were known?
This aligns with Kehoe’s emphasis on data foundations, Hammock’s focus on governance, and Hurd’s insistence on transparency.
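To illustrate, here is a minimal sketch of a system-provenance record that answers the four questions above. The schema and field names are my own assumptions, not anything mandated by HB 1170 or OCIO policy.

```python
# A minimal, illustrative system-provenance record. Field names are
# assumptions chosen to mirror the questions above, not a mandated schema.
from dataclasses import dataclass, field

@dataclass
class SystemProvenance:
    model_name: str                   # which model generated the output
    model_version: str                # what version
    training_summary: str             # what training, tuning, and guardrails
    guardrails: list[str] = field(default_factory=list)
    known_data_risks: list[str] = field(default_factory=list)  # known data-quality risks
```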
3. Decision Provenance
When AI informs or influences a decision, residents deserve to know:
Who or what made the recommendation
What signals, data, or models informed it
How the reasoning chain was constructed
Which human reviewed or approved it
What alternatives were considered
This is where policy needs to evolve.
If content provenance protects residents from deception, decision provenance protects them from misgovernance.
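As a thought experiment, here is what a decision-provenance record might look like, building on the system-provenance sketch above. Every field name and the example values are illustrative assumptions.

```python
# An illustrative decision-provenance record; each field maps to one of the
# questions residents deserve answered. Requires Python 3.10+.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionProvenance:
    recommended_by: str          # who or what made the recommendation
    system: SystemProvenance     # which model and version produced it
    input_signals: list[str]     # signals, data, or models that informed it
    reasoning_chain: list[str]   # how the reasoning was constructed, step by step
    human_reviewer: str | None   # which human reviewed or approved it
    alternatives_considered: list[str] = field(default_factory=list)
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical example: a permitting-triage recommendation.
record = DecisionProvenance(
    recommended_by="permit-triage-assistant",  # hypothetical system name
    system=SystemProvenance(
        model_name="example-model",
        model_version="2025.1",
        training_summary="fine-tuned on historical permit data (assumed)",
    ),
    input_signals=["application completeness", "site risk score"],
    reasoning_chain=["flagged missing structural drawings", "routed to senior reviewer"],
    human_reviewer="j.doe@example.wa.gov",
    alternatives_considered=["standard queue"],
)
```

The point of the structure is auditability: a reviewer, an auditor, or the affected resident can trace a decision back through the same fields.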
How Washington Can Lead Nationally
To build on HB 1170 and match the future of public-sector AI use, Washington policymakers can consider the following:
1. Clarify provenance in legislative intent
Acknowledge content, system, and decision provenance even if only the first is mandated today.
2. Align with government-grade standards
NIST AI RMF
NIST Data Lifecycle guidance
C2PA for content provenance
OCIO Policy 188 updates
Seattle’s Responsible AI Framework
3. Require disclosures for AI-assisted decisions
Not bans. Not burdens.
Just clear notification, human review, and documented reasoning.
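Building on the decision-provenance sketch above, such a disclosure could be rendered mechanically from the same record. The wording below is illustrative, not statutory.

```python
# Sketch: render a plain-language disclosure from a DecisionProvenance
# record (defined earlier). Wording and fields are illustrative assumptions.
def render_disclosure(p: DecisionProvenance) -> str:
    reviewer = p.human_reviewer or "no human reviewer recorded"
    return (
        f"This decision was informed by an AI system "
        f"({p.system.model_name} v{p.system.model_version}).\n"
        f"Recommendation made by: {p.recommended_by}\n"
        f"Reviewed and approved by: {reviewer}\n"
        f"Documented reasoning steps: {len(p.reasoning_chain)}"
    )
```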
4. Support innovation funding
Kehoe’s call for agile modernization funds is critical for safe experimentation.
5. Encourage public-private collaboration
Seattle and King County are building their own frameworks.
The state can accelerate their progress by providing structure without over-prescription.
Washington can become a national leader by expanding transparency from media to the decisions that shape public outcomes.
Toward Trusted Intelligence in Government
The conversations at the Summit revealed something important:
Public-sector leaders aren’t asking for more automation. They’re asking for clarity, consistency, and confidence in the intelligence they rely on.
They want to know who — or what — they’re listening to.
They want to understand why a recommendation was made.
They want a traceable line from advice to authentic expertise.
They want AI that behaves less like a black box and more like a trusted colleague.
This is where the next generation of AI will evolve: toward systems that don’t just generate content, but embody verifiable expertise, maintain consistent reasoning, and operate with provenance by design. Systems where the source of insight is clear, the chain of custody is intact, and decision-makers can see why a certain answer was produced.
Because ultimately, as I wrote in “AI is Cheap. Trust is Expensive.”, the future of AI isn’t about scaling intelligence; it’s about scaling trustworthy intelligence. And trust doesn’t come from speed or capacity. It comes from knowing what, and who, is behind the answers.
Conclusion
HB 1170 is the right starting point.
Transparency for synthetic media is essential.
But today’s Washington Digital Government Summit made clear that the real frontier is AI-informed decisions, not just AI-generated images.
Washington has an opportunity to lead the nation by expanding transparency to content, systems, and decisions — building a governance model that supports innovation while protecting residents.
AI transparency must move past detecting deepfakes.
It must ensure accountability for the intelligence we rely on.
[1] National Conference of State Legislatures, “Artificial Intelligence in Government: The Federal and State Landscape,” 2024.