Research shows Canada's AI transparency register obscures how government systems actually work
A peer-reviewed analysis of 409 systems in Canada's Federal AI Register finds that most deployments are internal, but the registry prioritizes technical details over the human decision-making and uncertainty that shape their real-world operation.
- Canada released its first Federal AI Register in November 2025 as a transparency measure. Researchers analyzed all 409 systems using the ADMAPS framework, a structured methodology for evaluating government algorithms. The analysis found that 86% of systems are deployed internally for operational efficiency rather than for public-facing services. The Register's design choices, which emphasize technical specifications over sociotechnical context, systematically obscure how human judgment, training, and uncertainty shape these systems in practice.
In November 2025, Canada launched its Federal AI Register as part of a stated commitment to transparency in government technology deployment. Researchers from multiple Canadian universities analyzed the Register's full dataset of 409 systems, using a structured framework called ADMAPS to evaluate how algorithmic decision-making is represented in the public sector.
The analysis uncovered a significant gap between the Register's stated purpose and its actual effect. While government rhetoric emphasized AI as a tool for better governance ('sovereign AI'), the data revealed that the vast majority of deployed systems, 86 percent, serve internal bureaucratic functions such as efficiency optimization rather than direct public service delivery. This concentration of internal use runs counter to the common public perception of government AI as primarily shaping citizen-facing interactions.
The researchers' core finding concerns how the Register documents these systems. Rather than capturing the sociotechnical realities of operation—including human discretion, training protocols, and uncertainty management—the Register privileges technical descriptions that frame AI systems as reliable tools rather than contestable decision-making mechanisms. This framing choice, while appearing neutral, actively constructs a particular ontology of what AI is and how it should be understood by external parties.
The authors characterize this design pattern as 'ontological design'—the way that documentation systems shape which aspects of reality become visible and which remain obscured. By elevating technical over sociotechnical information, the Register systematically hides the human judgment required to operate these systems and the points at which contestation might occur. The result is visibility without accountability: the public can see that systems exist, but cannot meaningfully challenge how they function.