Building a Pragmatic AI Governance Framework: Lessons from the Trenches

Ben Saunders

Having helped a range of regulated organisations navigate the complexities of AI adoption and governance, I've learnt one fundamental truth: there's no such thing as a one-size-fits-all approach to AI governance. Having witnessed both the triumphs and tribulations of AI implementation across these industries, I want to share insights that could save you from the common pitfalls I've encountered along the way.

The Problem with Traditional Governance Approaches

Many organisations, particularly those in regulated industries, initially approach AI governance the same way they've handled other technology risks - with a rigid, uniform framework. However, AI presents unique challenges that demand a more nuanced approach. Whilst the EU AI Act provides a valuable classification framework, simply mapping controls to these risk classifications isn't enough for real-world applications.

Traditional technology risks typically involve known, static systems with predictable behaviours. A database either has proper access controls or it doesn't. A network is either properly secured or it isn't. These binary states make traditional governance frameworks effective - you can create clear checklists and controls that either pass or fail.

AI systems, however, are fundamentally different in several critical ways:

Dynamic Learning and Evolution

Unlike traditional software that behaves according to fixed rules, AI systems continue to learn and evolve based on new data. A model that passed all governance checks today might drift tomorrow as it encounters new patterns in production data. This dynamic nature means governance must be continuous rather than point-in-time.

Context-Dependent Performance

AI systems can perform differently across various contexts, even when the underlying code hasn't changed. A language model might perform perfectly for general customer service queries, for instance, but produce inappropriate responses when dealing with sensitive medical information. Traditional governance frameworks aren't designed to handle this context-dependency.

Complex Bias and Fairness Considerations

While traditional systems might have straightforward fairness requirements (like equal access times for all users), AI systems can inadvertently learn and amplify societal biases in ways that are subtle and complex. This requires sophisticated monitoring and mitigation strategies that go beyond simple compliance checkboxes.

Data Quality Dependencies

Traditional systems typically have clear data quality requirements. For AI, the relationship between data quality and system performance is more complex and often non-linear. Small changes in training data can have outsized impacts on model behaviour, requiring more sophisticated governance approaches.

Control Mismatch

Simply mapping controls to risk classifications can lead either to over-control (stifling innovation) or under-control (missing important risks), because the same AI system might require different controls based on the factors below (a sketch of this context-driven tiering follows the list):

  • The sensitivity of data it processes

  • Its integration with other systems

  • The business process it supports

  • The level of human oversight in its operation
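To make this concrete, here is a minimal sketch of context-driven control tiering. The factor names, weights, and tier thresholds are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass

@dataclass
class DeploymentContext:
    """Hypothetical deployment factors; names and weights are illustrative."""
    data_sensitivity: int        # 1 (public) .. 5 (special-category data)
    externally_integrated: bool  # connected to third-party or customer-facing systems
    business_critical: bool      # supports a critical business process
    human_in_the_loop: bool      # a person reviews outputs before they take effect

def control_tier(ctx: DeploymentContext) -> str:
    """Derive a control tier from the deployment context, not the model alone."""
    score = ctx.data_sensitivity
    score += 2 if ctx.externally_integrated else 0
    score += 2 if ctx.business_critical else 0
    score -= 1 if ctx.human_in_the_loop else 0
    if score >= 7:
        return "enhanced"   # e.g. independent validation, pre-release bias testing
    if score >= 4:
        return "standard"
    return "baseline"

# The same model lands in different tiers depending on where and how it runs:
print(control_tier(DeploymentContext(5, True, True, False)))   # enhanced
print(control_tier(DeploymentContext(2, False, False, True)))  # baseline
```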

Understanding Your AI Landscape

The reality is that AI use cases within your organisation will vary significantly in their risk profile based on several key factors. Through years of working with financial institutions, energy operators, and other regulated entities, I've seen how deployment scenarios dramatically affect risk profiles and control requirements.

Each of these elements demands careful consideration:

- Internal vs external facing applications

- Customer-facing vs colleague-facing systems

- Deployment environment (cloud, private cloud, or on-premises)

- Service delivery model (built internally, SaaS, or managed service)

- Data sensitivity and classification

- Regulatory implications and jurisdictional oversight (e.g. single-region or multi-region deployment)

Therefore, it's best to get the basics of AI governance right and be brilliant at them. Let's explore these fundamentals based on my personal experiences.

Phase 1: Building Your Foundation - A Strategic Approach

Base Control Framework Development

Starting with a simple but effective foundation is crucial. One of my prior customers learned this lesson the hard way, when an overly complex initial framework led to widespread non-compliance and shadow AI development. Begin with a simple scoring matrix using a 1-5 scale - anything more complex tends to create decision paralysis, especially in regulated environments where teams are already juggling multiple compliance frameworks.

The trinity of risk categories - data sensitivity, user impact, and business criticality - isn't arbitrary. These map directly to regulatory requirements while remaining practical for business teams to understand and apply. I have heard of organisations tracking up to twelve different risk categories, creating such complexity that even their risk teams struggled to maintain consistency. Simplifying to these three core categories makes compliance requirements far easier to classify.
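As an illustration, a scoring matrix along these lines takes only a few lines of code. The worst-dimension rule and the band boundaries below are assumptions you would calibrate with your own risk team:

```python
# The three core categories, each scored 1-5. The worst-dimension rule
# and the band boundaries are illustrative assumptions.
RISK_CATEGORIES = ("data_sensitivity", "user_impact", "business_criticality")

def assess(scores: dict[str, int]) -> str:
    for category in RISK_CATEGORIES:
        if not 1 <= scores[category] <= 5:
            raise ValueError(f"{category} must be scored 1-5")
    worst = max(scores.values())  # the weakest dimension drives the overall rating
    return {1: "low", 2: "low", 3: "medium", 4: "high", 5: "high"}[worst]

print(assess({"data_sensitivity": 4, "user_impact": 2,
              "business_criticality": 3}))  # high
```

Taking the worst dimension rather than an average stops a highly sensitive data feed from being diluted by low scores elsewhere.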

Initial Use Case Assessment

Your initial framework needs stress-testing with real scenarios. Select a diverse range that will challenge different aspects of your controls:

For instance, a customer-facing AI chatbot tests your customer protection controls and exercises fair treatment frameworks. An internal process automation tool tests whether your controls preserve efficiency while maintaining appropriate oversight. A sensitive data analytics model exercises your highest control levels and validates privacy protection measures. An externally sourced AI service tests your vendor risk management and procurement controls. Finally, an internally developed model validates your development controls and model validation frameworks.

Framework Testing Process

Documentation during this phase isn't just bureaucracy - it's your evidence base for future refinements and regulatory discussions. One financial services client saved months of work when implementing a new AI system because they could demonstrate to regulators how similar use cases had been successfully governed in the past.

Record everything: your risk assessments, control requirements, implementation challenges, resource needs, and timeline impacts. This creates a valuable knowledge base for future governance decisions.
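A lightweight, append-only log is often enough to start. The sketch below assumes a hypothetical record schema; the point is consistency, not the specific fields:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class GovernanceRecord:
    """One entry in the evidence base; the field names are a hypothetical schema."""
    use_case: str
    risk_rating: str
    controls_applied: list[str]
    implementation_notes: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = GovernanceRecord(
    use_case="customer-facing chatbot",
    risk_rating="high",
    controls_applied=["human review of escalations", "toxicity filtering"],
    implementation_notes="Fair-treatment testing added two weeks to delivery.")

# Append-only JSON Lines keeps a simple, chronologically ordered audit trail.
with open("governance_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```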

Phase 2: Learn and Adapt - Making Governance Work in Practice

Pattern Recognition and Analysis

As you move from theory to practice, patterns will emerge that allow you to refine your framework. Create a detailed catalogue of use cases and their outcomes. This isn't just a library - it's a living catalogue that captures what works, what doesn't, and why.

One insurance company I worked with used this approach to reduce their AI implementation timeline from months to weeks for low-risk use cases, all while maintaining regulatory compliance. They accomplished this by carefully documenting how their initial controls performed and using that data to justify appropriate streamlining.

Framework Refinement

Your governance framework should evolve based on empirical evidence, not theory. Adjust control requirements based on actual risk outcomes while maintaining alignment with regulatory expectations. Develop fast-track processes for well-understood, lower-risk scenarios, but maintain clear criteria for eligibility (a sketch follows). Over time, you may even be able to use AI and agents to govern your adoption of AI systems - though let's take baby steps for now!
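A fast-track gate can be as simple as the sketch below; the thresholds are assumptions you would calibrate against your own catalogue:

```python
def fast_track_eligible(risk_rating: str, similar_approved_cases: int,
                        open_incidents: int) -> bool:
    """Illustrative eligibility gate for the streamlined review path."""
    return (risk_rating == "low"
            and similar_approved_cases >= 3  # precedent exists in the catalogue
            and open_incidents == 0)         # nothing unresolved on similar systems
```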

Stakeholder Engagement

Regular engagement with all stakeholders becomes crucial during this phase. Hold workshops with business units, feedback sessions with compliance teams, and technical reviews with architecture teams. These aren't just meetings - they're opportunities to identify improvements and ensure your framework remains practical and effective.

Phase 3: The Dynamic Chain of Verification - Automation with Control

A dynamic chain of verification is an automated, intelligent system that continuously monitors, validates, and enforces AI governance controls throughout the entire lifecycle of AI systems. Unlike traditional "check-and-forget" governance approaches, it creates an unbroken chain of evidence and verification that adapts to changing conditions in real-time.

But why does this matter exactly?

Let me share a real example that illustrates why this approach is crucial. A major financial institution I worked with had a perfectly compliant AI model for credit decisioning at deployment. However, three months later, the model had subtly drifted due to changes in customer behaviour, creating unintended bias. Under a traditional governance framework, this wouldn't have been caught until the next quarterly review. With a dynamic chain of verification in place, the issue was flagged within days, preventing potential regulatory issues and customer harm.
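To make that concrete, here is a minimal sketch of the kind of automated drift check such a chain might run continuously. It uses the population stability index (PSI), a common drift measure in credit modelling; the thresholds shown are industry rules of thumb, not regulatory values:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a training-time baseline and live production scores."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty buckets so the log term stays finite
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

baseline = np.random.default_rng(0).normal(600, 50, 10_000)  # scores at sign-off
live = np.random.default_rng(1).normal(650, 70, 10_000)      # drifted production scores
psi = population_stability_index(baseline, live)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate
status = "investigate" if psi > 0.25 else "monitor" if psi > 0.1 else "stable"
print(f"PSI = {psi:.2f} -> {status}")  # the chain escalates on 'investigate'
```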

This is where data, automation and governance meet at a point of perfect convergence.

Automated Discovery and Classification

As your organisation's AI maturity grows, manual processes become unsustainable. Implement tools that can scan your environment for AI usage, monitor cloud resources, and integrate with procurement systems.
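A first pass at discovery doesn't have to be sophisticated. The sketch below scans repositories' dependency files for common AI/ML packages as a way of surfacing shadow AI; the package list is illustrative:

```python
from pathlib import Path

# Packages whose presence suggests AI/ML usage; extend to suit your estate.
AI_PACKAGES = {"openai", "anthropic", "transformers", "torch",
               "tensorflow", "langchain", "scikit-learn"}

def scan_for_ai_dependencies(root: str) -> dict[str, set[str]]:
    """Flag repositories whose Python dependency files pull in AI packages."""
    findings: dict[str, set[str]] = {}
    for req in Path(root).rglob("requirements*.txt"):
        names = set()
        for line in req.read_text().splitlines():
            entry = line.strip().lower()
            if entry and not entry.startswith("#"):
                names.add(entry.split("==")[0].split(">=")[0].strip())
        hits = names & AI_PACKAGES
        if hits:
            findings[str(req)] = hits
    return findings

for path, packages in scan_for_ai_dependencies("./repos").items():
    print(f"{path}: {sorted(packages)}")  # feed each hit into the catalogue
```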

Evidence Capture and Documentation

Automate the collection of model documentation, data lineage, training datasets, and performance metrics. This isn't just about efficiency - it's about creating a consistent, reliable evidence base for auditors and regulators. Version control becomes crucial here, maintaining a clear audit trail of decisions and changes.
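Cryptographic fingerprints are a cheap way to anchor that audit trail. The sketch below hashes an artefact (a training dataset or model file, say) so the evidence base can prove exactly which version a decision relied on; the schema is a hypothetical example:

```python
import hashlib
from pathlib import Path

def fingerprint_artifact(path: str) -> dict[str, str]:
    """Hash a training dataset or model file so the audit trail can prove
    exactly which version a given decision relied on."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {"artifact": path, "sha256": digest}

# e.g. fingerprint_artifact("models/credit_model_v3.pkl")  # hypothetical path
```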

Dynamic Control Application

Build automated triggers for reviews based on risk classification changes, performance degradation, data drift, and usage pattern changes. One healthcare organisation caught a potentially serious model drift issue within hours rather than weeks because of their automated monitoring system.
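One way to express such triggers is as a small, declarative rule set evaluated against live metrics. The rules and thresholds below are illustrative assumptions:

```python
# Declarative review triggers evaluated against live metrics.
REVIEW_TRIGGERS = {
    "risk_reclassification": lambda m: m["risk_rating"] != m["approved_rating"],
    "performance_degradation": lambda m: m["accuracy"] < m["approved_accuracy"] - 0.05,
    "data_drift": lambda m: m["psi"] > 0.25,
    "usage_spike": lambda m: m["daily_requests"] > 2 * m["baseline_requests"],
}

def triggered_reviews(metrics: dict) -> list[str]:
    return [name for name, rule in REVIEW_TRIGGERS.items() if rule(metrics)]

metrics = {"risk_rating": "medium", "approved_rating": "low",
           "accuracy": 0.88, "approved_accuracy": 0.91,
           "psi": 0.12, "daily_requests": 5_000, "baseline_requests": 2_000}
print(triggered_reviews(metrics))  # ['risk_reclassification', 'usage_spike']
```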

Continuous Monitoring and Integration

Integrate your governance system with existing tools and platforms. Connect with your ITSM systems, risk management platforms, development pipelines, and monitoring tools. This integration makes governance part of your organisation's natural workflow rather than an additional burden.
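Integration can start with something as simple as a webhook that raises a ticket when a review is triggered. The endpoint and payload schema below are placeholders for whatever your ITSM tool actually exposes:

```python
import json
import urllib.request

def raise_governance_ticket(model_id: str, reasons: list[str]) -> None:
    """Raise a review ticket in the ITSM system when a trigger fires."""
    payload = json.dumps({
        "summary": f"AI governance review required: {model_id}",
        "description": "Triggered by: " + ", ".join(reasons),
        "queue": "ai-governance",
    }).encode()
    request = urllib.request.Request(
        "https://itsm.example.internal/api/tickets",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # lands in the team's existing workflow
```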

Looking Ahead

As AI technology evolves, so too must our governance frameworks. The organisations that succeed will be those that build flexible, adaptable governance structures that can evolve with the technology whilst maintaining appropriate risk controls.

The journey to effective AI governance is continuous. Your framework must adapt as AI technology evolves and regulatory requirements change. The key is building a foundation that can evolve while maintaining control effectiveness.

Remember that successful AI governance isn't about creating perfect controls - it's about building a system that protects your organisation while enabling innovation. Start simple, learn continuously, and automate thoughtfully. Most importantly, keep your focus on practical effectiveness rather than theoretical perfection.
