BOARD BRIEF: AI USE POLICY

For: Board and Executive Leadership
Re: Governance approach to generative AI
Date: [Date]

Why This Approach

Generative AI represents a capability shift, not just another software tool. Organizations that use it well can achieve mission outcomes that weren't previously possible: analyzing community input at scale, making services accessible in ways that were once cost-prohibitive, and making better-informed decisions faster.

The governance challenge is real: AI can compromise privacy, perpetuate bias, and produce inaccurate outputs. But the solution isn't prohibition; it's disciplined enablement.

Our approach is permissive in pursuit of mission. We're giving staff permission to experiment with AI to advance impact, within clear boundaries that protect privacy, equity, and trust.

This isn't risk avoidance. It's risk management that enables capability building.

Risk Stance and Controls

What we're protecting:

  • Privacy and confidentiality of community members and staff

  • Equity and fairness in how we serve communities

  • Accuracy and trustworthiness of our work

  • Our reputation and stakeholder trust

How we're protecting it:

  • Clear safety rules: no personal data in unapproved tools, human oversight required for consequential decisions, active bias checking

  • Explicit prohibitions: no deepfakes, no impersonation, no automated decisions about people

  • Approved tools list with privacy controls

  • Incident protocol with immediate escalation

  • Transparency framework that clarifies when to disclose AI use, preserving trust while enabling experimentation

  • Quarterly policy reviews

Risk appetite: We accept the risk of staff experimentation within guardrails. We do not accept risks to privacy, equity, or trust. When those are at stake, we require review and approval before proceeding.

Governance Structure

Roles:

  • Executive Sponsor: Sets risk appetite, receives quarterly reporting on use patterns, incidents, and impact

  • AI Steward: Coordinates tools, training, reviews; first escalation point for staff questions

  • Privacy/Security Leads: Ensure legal compliance and security controls

  • Team Leads: Coach practice, approve higher-risk uses in their areas

  • All Staff: Experiment within policy, use judgment, share learning

Oversight:

  • Monthly: AI Steward reviews use patterns, near-misses, tool requests

  • Quarterly: Executive receives report on impact outcomes, incidents, policy evolution

  • As needed: Escalation for novel high-risk uses, privacy concerns, or incidents

What We're Measuring

Success is mission outcomes, not adoption rates:

  • Impact: Are we serving more people or serving them better? Do we have evidence we didn't have before?

  • Capability: Are we doing things we couldn't do previously?

  • Safety: Incident rate, bias flags, privacy near-misses

  • Learning: Staff confidence, shared practices, policy improvements

We'll report on these quarterly with specific examples of mission-advancing use.

Compliance Assurance

This approach aligns with:

  • PIPEDA and applicable provincial privacy legislation

  • Accessibility commitments (AODA where relevant)

  • Records management obligations

  • Intellectual property protections

The policy operationalizes these requirements for AI-specific contexts. Privacy and legal leads have reviewed the policy and will participate in quarterly policy reviews.

If federal AI legislation (Bill C-27) passes or provincial requirements emerge, we'll update accordingly.

What Could Go Wrong

Most likely risks:

  • A staff member inadvertently puts sensitive data into an unapproved tool → Incident protocol, immediate containment, updated training

  • AI-generated content containing bias or errors ships externally → Human review requirement, catch before publication, issue a correction if needed

  • A tool we use changes its privacy terms or has a security issue → Regular tool reviews, ability to switch tools quickly

  • Confusion about when to disclose AI use → Clear framework based on trust preservation and authorship

Lower probability, higher impact:

  • An automated decision disadvantages a protected group → Explicit prohibition on automated decisions about people, equity checks required

  • A major privacy breach at an AI tool vendor → Use only tools with strong security and privacy commitments, incident response plan

Mitigation: The policy's three safety rules and transparency framework address the most likely failure modes. Quarterly reviews let us spot emerging patterns and adjust before they become incidents.

Why Now

Staff are already experimenting with AI, some effectively, some not. An explicit permissive policy:

  • Gives clear guidance so good experiments can scale

  • Protects the organization by establishing boundaries

  • Positions us to learn faster than peers who are prohibiting or ignoring AI

  • Enables mission outcomes that our current capacity can't achieve

The risk of not having a policy is uncoordinated, hidden AI use that leadership can't see or guide.