Using AI to Do Hard Work Better: Why Your Organization Needs an Enabling Policy

For 15 years, I've watched incredible people in social impact spaces do extraordinarily hard work with limited resources. Now AI shows up, a tool that can substantively help with significant portions of that work. But for organizations that are stuck, and for those who want to leap forward, what's needed is a different approach: one that enables experimentation in service of mission while protecting what matters.

Why This Matters

The work is hard and getting harder. AI won't solve that, but when used well, it can help organizations build capacity to do work they literally couldn't do before. Not faster work, different work. Better work. This isn't about inevitable adoption. It's about recognizing an opportunity to engage with something powerful in ways that serve the communities we're trying to help. A strong enabling policy creates clarity and permission while protecting privacy, equity, and trust.

Over the last 15 years, I've had the privilege of working with hundreds of organizations in social impact spaces—mostly nonprofits, occasionally private sector organizations with social impact teams. Regardless of sector, my work is in spaces where people are trying to make the world better.

And what I observe consistently is incredible people doing extraordinarily hard work. Serving communities, addressing complex social problems, stretching limited resources to meet unlimited need. The work was already difficult. Now it's happening against a backdrop of cascading crises, increasing complexity, and exhausted staff.

And then generative AI shows up.

For the past two years, I've been experimenting with how AI can support the kind of strategic work I do. What I've discovered is that when you learn to use it well, it feels like a superpower. Not because it replaces human judgment (it doesn't), but because it can hold cognitive load that was previously overwhelming. It lets you analyze patterns at scale that would take weeks. It helps you iterate on ideas faster. It frees up mental space for the relationship-building and strategic thinking that actually moves missions forward.

When an entire team learns to work this way, the capability shift is extraordinary.

The Opportunity

AI isn't one thing, just like the internet isn't one thing. It can be used for immense good or significant harm. What matters is how we engage with it.

And I think we have an opportunity to engage with something that can actually, substantively help with a significant portion of our work. Not everything, but enough that it matters.

Policy as Enablement

Years ago, I served on the board of Kitchener-Waterloo Community Foundation during a complete overhaul of our governance policies. We worked with a consultant who fundamentally reframed how I think about policy.

She challenged us to stop thinking about policies as lists of restrictions, things you can't do, and start thinking about them as enablement: defining what we're trying to achieve and setting guardrails that let people play safely within those boundaries.

Policy can be carrots, sticks, or sermons. The risks around AI are real, worrying, and need to be taken seriously. But when worry drives the policy, the result is that organizations paralyze themselves just when they could be building new capabilities.

What if we approached AI policy differently? What if we explicitly designed it to enable experimentation in pursuit of mission, while protecting the privacy, equity, and trust that make mission work possible?

What Becomes Possible

Here's what I mean by capability shift:

Right now, analyzing patterns across a thousand qualitative survey responses takes a team weeks of manual coding and synthesis. With AI, you can speed up parts of that kind of work. And the insights are often richer because you can go deeper, integrating all of those qualitative responses rather than accepting whatever could be produced under time pressure.
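For teams with a bit of technical capacity, here is a minimal sketch of what that survey example can look like in practice. It assumes the OpenAI Python SDK and an API key in your environment; the model name, prompt, and helper function are illustrative placeholders rather than a recommendation, and the human review step at the end is the part that matters.

```python
# Minimal sketch: asking a language model to suggest themes for anonymized
# survey responses, with staff verifying the results before anything is used.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def suggest_themes(response_text: str) -> str:
    """Return up to three draft themes for a single anonymized survey response."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever tool your policy approves
        messages=[
            {"role": "system",
             "content": ("You are helping with qualitative coding of survey data. "
                         "Suggest up to three short themes for the response. "
                         "Do not repeat any personal details.")},
            {"role": "user", "content": response_text},
        ],
    )
    return result.choices[0].message.content

# Responses should already be anonymized before they reach this step
# (see the privacy rule in the sample policy below).
responses = [
    "The intake process took too long and the forms were confusing.",
    "Staff were welcoming, but the evening hours did not work for my shifts.",
]

draft_themes = [suggest_themes(r) for r in responses]

# Human in the loop: staff review the draft themes, merge or rename them, and
# spot-check a sample against the original responses before using the analysis.
for response, themes in zip(responses, draft_themes):
    print(f"{themes}  <-  {response[:60]}")
```

The model only drafts candidate themes; the coding frame, the verification, and the interpretation stay with your team.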

Need to make complex information accessible? Draft it once, then create plain-language versions, translations, and audio scripts. Have someone verify accuracy and tone. You've just reached communities you couldn't serve before without adding headcount or significant cost.

Want to strengthen decision-making? Generate options based on criteria you care about, apply human judgment and equity checks, make better choices faster while keeping bias risk low.

But here's what matters more: these are just a few examples, nowhere near the limit of what we could be doing.

This isn't about you doing any of those things specifically. The real opportunity isn't in replicating these examples; it's in discovering what becomes possible when people in your organization bring AI into their specific work, with their specific knowledge of what your communities need.

What if you could meaningfully analyze a thousand qualitative responses instead of a hundred? What if you could tag and verify patterns in community feedback at a scale that lets you intervene quickly, while problems are still manageable instead of after they've become crises?

The problems we're trying to solve—homelessness, food insecurity, climate adaptation, health equity—are increasingly complex. Cascading emergencies are becoming the norm. Anything that can help us keep pace with that complexity, while staying grounded in the communities we serve, is worth learning to use well.

The Real Challenge

Here's what I'm seeing across organizations right now:

Some are banning AI use out of fear. Others are letting people use it in uncoordinated, hidden ways that leadership can't see or guide. And increasingly, I'm working with organizations and clients who want to use these tools and are asking how to bring AI in thoughtfully.

The real challenge is that this is brand new. Nothing quite like it has existed before, it changes incredibly fast, and it is worrying.

And yet, used well, it addresses capacity issues we all have.

I don't think we should cede this capability advantage to others simply because we didn't engage. So we need to frame this entirely differently: permissive in pursuit of mission.

Give staff permission to experiment with AI to advance impact. Set clear boundaries that protect privacy, equity, and trust. Create structures for learning together. Make it explicitly about serving communities better, not just working faster.

What This Looks Like in Practice

I've been developing an AI use policy that embodies this approach, testing it through iterations and feedback with clients and other organizations. For me, it brings what I'm talking about to life: a concrete example of what this could look like. Different organizations I'm working with are using versions of it in different ways, and not all of it applies to everyone. But it shows what I mean by using policy to enable this kind of work.

The organizing principle is simple:

Before using AI for any work task, ask yourself: Does this help us achieve better outcomes for the communities we serve?

If yes: explore it.
If no: probably don't bother.
If unsure: try it and reflect on whether it actually helped.

The policy opens with possibility, not prohibition. It acknowledges that we're at the beginning of understanding what AI can do, not the middle. It invites people to discover breakthrough applications, not just replicate examples.

Then it provides three safety rules that let people explore without compromising what matters:

  1. Protect privacy: no personal information in unapproved tools, anonymize when needed

  2. Keep humans in the loop: AI drafts, people craft; accountability stays with humans

  3. Check for harm: actively look for bias, verify you're not disadvantaging any group

It explicitly names what's off-limits (deepfakes, impersonation, automated decisions about people) while making clear that thoughtful experimentation is welcome.

It addresses the nuanced question of transparency—not "always disclose AI use" but "preserve trust by being clear about accountability and quality assurance."

And it creates structures for learning: how to get started, where to get help, how to share what works, how the policy itself will evolve based on what we discover.

The full policy is included below as an example. It isn't a template to adopt wholesale, but feel free to copy it and use it as a starting point. It's an invitation to think about what enablement could look like in your context.

Why This Matters

AI exists. None of us made it. We can't make it go away. The scientists who created it are legitimately concerned about its power and potential for harm.

But for organizations serving communities with urgent needs and limited resources, the response to "this is powerful" can't be to avoid it. It has to be: how do we learn to use this power responsibly, in pursuit of outcomes that matter?

This isn't about inevitable adoption for its own sake. It's about recognizing that we have an opportunity, maybe even a responsibility, to figure out how to wield this capability for good.

The work you do is hard. It's getting harder. AI won't solve that, but it can help you build capacity to stay in the fight: to analyze community needs more deeply, make decisions with better evidence, reach people you couldn't serve before, and free up your best thinking for the relationships and strategic work that actually changes outcomes.

When I work with this technology well—when I bring my expertise and judgment, use AI to handle cognitive load and pattern recognition, and maintain ownership of the thinking—I can do work I literally couldn't do before. Not faster work. Different work. Better work.

I think that's available to a lot of people working in social impact spaces. And I think the communities we serve benefit when we figure out how to make it happen safely.

So: draft an enabling policy. Train your people. Give them permission to experiment in service of mission. Build guardrails that protect what matters. Learn together what becomes possible.

The complexity isn't going away. But our capacity to meet it can grow.


Sample Policy

AI USE POLICY: PERMISSIVE IN PURSUIT OF MISSION

What Just Became Possible

Organizations like ours can now:

Analyze qualitative data at scale. Synthesize hundreds of survey responses in hours instead of weeks. Iterate on the analysis multiple times to find patterns we’d miss under time pressure. Use those insights to redesign programs while the feedback is still fresh.

Make our work accessible. Draft complex information once, then create plain-language versions, translations, and audio scripts. Have someone verify accuracy and tone. Reach people we couldn’t serve before without adding headcount.

Strengthen decision-making. Generate options based on criteria we care about, then apply human judgment and equity checks before finalizing. Make better choices faster while keeping bias risk low.

This isn’t science fiction. This is what generative AI makes possible right now. Not replacing human judgment, amplifying human capacity to do work that matters.

Why This Policy Exists

This policy has one purpose: to enable AI use that advances our mission.

We’re introducing AI capabilities permissively. We want you to explore, experiment, and discover new ways to extend your impact. The technology is powerful enough that careless use could cause harm, but that’s not a reason to avoid it. It’s a reason to engage with it intentionally.

The One Question

Before using AI for any work task, ask yourself:

Does this help us achieve better outcomes for the communities we serve?

If yes: explore it.

If no: probably don’t bother.

If unsure: try it and reflect on whether it actually helped.

This isn’t about efficiency for its own sake. It’s about building capacity to do work that matters.

What Good Use Looks Like

You’re using AI well when:

  • You’re doing something you couldn’t do before, or doing it in a way that creates better outcomes

  • You’re maintaining your judgment—AI drafts, you craft

  • You’re protecting the privacy and dignity of the people we serve

  • You can explain how this use connects to mission impact

  • You maintain authorship—you can defend every choice in the final work

You’re probably off track when:

  • You’re just making something faster without thinking about whether it should exist

  • You’re letting AI make decisions that affect people without your oversight

  • You’re uncomfortable explaining this use to the communities we serve

  • You can’t articulate the mission connection

  • You couldn’t explain the intellectual work that went into it

Three Safety Rules

1. Protect Privacy

  • Don’t put personal information (names, addresses, sensitive details about individuals) into AI tools unless they’re on our approved secure list

  • If you need to work with real data, anonymize it first—replace names with roles, remove identifying details

  • When in doubt, ask: would the person this data is about be comfortable with how I’m using it?

2. Keep Humans in the Loop

  • AI can help draft, analyze, or suggest, but you make the final call

  • Anything that affects someone’s access to services, benefits, or opportunities requires human review

  • If it matters to mission, a person needs to verify it before it’s live

  • You remain accountable for the work—AI is a tool you used, not an excuse

3. Check for Harm

  • Before using AI outputs, ask: could this disadvantage or misrepresent any group?

  • If AI is helping with decisions about people, verify the results don’t perpetuate bias

  • If something feels off, pause and get a second opinion

Transparency and Disclosure

When AI materially contributes to your work, transparency matters. But the goal is to preserve trust and be clear about accountability, not to announce “AI was here” reflexively.

When to disclose:

Always disclose when:

  • The AI involvement would change how someone evaluates the work or trusts it

  • It’s external-facing work where accuracy or accountability is critical (research, recommendations, decisions affecting people)

  • You’re unsure whether your audience would care

Probably doesn’t need disclosure when:

  • You maintained authorship and intellectual ownership (you can defend every choice, the thinking is yours)

  • The work is internal or operational

  • The value is in the outcome, not the authorship (formatting, translation, basic editing)

  • AI was a tool for refining work that’s substantively yours

When you do disclose, be specific:

Not helpful: “AI-assisted”

More helpful: “AI helped analyze survey patterns; findings were verified by staff and validated with community members”

Not helpful: “Written with AI”

More helpful: “I used AI to help organize and clarify my thinking; the ideas and judgment are mine”

The goal is to make clear who’s accountable and what quality assurance happened, not just to flag that AI was involved.

If you’re unsure: Ask yourself: “If someone asked how I created this, could I explain the process and demonstrate I did the intellectual work?” If yes, you probably maintain authorship. If the AI did thinking you can’t account for, disclose with specifics.

Explicitly Off-Limits

Some uses are prohibited:

  • No deepfakes or deceptive media: don’t create fake images, videos, or audio that misrepresent reality

  • No impersonation: don’t use AI to impersonate real people (mimicking their voice, writing style to pretend it’s them)

  • No automated decisions about people: eligibility, employment, or program placement decisions require human judgment (AI can help analyze, but not decide)

  • No misleading the public: don’t publish AI outputs as human-created work when the AI did intellectual work you can’t account for

Approved Tools [Note: As with all of this policy, you need to customize what’s allowed for your organization.]

Currently approved for use:

  • ChatGPT Plus

  • Claude Pro

  • Microsoft Copilot

  • Google Gemini Advanced

To request a new tool: Submit a brief request to [AI Steward] including: what you want to use it for, why existing tools don’t work, and basic privacy/security info about the tool. We’ll review and respond within [a week].

When You’re Unsure

Ask yourself:

  1. Could this affect someone’s access, safety, or dignity? Get input before proceeding

  2. Am I working with sensitive data? Make sure you’re using approved tools

  3. Is this public-facing? Have someone else review it first

  4. Can I explain and defend the intellectual work? If not, either revise or disclose with specifics

Still unsure? Ask [AI Steward name]. That’s what they’re there for.

Incident Protocol

If something goes wrong (wrong data exposed, biased output published, misleading information shared, AI outputs used without proper verification):

  1. Stop using it immediately

  2. Tell [AI Steward] right away

  3. We’ll fix it together and learn from it

Mistakes are how we learn. What matters is that we catch them and improve.

Sharing What You Learn

[Note: Insert specifics about how: Slack channel, monthly meeting, shared doc, etc.]

What helps everyone:

  • “I used AI for [task] and it helped by [specific outcome]”

  • “I tried this and it didn’t work because [reason]”

  • “Here’s a prompt that saved me hours: [example]”

  • “Here’s how I’m thinking about disclosure for [type of work]”

Your experiments make the whole organization smarter.

What We’re Tracking

We’re not measuring how much you use AI. We’re measuring whether it’s helping us achieve mission:

  • Are we serving more people or serving them better?

  • Are we making decisions with better evidence?

  • Are we hearing from communities we previously couldn’t reach?

  • Are we building capabilities we didn’t have before?

Compliance and Privacy

This policy aligns with our existing privacy and data governance obligations, including PIPEDA (or applicable provincial privacy legislation) and our accessibility commitments. The safety rules above operationalize those requirements for AI use specifically. If you’re working on something where compliance requirements are unclear, consult [privacy lead/legal counsel] before proceeding.

For detailed compliance guidance, see [link to detailed document].

This Will Evolve

We’ll review this policy every three months based on what we’re learning. Your input shapes that evolution.

The technology is changing fast. Our mission isn’t. This policy ensures the former serves the latter.

Questions? Ask [AI Steward name].
