
How to Write an AI Policy for Your Business

Every company needs AI guidelines. Here's a balanced template you can use right away, whether you're 3 or 300 people.

Daniel Dahlen

February 4, 2026

"Can we use ChatGPT at work?"

The question comes up more and more often. And the answer is usually... silence. Or a nervous "I don't know."

If you run a business and don't have clear guidelines for AI use, you have a problem. Not because AI is dangerous in itself. But because people are using it anyway, and without guidance, dumb things happen.

This guide helps you write an AI policy that works. Not one that nobody reads. Not one that bans everything. One that actually gives your team clear answers.

Why You Need a Policy (Even If You're Three People)

"We're a small company, do we really need a policy?"

Yes. Here's why.

People are already using AI. If you haven't given guidelines, they're guessing. And guesses lead to mistakes.

Customer data ends up in the wrong hands. Someone pastes a customer email into ChatGPT to write a response. Now that information is with OpenAI. Was that okay?

Legal liability is on you. If something goes wrong, it's the company that's responsible, not the employee who "didn't know."

Clarity reduces uncertainty. Instead of people wondering what's okay, they have a document to check.

It's about clarity, not control

A good AI policy doesn't ban usage. It makes clear what's okay, what requires thought, and what's forbidden. The point is that people should be able to use AI safely.

What Actually Happens When You Share Data with AI?

Before we dive into the policy, we need to understand what actually happens technically and legally when someone pastes information into ChatGPT or Claude.

You're Transferring Data to a Third Party

Every time you type something into an AI tool, it's sent to the provider's servers (OpenAI, Anthropic, Google, etc.). It doesn't matter if you have "opt-out" enabled. Your data:

  • Is sent over the internet to their servers
  • Is processed by their systems
  • Is stored for a period that varies depending on provider, plan, and settings
  • Is subject to their terms of service and privacy policy

What "Opt-Out of Training" Actually Means

Many people think opt-out means their data is private. That's not quite true.

| Aspect | With opt-out | Without opt-out |
| ------------------------------- | ------------------------------ | ------------------- |
| Data sent to servers | Yes | Yes |
| Stored by provider | Yes (retention varies by plan) | Yes (may be longer) |
| Used for training | No | Yes |
| Staff can review | For flagged content | For flagged content |
| You're sharing with third party | Yes | Yes |

Opt-out is not the same as private. It just means your data won't be used to improve future models. What matters is not just where the company is headquartered, but whether personal data is transferred outside the EU/EEA and what safeguards are in place (e.g. EU-US Data Privacy Framework, Standard Contractual Clauses, and technical controls).

The GDPR Perspective

When you paste personal data into an AI tool, the following happens legally:

  1. You (the company) are typically the data controller for your usage
  2. The provider is often a data processor in business/enterprise setups, where they process data on your behalf and a DPA is in place. In consumer services, the provider may process data for its own purposes, which changes the role distribution
  3. You always need a legal basis. In a workplace context, consent is rarely a good option (hard to claim it's truly voluntary). Legitimate interest or contractual necessity is often more realistic. In many cases you should do a risk assessment (and where needed a DPIA) before inputting personal data
  4. You need a Data Processing Agreement (DPA) with the provider in cases where they act as processor

Free versions typically lack adequate DPAs and admin controls for business use. Commercial plans (e.g. ChatGPT Team/Enterprise, Claude for Business) have this in place.

Trade Secrets and NDAs

Trade secrets require that the holder has taken reasonable measures to keep the information confidential. If you share secret information via a consumer service without proper agreements and controls, that can undermine this requirement, which may threaten trade secret status. If you have NDAs with clients or partners, sharing their information could be a breach of contract, regardless of what the AI provider does with the data.

The EU AI Act

Beyond GDPR, the EU AI Act is being phased in gradually. Major parts apply from August 2026, but some provisions (prohibited uses and AI literacy requirements) already took effect earlier.

The key things to know:

  • AI literacy: Organizations using AI systems must ensure their staff have sufficient competence. This is already a requirement.
  • Prohibited uses: Certain types of AI use are completely banned, e.g. social scoring and manipulation.
  • High-risk systems: If you use AI in contexts like HR recruitment, credit scoring, or similar, there are additional requirements for risk management, transparency, and documentation.

What does this mean in practice?

For most companies using ChatGPT and Claude as productivity tools, the requirements are manageable. But it's good to be aware of the rules, especially if you use AI in decisions that directly affect people.

Free Accounts vs Business Accounts

This is a crucial distinction that many miss.

What You Get with Free Accounts

  • Opt-out from training (if you actively enable it)
  • Basic functionality
  • No legal guarantees for business use
  • No DPA (Data Processing Agreement)
  • Data may be stored longer

What You Get with Business Accounts

ChatGPT Team/Enterprise and Claude for Business/Enterprise:

  • Opt-out from training by default
  • Data Processing Agreement (DPA) included
  • SOC 2 certification
  • Shorter data retention
  • Admin control over users
  • SSO integration (Enterprise)
  • Dedicated support

Price Comparison (approximate prices, subject to change)

| Tool | Price/user/month | Best for |
| ------------------- | ---------------- | -------------------------- |
| ChatGPT Free | $0 | Personal use |
| ChatGPT Plus | ~$20 | Individual professionals |
| ChatGPT Team | ~$25 | Small teams (2-149 people) |
| Claude Free | $0 | Personal use |
| Claude Pro | ~$20 | Individual professionals |
| Claude for Business | ~$30 | Companies |

Recommendation

For companies with employees: use business versions. The cost is negligible compared to the risk of GDPR fines or breach of contract. ChatGPT Team or Claude for Business gives you legal ground to stand on.

The Three Questions Your Policy Must Answer

Forget long legal texts. Your policy needs to answer three questions:

1. Which AI Tools Can We Use?

List approved tools explicitly:

  • ChatGPT (via company account or personal?)
  • Claude
  • Microsoft Copilot
  • Google Gemini
  • Industry-specific tools

Be clear about whether personal free accounts are okay or if you require business versions.

2. What Can We Share with AI Tools?

This is the core question. Categorize information:

Green (okay to share):

  • Public information
  • Generic questions without company-specific data
  • Information that's already on your website

Yellow (think first):

  • Internal material that isn't secret
  • Drafts and ideas
  • Aggregated data without personal information

Red (never):

  • Personal data (customer info, social security numbers, health information)
  • Passwords and login credentials
  • Confidential agreements
  • Non-public financial information
  • Trade secrets

3. Who Is Responsible for What?

Clarify responsibility:

  • Every employee is responsible for following the policy
  • Managers are responsible for their team knowing the policy
  • A named person (or function) owns questions and updates

Checklist: What Can Be Shared?

Here's a practical checklist for everyday decisions:

Before pasting something into an AI tool, ask yourself:

☐ Does this contain personal data? (names + context, email addresses, phone numbers)
☐ Would the customer be uncomfortable if they knew I shared this?
☐ Is this information competitors shouldn't have?
☐ Is there an NDA or agreement covering this information?
☐ Is this something that should only exist internally?

If you answer yes to any question: Either remove the sensitive information first, or don't share.

GDPR and personal data

If you paste personal data into ChatGPT or Claude, you're processing personal data with a third party. You always need a legal basis, and in a workplace context consent is rarely a good option (hard to claim it's truly voluntary). Legitimate interest or contractual necessity is often more realistic. In practice: anonymize or remove personal data before using AI tools.
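If you want a technical backstop for that rule, a simple pre-filter can strip the most obvious identifiers before anything is pasted into an AI tool. Here's a minimal Python sketch; the patterns and placeholder labels are illustrative assumptions, and regexes won't catch names or free-text references, so treat it as a helper rather than a guarantee:

```python
import re

# Illustrative patterns only -- real personal data detection needs more
# than regexes (names and free-text identifiers will slip through).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s/-]{6,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before AI use."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(redact("Reply to anna@example.com, phone +46 70 123 45 67."))
# -> Reply to [email removed], phone [phone removed].
```

Even a crude filter like this catches the careless paste, which is where most incidents start.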

Template: Copyable AI Policy

Here's a template you can customize. It's intentionally short so people actually read it.


[Company Name] AI Policy

Version: 1.0
Date: [Date]
Owner: [Name/role]

Purpose

This policy provides guidelines for responsible use of AI tools at work. The goal is to benefit from AI while protecting customer data and trade secrets.

Approved Tools

The following AI tools are approved for work-related use:

  • [List your approved tools]

Other tools may be used after approval from [responsible person].

What Can Be Shared with AI Tools

Okay to share:

  • Public information
  • Generic questions without customer data
  • Own texts and drafts (without sensitive info)

Requires thought (remove sensitive info first):

  • Internal material
  • Business ideas and strategies

Forbidden to share:

  • Personal data (customer names, contact info, etc.)
  • Passwords and login credentials
  • Confidential agreements and documents
  • Financial information that isn't public

Responsibility

  • Every employee is responsible for following this policy
  • When uncertain, ask [responsible person]
  • AI-generated content should always be reviewed before external use

Violations

Violations of this policy are handled according to [your normal procedure].


Copy, customize, use.

How to Optimize Your Company's AI Setup

A policy is good, but the right technical setup makes it easier to follow. Here's a practical guide.

Step 1: Choose the Right Tools and Plan

For teams of 2-20 people:

  • ChatGPT Team or Claude for Business
  • One admin who manages accounts
  • Shared guidelines for everyone

For larger organizations (20+):

  • ChatGPT Enterprise or Claude Enterprise
  • SSO integration with your identity provider
  • Centralized administration and logging

Step 2: Configure Correctly

  1. Create a company account (not individual paid plans)
  2. Invite users via the admin panel
  3. Verify that DPA is in place (usually automatic with business plans)
  4. Turn off optional data sharing settings if you want to be extra careful

Step 3: Implement Technical Guardrails

Option A: Trust policy + training

  • Works for most small companies
  • Requires people to follow the guidelines

Option B: Use API + custom application

  • Build an internal tool that filters sensitive data
  • More control, but requires technical expertise
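A minimal sketch of what Option B can look like, assuming the official OpenAI Python SDK (the blocklist markers and model name are placeholders; any provider's SDK follows the same pattern):

```python
from openai import OpenAI  # pip install openai; other provider SDKs work similarly

# Placeholder red-category markers -- extend with your own policy's terms.
BLOCKLIST = ("password", "confidential", "ssn")

def safe_ask(prompt: str) -> str:
    """Refuse clearly red-category prompts, then forward the rest to the provider."""
    lowered = prompt.lower()
    for marker in BLOCKLIST:
        if marker in lowered:
            raise ValueError(f"Blocked: prompt contains red-category marker '{marker}'")
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name; choose per your plan
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

A real implementation would combine this with redaction and logging, but the shape is the same: one internal entry point where the policy is enforced in code, not just in a document.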

Option C: Specialized enterprise solutions

  • Tools like Microsoft Copilot for Microsoft 365 keep data within your tenant
  • More expensive but higher control

Step 4: Create Templates and Custom Instructions

Help your team use AI effectively:

  • Custom Instructions in ChatGPT to define context and tone
  • Projects in Claude to gather relevant information
  • Internal prompt templates for common tasks

This reduces the risk of people needing to share sensitive context every time.
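For example, a hypothetical internal template for meeting summaries might read: "Summarize the notes below in five bullet points for an internal status update. The notes have already been stripped of customer names and contact details." Baking the policy's rules into the template means people don't have to remember them every time.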

Checklist for Optimal Setup

☐ Business account with DPA in place
☐ All users invited via admin (no personal accounts)
☐ Opt-out from training enabled (default on business plans)
☐ Policy documented and communicated
☐ Training completed
☐ Responsible person designated for questions
☐ Calendar reminder set for semi-annual review

How to Implement the Policy (Without People Ignoring It)

A policy nobody reads is worthless. Here's how to make it real:

1. Keep It Short

One page. Two max. Nobody reads ten pages.

2. Explain Why

"We want you to be able to use AI. This policy exists to make that safe." Not "You must follow the rules."

3. Go Through It in a Meeting

Don't just send an email. Take 15 minutes at the next meeting. Answer questions.

4. Make It Accessible

Put it where people actually find it. Not in a folder nobody opens.

5. Give Concrete Examples

"If you want to summarize a customer meeting, remove the customer's name and company first." Concrete beats abstract.

6. Update Regularly

The AI landscape changes fast. Plan to review the policy every six months.

Start with dialogue

Before writing the policy, ask your team: How are you using AI today? What feels uncertain? Their answers help you write a policy that addresses real questions.

TLDR

  1. You need an AI policy even if you're a small team. People are using AI anyway.
  2. Opt-out ≠ private. Your data is sent and stored regardless. It just won't be used for training.
  3. Business accounts matter. They provide DPA, legal basis, and better data protection.
  4. Three questions to answer: Which tools? What can be shared? Who's responsible?
  5. Green/yellow/red for categorizing information makes it easy to understand.
  6. Implement it properly. Go through it in a meeting, give examples, update semi-annually.

A good policy doesn't ban AI. It makes it possible to use it safely and legally. An AI policy is part of a broader AI strategy that helps your business use AI the right way.

Frequently Asked Questions

Is opt-out enough to protect our data?

No. Opt-out just means your data won't be used for training. It's still sent to the provider's servers, stored, and subject to their terms. How long data is retained varies by provider and plan. For real data protection, you need business accounts with DPA.

Do we need business accounts or are free versions enough?

Legally speaking: free versions typically lack DPA (Data Processing Agreement) required for GDPR compliance when handling personal data. For companies with employees, business versions are recommended. The cost is low compared to the risk.

What happens if an employee accidentally shares customer data?

The company is liable, not the employee. That's why a clear policy is important: it documents that you had guidelines, and it reduces the risk of it happening. In case of a GDPR incident, you may need to report to your data protection authority within 72 hours.

Is it safe to use AI for sensitive industries (healthcare, legal, finance)?

It depends on how you set it up. Regulated industries often have additional requirements (patient data privacy, attorney-client privilege, financial regulations). Use enterprise solutions with higher security, and be extra careful to never share protected information.

How often should we update our AI policy?

Every six months at minimum. The AI landscape, tools, and legislation change quickly. Set a recurring reminder in your calendar.


Need help developing an AI policy customized for your business? Book a call and we'll put together something that works for you.
