AI-powered tools are showing up everywhere in therapy practice: note-taking assistants that generate session summaries, intake form processors that extract key information, scheduling assistants that handle client communication, and even AI-informed treatment planning suggestions. The promise is compelling—less time on administrative tasks, more time for clinical work. But for Canadian therapists, adopting any AI tool that touches client data means navigating a minefield of privacy, consent, and regulatory questions that most vendors are not equipped to answer.
This guide breaks down what the Office of the Privacy Commissioner (OPC), provincial privacy commissioners, and regulatory colleges have actually said about AI in healthcare settings, and gives you a practical framework for evaluating whether a specific tool is safe to use in your practice.
The OPC's Position on AI and Personal Information
The Office of the Privacy Commissioner of Canada has been increasingly vocal about AI since releasing its Principles for Responsible, Trustworthy and Privacy-Protective Generative AI Technologies in December 2023. While these principles are not legally binding on their own, they signal how the OPC will interpret PIPEDA complaints involving AI and are likely to shape future enforcement actions.
The OPC's core positions relevant to therapy practices include:
- Purpose limitation — personal information collected for one purpose (providing therapy) cannot be repurposed for AI model training without separate, meaningful consent. If your AI note-taking tool feeds session content into a model improvement pipeline, that is a PIPEDA violation unless the client explicitly consented to that specific use.
- Transparency — individuals have the right to know when AI is being used to process their personal information and how it affects decisions about them. In a therapy context, this means clients must be told that an AI tool is summarizing their sessions, categorizing their concerns, or generating notes based on what they said.
- Meaningful consent — the OPC has stressed that consent for AI processing must be specific, informed, and genuinely voluntary. Burying an AI disclosure in paragraph 47 of your intake paperwork does not meet this standard. The client must understand what the AI does, what data it accesses, and what happens to that data.
- Accuracy obligations — if AI-generated notes contain errors (and they will), PIPEDA's accuracy principle requires that you correct the record. You remain responsible for the accuracy of clinical documentation regardless of whether a human or an AI drafted it.
Provincial Privacy Commissioners
Provincial commissioners have added their own layers. Ontario's Information and Privacy Commissioner (IPC) has published guidance on AI in healthcare that emphasizes data minimization—AI tools should only access the minimum personal health information necessary to perform their function. British Columbia's OIPC has focused on the requirement for Privacy Impact Assessments (PIAs) before deploying AI tools that process personal information, a requirement that applies to any B.C. therapist considering an AI note-taking assistant.
Alberta's OIPC has been particularly direct: their position is that health information processed by AI must remain within the custodian's control, and cloud-based AI tools that transmit health information to third-party servers require explicit, informed consent from the individual and may require a PIA filed with the commissioner's office.
The Data Residency Problem
This is where most AI tools fail the Canadian compliance test. The vast majority of AI-powered note-taking, transcription, and documentation tools route data through servers located in the United States. Some route through servers in multiple countries. When your client's session content leaves Canada, several legal frameworks come into play:
- PIPEDA cross-border transfer provisions — PIPEDA does not prohibit transferring personal information outside Canada, but it requires that the transferring organization remain accountable for the information and ensure comparable protection. In practice, this means you need a contractual agreement with the AI provider that guarantees they will protect the data to Canadian standards.
- Provincial health privacy laws — Ontario's PHIPA, B.C.'s PIPA, and Alberta's HIA take stricter positions on data residency than PIPEDA does. PHIPA, for example, requires that health information custodians ensure personal health information stored outside Ontario receives comparable privacy protection, and some interpretations require explicit consent from the individual before any cross-border transfer of health information.
- US CLOUD Act exposure — any data stored on US servers or by US-headquartered companies is potentially subject to the US CLOUD Act, which allows US law enforcement to compel disclosure of data regardless of where it is physically stored. This is a real concern for therapy records, which may contain sensitive disclosures about criminal activity, immigration status, or other matters with legal implications.
Key question to ask any AI vendor: "Where exactly is my data processed? Where is it stored? Is any data, including temporary processing data, transmitted outside Canada at any point?" If they cannot give you a clear, specific answer, that is your answer.
What Your Regulatory College Says
Canadian regulatory colleges have been slower to issue specific guidance on AI, but several have now weighed in:
CRPO (Ontario)
CRPO's Professional Practice Standards require that any technology used in practice must meet the same confidentiality standards as in-person service delivery. While CRPO has not issued AI-specific guidance as of early 2026, their existing standards on electronic records and third-party service providers apply directly. If you use an AI tool to generate session notes, CRPO expects you to:
- Ensure the tool meets PHIPA requirements for personal health information
- Maintain a written agreement with the AI provider addressing confidentiality, data use, and breach notification
- Review and approve all AI-generated clinical content before it becomes part of the official record
- Disclose the use of AI tools to clients as part of informed consent
BCACC (British Columbia)
BCACC has been more explicit, issuing a practice advisory in 2025 reminding members that the use of AI tools in clinical practice does not diminish the counsellor's professional responsibility for clinical documentation. BCACC's position includes:
- AI-generated notes must be reviewed, edited, and approved by the counsellor before being entered into the clinical record
- Clients must be informed about any AI tools used in their care, including what data the tool accesses
- A Privacy Impact Assessment should be completed before adopting any new AI tool
- The counsellor remains fully accountable for the accuracy and completeness of clinical records regardless of whether AI was involved in their creation
CCPA National Standards
CCPA's updated Standards of Practice address technology use broadly, emphasizing that counsellors must be competent in the technologies they employ and must ensure that third-party technology providers cannot gain unauthorized access to client information. CCPA has also stressed that the therapeutic relationship must not be compromised by technology use—if a client is uncomfortable with AI being used in their sessions, the counsellor must respect that and provide the same quality of service without AI assistance.
Consent Requirements for AI-Assisted Documentation
Based on the combined guidance from privacy commissioners and regulatory colleges, meaningful consent for AI use in therapy should address the following:
- What the AI tool does — describe in plain language what the tool does (e.g., "I use a software tool that uses artificial intelligence to help me generate session notes based on our conversation").
- What data it accesses — specify whether the tool processes audio recordings, transcripts, typed notes, or other session data.
- Where data is processed and stored — disclose the country and, ideally, the specific cloud provider where processing occurs.
- Data retention — disclose how long the AI provider retains the data and whether it is deleted after processing or stored indefinitely.
- Model training — state explicitly whether your client's data is used to train or improve the AI model.
- Right to opt out — clients must have the genuine ability to decline AI-assisted documentation without it affecting their care. You must be prepared to take manual notes if a client opts out.
- Human review — confirm that all AI-generated content is reviewed and approved by the therapist before being finalized.
Practical tip: Do not combine AI consent with your general intake consent. Create a separate, standalone AI disclosure and consent form. This makes it easier to demonstrate meaningful consent in the event of a complaint and allows clients to consent to therapy without feeling pressured to also consent to AI.
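If you track consents electronically, the disclosure elements above map naturally onto a structured record. The sketch below is a minimal illustration in Python; the `AIConsentRecord` class and its field names are assumptions for this example, not taken from any particular EMR or consent form.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIConsentRecord:
    """One client's standalone consent for AI-assisted documentation.

    Field names are illustrative; adapt them to your own form and EMR.
    """
    client_id: str
    tool_name: str                     # the AI tool named in the disclosure
    tool_description: str              # plain-language summary given to the client
    data_accessed: list[str] = field(default_factory=list)  # e.g. ["audio", "transcript"]
    processing_country: str = ""       # where the vendor processes the data
    storage_country: str = ""          # where the vendor stores the data
    retention_policy: str = ""         # e.g. "deleted 30 days after processing"
    used_for_model_training: bool = False
    human_review_confirmed: bool = True  # therapist reviews all AI output
    opted_in: bool = False
    consent_date: date | None = None

    def is_complete(self) -> bool:
        """Consent only counts as meaningful if every disclosure
        element was actually communicated and the consent is dated."""
        return (
            self.opted_in
            and self.consent_date is not None
            and bool(self.tool_description)
            and bool(self.data_accessed)
            and bool(self.processing_country)
            and bool(self.retention_policy)
        )
```

A record with `opted_in` set to false simply documents the decline; for that client you fall back to manual notes, as the opt-out requirement above demands.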
Practical Steps for Using AI Tools Compliantly
If you have evaluated the landscape and decided to proceed with AI tools, here is a step-by-step framework:
1. Conduct a Privacy Impact Assessment
Before adopting any AI tool, complete a PIA that documents what personal information the tool will access, how it processes and stores that information, what risks are involved, and what safeguards are in place. This is legally required in British Columbia and Alberta, and strongly recommended everywhere else. Keep the PIA on file; your college may ask for it.
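A PIA is a document rather than software, but a structured skeleton can help ensure nothing gets skipped. The outline below is one possible shape, expressed as Python data for concreteness; the section names follow common PIA templates and are an assumption, not a prescribed format.

```python
# One possible PIA skeleton for an AI note-taking tool; the section
# names follow common PIA templates and are not a prescribed format.
pia = {
    "tool": "AI note-taking assistant (name and version)",
    "information_accessed": ["session audio", "transcripts", "client identifiers"],
    "purpose": "Generate draft session notes for therapist review",
    "data_flows": "audio -> vendor transcription -> draft note -> therapist review",
    "storage_and_residency": "confirm processing and storage locations with vendor",
    "risks": [
        "cross-border transfer if the vendor routes processing outside Canada",
        "vendor retention beyond clinical need",
        "inaccurate AI-generated content entering the record",
    ],
    "safeguards": [
        "written data processing agreement",
        "model training disabled for client data",
        "human review before any note is finalized",
    ],
    "approved_by": "",   # practice owner or privacy officer
    "review_date": "",   # revisit at least annually
}
```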
2. Vet the Vendor
Ask the AI vendor these questions in writing and keep their responses; a simple way to track the answers is sketched after the list:
- Where are your servers located? Are any processing steps performed outside Canada?
- Is client data used to train your AI models? Can this be disabled?
- What encryption standards do you use for data in transit and at rest?
- Will you sign a data processing agreement that complies with PIPEDA and applicable provincial health privacy legislation?
- What is your breach notification process and timeline?
- How long do you retain data after processing, and can I request deletion?
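One way to keep those answers actionable is to record them in a structured form and flag anything that should stop or pause adoption. The sketch below is a hypothetical illustration; the vendor, the fields, and the flagging rules are assumptions you should adapt to your own risk tolerance and province.

```python
# Hypothetical due-diligence record; fill it in from the vendor's
# written answers to the questions above.
vendor = {
    "name": "ExampleNotes AI",            # illustrative vendor name
    "servers_in_canada_only": False,
    "trains_on_client_data": True,
    "training_can_be_disabled": True,
    "encrypts_in_transit_and_at_rest": True,
    "will_sign_canadian_dpa": False,
    "has_breach_notification_process": True,
    "deletes_data_on_request": True,
}

def red_flags(v: dict) -> list[str]:
    """Return the issues that need resolving before you sign anything."""
    flags = []
    if not v["servers_in_canada_only"]:
        flags.append("Data leaves Canada: needs a cross-border transfer "
                     "agreement and explicit client consent.")
    if v["trains_on_client_data"] and not v["training_can_be_disabled"]:
        flags.append("Client data trains the model and cannot be disabled.")
    if not v["encrypts_in_transit_and_at_rest"]:
        flags.append("Missing baseline encryption.")
    if not v["will_sign_canadian_dpa"]:
        flags.append("No PIPEDA/provincial data processing agreement.")
    if not v["has_breach_notification_process"]:
        flags.append("No defined breach notification process.")
    if not v["deletes_data_on_request"]:
        flags.append("Cannot delete client data on request.")
    return flags

for issue in red_flags(vendor):
    print("-", issue)
```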
3. Choose Canadian-First Solutions
Where possible, choose AI tools that process and store data exclusively on Canadian servers. The Canadian AI ecosystem for healthcare is growing, and there are options that do not require cross-border data transfer. If you must use a US-based tool, ensure you have a robust cross-border transfer agreement and explicit client consent for the transfer. We help therapy practices evaluate and implement AI solutions that meet Canadian privacy requirements.
4. Update Your Consent Process
Add a standalone AI disclosure and consent form to your intake process. Review it annually with ongoing clients. Make it clear, specific, and free of legal jargon. Your clients should understand exactly what they are consenting to after reading it once.
5. Establish a Human-in-the-Loop Workflow
Never let AI-generated content go directly into a clinical record without human review. Establish a workflow where AI generates a draft, you review and edit it for accuracy and clinical appropriateness, and only then does it become part of the official record. Document this workflow in your practice policies.
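Where your tooling allows it, the review gate can be made impossible to skip. The sketch below illustrates the idea; the `SessionNote` class and its statuses are assumptions for this example, not part of any real product.

```python
from dataclasses import dataclass
from enum import Enum

class NoteStatus(Enum):
    AI_DRAFT = "ai_draft"    # generated by the tool, not yet reviewed
    REVIEWED = "reviewed"    # therapist has read and edited it
    FINAL = "final"          # part of the official clinical record

@dataclass
class SessionNote:
    client_id: str
    body: str
    status: NoteStatus = NoteStatus.AI_DRAFT

    def mark_reviewed(self, edited_body: str) -> None:
        """Therapist edits the draft and signs off on its accuracy."""
        self.body = edited_body
        self.status = NoteStatus.REVIEWED

    def finalize(self) -> None:
        """Refuse to enter the clinical record until a human has reviewed."""
        if self.status is not NoteStatus.REVIEWED:
            raise PermissionError("AI draft has not been reviewed by the therapist.")
        self.status = NoteStatus.FINAL

note = SessionNote(client_id="c-001", body="(AI-generated draft)")
note.mark_reviewed("Corrected, clinically accurate note text.")
note.finalize()  # succeeds only because the draft was reviewed first
```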
6. Monitor and Audit
Regularly review what data your AI tools are actually accessing and transmitting. Check for vendor policy changes—AI companies frequently update their terms of service, sometimes expanding what they do with your data. Set a calendar reminder to review your AI vendor agreements quarterly.
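A calendar reminder works fine; if you prefer something scriptable, a few lines can flag which vendor agreements are overdue for their quarterly review. The vendor names and the 90-day interval below are illustrative.

```python
from datetime import date, timedelta

# Illustrative log of when each vendor agreement was last reviewed.
last_reviewed = {
    "note-taking assistant": date(2026, 1, 15),
    "transcription service": date(2025, 9, 2),
}

REVIEW_INTERVAL = timedelta(days=90)  # quarterly, per your practice policy

today = date.today()
for vendor, reviewed_on in last_reviewed.items():
    if today - reviewed_on > REVIEW_INTERVAL:
        print(f"OVERDUE: re-review terms and data practices for {vendor} "
              f"(last reviewed {reviewed_on}).")
```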
What to Avoid
A few common practices that put Canadian therapists at risk:
- Pasting session notes into ChatGPT or similar general-purpose AI — consumer AI tools explicitly state that they may use inputs for model training. Putting client information into these tools is almost certainly a PIPEDA violation and a breach of your professional standards.
- Using AI transcription tools without checking data residency — many popular transcription services (Otter.ai, Rev, etc.) process data on US servers and may retain transcripts for model improvement.
- Assuming "HIPAA compliant" means "PIPEDA compliant" — many US-based AI vendors advertise HIPAA compliance, but HIPAA is a US law. PIPEDA and provincial health privacy laws have different requirements, particularly around consent and cross-border data transfer. HIPAA compliance is necessary but not sufficient for Canadian use.
- Not disclosing AI use to clients — even if the AI tool never stores client data, the act of processing client information through a third-party AI system must be disclosed. Omitting this from your consent process creates both a privacy law violation and a professional standards issue.
The Path Forward
AI tools have genuine potential to reduce the administrative burden that contributes to therapist burnout. Session note generation alone can save 30 to 60 minutes per day for a full-time practitioner. But the path to adoption must be deliberate and compliant. Rush it, and you risk a privacy commissioner investigation, a college complaint, or a loss of client trust that is far more damaging than the time AI was supposed to save.
The regulatory landscape is evolving quickly. The proposed federal Artificial Intelligence and Data Act (AIDA) died on the Order Paper with Bill C-27 when Parliament was prorogued in early 2025, but successor legislation is widely expected, and provincial regulatory colleges are likely to release more specific AI guidance throughout 2026. Stay current, document your compliance decisions, and when in doubt, err on the side of client privacy.
If you are looking for help evaluating AI tools for your therapy practice or need assistance building a compliant AI workflow, explore our AI solutions for therapy practices or reach out for a consultation.