
When Your Call Centre Agent is a Bot: Canada's New Transparency Laws Reshape Trust and Training, Says Telus's CEO

The global push for AI transparency is hitting Canadian businesses head-on, forcing companies to disclose when customers interact with AI. This shift is redefining customer service, employee training, and the very nature of trust in a digital world, with some Canadian firms leading the charge and others struggling to adapt.


Chloé Tremblàŷ
Canada·Apr 29, 2026
Technology

The line crackled a bit, a familiar sound even in 2026, but the voice on the other end was too perfect. "Bonjour, Chloé. How may I assist you today with your internet service inquiry?" It was polite, efficient, and utterly devoid of the subtle human imperfections I’d grown accustomed to from my local telecom provider, Bell Canada. I knew instantly, without any explicit disclosure, that I was speaking to an AI. But what if I hadn't? What if I was an elderly relative, less tech-savvy, who just wanted to resolve a billing issue and believed they were talking to a human? This is the core of the challenge facing businesses today, particularly here in Canada, as transparency laws around AI interactions spread like wildfire across the globe.

Just last week, my friend Sarah, who manages a team at a major Canadian bank, told me about a new directive. Every customer interaction, whether by chat, phone, or email, that involves an AI system must now carry a clear, upfront disclosure. "It's like putting a 'may contain nuts' label on everything," she quipped, "but instead of allergens, it's algorithms." The intent is clear: empower the consumer. But the ripple effects on businesses and their employees are profound.

A recent report from Mila, the Montreal Institute for Learning Algorithms and one of the world's leading AI research institutes, found that over 60% of Canadian consumers surveyed felt it was "extremely important" to know whether they were interacting with an AI. This isn't just a polite preference; it's a demand for clarity. And governments are listening. From the EU's AI Act to emerging frameworks in North America, the right to know is becoming a legal obligation.

Consider the scene at ConnectCare Solutions, a mid-sized BPO firm based in Mississauga, Ontario, that handles customer service for several major Canadian retailers. For years, ConnectCare prided itself on its human touch, but the allure of efficiency led them to integrate OpenAI's custom GPT models into their chat support, initially without explicit disclosure. Their internal metrics showed a 30% reduction in average handling time and a 15% increase in first-contact resolution. Sounds great, right? But then the new transparency laws hit.

"We saw an immediate, albeit temporary, dip in customer satisfaction scores by about 8% after we implemented the mandatory AI disclosure," explains David Chen, ConnectCare's Head of Operations. "Some customers felt misled, even if the service was good. It was a trust issue." Chen's team quickly adapted, retraining their human agents to handle the initial AI disclosure gracefully and to step in seamlessly when customers requested a human. This required a significant investment in upskilling, transforming agents from pure problem-solvers to empathetic AI navigators. "Our ROI calculations had to be completely re-evaluated," Chen admits, "but ultimately, rebuilding trust is priceless."

This isn't just about customer service. Think about recruitment. Many large corporations, including some of Canada's biggest banks and mining companies, have been quietly using AI tools from firms like HireVue or Pymetrics to screen resumes and even conduct initial video interviews. The new transparency push means candidates must be informed when an AI is assessing their application. This could lead to a different kind of applicant experience, one where candidates might feel less comfortable or even discriminated against, regardless of the AI's actual performance. It's a delicate balance.

Who are the winners and losers in this new landscape? Companies that embraced transparency early, or those with robust ethical AI frameworks, are clearly ahead. Take Telus, for example. "We've been transparent about our use of AI to enhance customer experience for years," says Darren Entwistle, CEO of Telus, in a recent interview. "Our customers appreciate knowing when an AI is helping them, and they value the efficiency it brings. It's about empowering our human agents, not replacing them." Telus has invested heavily in training its workforce to leverage AI tools like Google's Gemini for faster information retrieval, allowing human agents to focus on complex, empathetic interactions. This proactive approach has positioned them as a leader in responsible AI adoption, with customer trust scores remaining consistently high, even as AI integration deepens.

On the other hand, companies that viewed AI as a cost-cutting measure, implementing it covertly to reduce headcount, are facing a reckoning. Their initial efficiency gains are being eroded by public backlash, regulatory fines, and the significant cost of retrofitting their systems for transparency. We're seeing some smaller firms, particularly in the e-commerce sector, struggling to implement these disclosures effectively, leading to customer churn and reputational damage. It's a tough lesson: AI isn't just a technology; it's a social contract.

From the worker perspective, this shift is a mixed bag. For many, the initial fear of replacement by AI is evolving into a new reality of collaboration. "I used to spend half my day looking up policy details," says Marie-Claire Dubois, a customer service representative for a major Canadian airline, who now uses a Microsoft Copilot-powered assistant. "Now, the AI handles the routine stuff, and I can focus on helping people with complicated travel changes or emotional situations. It’s actually made my job more engaging." Her sentiment is echoed by a recent survey from Statistics Canada, which found that 72% of workers interacting with AI in their roles reported feeling more productive, while 45% felt their job satisfaction had increased due to offloading repetitive tasks.

However, there's still a significant portion, about 28%, who express anxiety about job security or the feeling of being constantly monitored by AI systems. This is where responsible AI governance and strong union representation, a hallmark of the Canadian labour landscape, become crucial. "Workers need to be at the table when these technologies are implemented," states Jean-Pierre Gagnon, President of the Canadian Labour Congress. "Transparency isn't just for customers, it's for employees too. They have a right to know how AI is impacting their work, their performance metrics, and their future." This perspective is gaining traction, with some unions negotiating clauses in collective agreements regarding AI disclosure and retraining programs.

Montreal's AI research community is already studying what makes these disclosures work. Researchers at McGill University, in collaboration with Element AI, have been exploring the psychological impact of AI disclosure. Dr. Anya Sharma, a leading expert in human-computer interaction, notes, "Initial resistance to AI disclosure often stems from a perceived loss of human connection. However, when the AI is clearly presented as an assistant to a human, or as a tool for efficiency, acceptance rates rise dramatically." Her team's data suggests that the way disclosure happens is as important as the disclosure itself. A simple, clear statement like "You are speaking with an AI assistant. I can transfer you to a human agent at any time" performs far better than jargon-filled disclaimers.
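For readers curious how this plays out in practice, here is a minimal sketch of how a chat flow might lead with that kind of upfront disclosure and hand off to a human on request. This is an illustrative assumption, not any vendor's actual implementation: the class, phrase list, and placeholder model call are all hypothetical.

```python
from dataclasses import dataclass, field

# The plain-language disclosure recommended by the research quoted above.
DISCLOSURE = ("You are speaking with an AI assistant. "
              "I can transfer you to a human agent at any time.")

# Hypothetical trigger words for escalating to a person.
HANDOFF_PHRASES = ("human", "agent", "person", "representative")

@dataclass
class ChatSession:
    transcript: list = field(default_factory=list)
    disclosed: bool = False
    escalated: bool = False

    def reply(self, user_message: str) -> str:
        self.transcript.append(("user", user_message))
        # Mandatory disclosure leads the very first bot response.
        prefix = "" if self.disclosed else DISCLOSURE + " "
        self.disclosed = True
        # Hand off as soon as the customer asks for a person.
        if any(p in user_message.lower() for p in HANDOFF_PHRASES):
            self.escalated = True
            return prefix + "Transferring you to a human agent now."
        bot_text = prefix + self._answer(user_message)
        self.transcript.append(("assistant", bot_text))
        return bot_text

    def _answer(self, user_message: str) -> str:
        # Placeholder for the actual model call (e.g. a hosted LLM).
        return "Happy to help with that."
```

The design choice worth noting is that the disclosure is enforced by the session object itself, so no individual prompt or agent script can forget it.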

What's coming next? I predict we'll see a global harmonization of AI transparency laws, driven by a growing consensus that consumers have a fundamental right to know. This isn't a passing fad; it's a foundational shift in how we interact with technology and with each other. Businesses that embrace this proactively, integrating AI ethically and transparently, will build stronger customer loyalty and a more engaged workforce. Those that resist, treating AI as a hidden cost-saving measure, will find themselves navigating a minefield of regulatory challenges and public mistrust. The research is fascinating, and the implications are vast.

We're moving beyond the novelty of AI into an era of accountability. The conversation isn't just about what AI can do, but what it should do, and how it should interact with us. And here in Canada, with our strong emphasis on ethical technology and public good, we have a unique opportunity to shape this future responsibly. For more insights into the evolving regulatory landscape, you can check out reports from Reuters Technology. The future of human-AI collaboration depends on these choices, and the right to know is just the beginning. For a broader perspective on AI's impact on society, Wired's AI section offers compelling analyses. And if you're interested in how Canadian courts are grappling with AI's legal challenges, you might find the article "Canada's Courts Weigh AI's Promise Versus Its Peril" insightful.
