
When the Digital Tide Turns Malicious: Dr. Hineata Te Rangi on Deepfakes, Democracy, and Meta's Role in Aotearoa's Elections

As elections loom, the threat of AI deepfakes casts a long shadow over democratic processes. I sat down with Dr. Hineata Te Rangi, a leading expert on indigenous data governance, to discuss how these digital fabrications could erode trust and what Aotearoa can do to protect its unique political landscape.


Arohā Ngāta
New Zealand·Apr 29, 2026
Technology

The air in Dr. Hineata Te Rangi's office, nestled within Te Herenga Waka, Victoria University of Wellington, hummed with a quiet intensity. Sunlight, filtered through the intricate kōwhaiwhai patterns on her window, illuminated stacks of research papers and a small pounamu toki. Hineata, with her warm smile and eyes that held generations of wisdom, is a beacon in the often-abstract world of AI ethics. She is a prominent voice advocating for Māori data sovereignty and a fierce guardian of truth in the digital age. Today, our kōrero, our conversation, was about something deeply unsettling: the rising tide of AI-generated deepfakes threatening the very fabric of our democracies, particularly as election cycles draw near.

“Nau mai, haere mai, Arohā,” she greeted me, gesturing to a comfortable armchair. “This topic, it’s not just academic for us in Aotearoa, is it? It strikes at the heart of our collective decision-making, our mana motuhake.”

I nodded, settling in. “Indeed, Hineata. We’re seeing reports globally, from the US to India, of sophisticated deepfakes being used to manipulate public opinion. With our own general election on the horizon, the concern here is palpable. What’s your immediate reaction to the increasing sophistication of tools like OpenAI’s Sora or Google’s Gemini, and their potential for misuse in political campaigns?”

She leaned forward, her voice calm but firm. “The technology itself, in its purest form, is a marvel. The ability to generate hyper-realistic audio, video, and imagery is astounding. However, the ethical implications, particularly when these tools are weaponized for political gain, are profoundly disturbing. We’re not just talking about doctored photos anymore. We’re talking about entire fabricated narratives, speeches, and events that never happened, presented with a level of realism that can deceive even the most discerning eye.”

She paused, gazing out at the Wellington harbour. “In Te Reo Māori, we have a word for this: whakapōrearea. It means to cause trouble, to disturb, to create confusion. That is precisely what these deepfakes do. They whakapōrearea the public discourse, making it incredibly difficult for citizens to distinguish fact from fiction.”

“And the speed at which these can be generated, that’s a game-changer, isn’t it?” I pressed. “It’s not just state actors anymore, but potentially anyone with access to these advanced models, even open-source ones like Meta’s Llama 3.”

“Absolutely,” Hineata affirmed. “The democratisation of powerful AI models means the barrier to entry for creating convincing deepfakes is lower than ever. A small, well-resourced group, or even a highly motivated individual, could launch a targeted disinformation campaign. Imagine a deepfake video of a political candidate making a controversial statement they never uttered, or a fabricated audio clip of a leader appearing to endorse a rival. These can spread like wildfire on platforms like TikTok and X, especially in the crucial days leading up to an election, leaving little time for fact-checking and rebuttal.”

I thought about the recent local body elections, where even minor misinformation caused significant ripples. “What about the impact on trust? If people can’t believe what they see or hear, what does that do to our faith in democratic institutions?”

“That, Arohā, is the most insidious threat,” she said, her expression serious. “Deepfakes don’t just spread lies; they erode the very foundation of trust. When citizens constantly question the authenticity of information, they become cynical, disengaged. This creates fertile ground for extremism and undermines the collective will of the people. It’s a direct assault on the principles of transparency and accountability that underpin a healthy democracy.”

She continued, her voice gaining a passionate edge. “Aotearoa's approach to AI is rooted in indigenous wisdom, emphasizing collective well-being and long-term sustainability. We cannot allow technology, no matter how advanced, to become a tool for division and deception. Technology must serve the people, not the other way around. Our tikanga, our customs and values, demand that we approach these challenges with a sense of responsibility and foresight.”

I asked about specific vulnerabilities in New Zealand. “Are there particular aspects of our political landscape or cultural context that make us more, or less, susceptible to this kind of manipulation?”

“Our relatively small population and tight-knit communities can be a double-edged sword,” Hineata explained. “On one hand, strong community ties and local media can sometimes act as a bulwark against widespread disinformation. On the other hand, if a deepfake targets a specific community or a local issue, it can spread rapidly within those trusted networks, making it harder to contain. We also have a vibrant and diverse media landscape, but it’s not immune to the pressures of speed and sensationalism.”

She mentioned a recent study by Te Pūnaha Matatini, a New Zealand Centre of Research Excellence, which indicated that while New Zealanders generally have high media literacy, a significant portion, around 35 percent, admitted they struggled to identify deepfake content without explicit warnings. “That 35 percent is a large enough segment to sway close elections,” she pointed out.

“What practical steps can be taken? Is it about regulation, education, or technological countermeasures?”

“It’s all three, and more,” Hineata stated firmly. “Firstly, education is paramount. We need comprehensive digital literacy programs, starting in schools, to equip our rangatahi, our young people, and indeed all citizens, with the critical thinking skills to identify manipulated content. This includes understanding how AI works, its capabilities, and its limitations.”

“Secondly, regulation. Governments, both here and internationally, need to work with tech giants like Meta, Google, and Microsoft to establish clear guidelines and accountability frameworks. This isn't about stifling innovation, but about ensuring responsible development and deployment. We need clear policies on content provenance, requiring platforms to label AI-generated content, especially in political contexts. The European Union’s AI Act, while not perfect, offers some valuable lessons in this regard. Reuters has reported extensively on these global regulatory efforts.”
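The content-provenance idea Hineata describes can be sketched in miniature. The snippet below is a hypothetical illustration, not any platform’s real API: a publisher attaches a cryptographic tag to the raw media bytes, and a platform later verifies that tag before deciding whether to display a provenance label. All names here (the key, the functions, the sample bytes) are invented for the example.

```python
import hashlib
import hmac

# Hypothetical signing key held by the publisher; real provenance systems
# use public-key certificates rather than a shared secret.
SECRET_KEY = b"publisher-signing-key"

def sign_media(content: bytes) -> str:
    """Publisher side: produce a provenance tag over the raw media bytes."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_media(content: bytes, tag: str) -> bool:
    """Platform side: a mismatch means the bytes were altered after signing,
    or were never signed by this publisher at all."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"video-frames-of-the-candidate-speech"
tag = sign_media(original)

print(verify_media(original, tag))            # True: untouched media
print(verify_media(original + b"edit", tag))  # False: tampered media fails
```

Real-world schemes such as the C2PA standard behind Adobe’s Content Authenticity Initiative embed signed manifests in the file and use certificate chains, but the verification logic follows this same shape: if the bytes change, the label should not survive.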

“And thirdly, technological solutions. There’s promising research into AI-based detection tools that can identify deepfakes. Companies like Adobe are integrating content authenticity initiatives into their software. We need to invest in these areas and ensure that our local experts are part of this global effort. The challenge is that the detection tools often lag behind the generation tools, creating a constant arms race.”

I brought up the role of social media companies. “Platforms like Meta, with their vast reach, hold immense power. Are they doing enough?”

“Frankly, no, not yet,” Hineata said, her voice tinged with frustration. “While some platforms have introduced policies, their enforcement is often inconsistent and reactive. They need to be proactive, investing significantly more in moderation, fact-checking, and transparency, particularly in non-English languages and culturally specific contexts. For Māori and other indigenous communities, the potential for deepfakes to misrepresent cultural practices or historical narratives is particularly egregious. We need to demand more from these global corporations. Their algorithms are designed for engagement, which often inadvertently amplifies sensational and false content. This needs to change.”

She spoke of a recent incident where a deepfake audio clip of a Māori elder, digitally altered to endorse a contentious land development, circulated briefly on a local community Facebook group before being identified and removed. “It caused distress and confusion, even in that short time. Imagine that scaled up nationally.”

As our conversation drew to a close, Hineata offered a glimmer of hope. “The fight against deepfakes in elections is not just about technology; it’s about strengthening our communities, fostering critical discourse, and upholding our shared values. It’s about remembering that the power of our democracy lies with an informed and engaged citizenry. We have the opportunity in Aotearoa to lead by example, to show how indigenous principles of kaitiakitanga, guardianship, can guide us in navigating these complex digital waters.”

Her words resonated deeply. The challenge of deepfakes is formidable, a digital shadow cast over the integrity of our elections. Yet, in the wisdom and conviction of leaders like Dr. Hineata Te Rangi, there is a clear path forward: one paved with education, responsible governance, and a steadfast commitment to truth. The future of our democracy, both in Aotearoa and globally, hinges on our collective ability to discern the real from the fabricated, and to demand accountability from those who wield the power of AI. For more on the broader ethical challenges of AI, readers can explore resources like MIT Technology Review. The journey ahead will be complex, but it is a journey we must undertake together, with open eyes and critical minds. The stakes, after all, could not be higher. There is also a relevant discussion on AI ethics in general in When Algorithms Assess Risk: Is AI in Insurance a Fair Deal for Aotearoa, or Just a New Kind of Redlining?
