
From a Brooklyn Brownstone to a $500 Million Valuation: How Maya Sharma Built 'Veritas Kids' to Shield Our Children from AI's Dark Side

Meet Maya Sharma, the 28-year-old CEO of Veritas Kids, whose personal journey from a New York City classroom inspired a groundbreaking platform protecting minors from AI-generated content. Her startup, now valued at half a billion dollars, is redefining digital safety for the next generation, proving that even the biggest tech challenges can be met with fierce determination and a clear moral compass.

Amèlia Whitè
USA · Apr 29, 2026
Technology

The afternoon sun, filtered through the tall, narrow windows of a renovated Brooklyn brownstone, casts long shadows across a whiteboard covered in complex neural network diagrams and user flow charts. Maya Sharma, 28, CEO and co-founder of Veritas Kids, paces with a restless energy that belies her calm demeanor. She’s wearing a simple black turtleneck and jeans, her dark hair pulled back in a no-nonsense ponytail. She stops abruptly, tapping a marker against a section labeled “Generative Adversarial Networks: Child-Specific Vulnerabilities.”

“This is where the battle is truly fought, Amèlia,” she says, turning to me, her eyes intense. “It’s not just about filtering explicit content anymore. It’s about discerning intent, about understanding the subtle, insidious ways AI can manipulate young minds, create false realities, or even mimic trusted figures. We’re building a digital immune system for kids, and the pathogens are getting smarter every day.”

This isn't just another tech startup; it's a mission born from a deeply personal experience, one that has propelled Veritas Kids to a $500 million valuation in just four years, backed by heavyweights like Sequoia Capital and Founders Fund. Maya Sharma isn't just building software; she's building a bulwark against the digital wild west, a place where children are increasingly vulnerable to the unchecked power of frontier AI models. Her story is a potent reminder that innovation, at its best, is driven by a profound need.

Maya’s journey began not in a Stanford lab or a Silicon Valley incubator, but in a bustling public elementary school in Queens, New York. After graduating magna cum laude from the University of Pennsylvania with a degree in Cognitive Science and a minor in Computer Science, she initially felt drawn to education. She spent two years teaching fifth grade, a period she now calls her “foundational training.”

“I saw firsthand how quickly kids absorbed digital information, how seamlessly they integrated technology into their lives,” Maya recounts, her voice softening. “But I also saw the darker side. A student, barely ten, was convinced a deepfake video of a popular YouTuber was real, leading to significant emotional distress. Another was struggling with body image after an AI-generated ‘ideal’ influencer started popping up in their feeds. It was like watching a slow-motion car crash, and I felt utterly helpless.”

That helplessness ignited a fire. Maya realized the problem wasn't just content moderation; it was about the source and intent of the content, especially when generated by increasingly sophisticated AI. She saw the digital landscape shifting dramatically, with tools like OpenAI's DALL·E and Google's Imagen making photorealistic fakes accessible to anyone. Traditional filters were simply not enough.

Her “aha!” moment came during a particularly frustrating parent-teacher conference. A mother was distraught because her daughter was being cyberbullied by what appeared to be an AI-generated persona of a classmate, designed to mimic her voice and appearance. “I remember thinking, ‘This isn’t just a bad actor; this is a new class of threat,’” Maya says, leaning forward. “We needed something that could detect AI-generated manipulation, not just block keywords.”

She left teaching in 2021, a decision that surprised her family and friends. “My parents, bless them, thought I was crazy leaving a stable job for a nebulous idea,” she laughs. “But I couldn’t unsee what I had seen. The problem was too urgent.”

Her early days were spent in a tiny, rented office space above a deli in Manhattan, fueled by lukewarm coffee and an unwavering conviction. She dove deep into AI safety research, devouring papers from DeepMind and Anthropic, and attending every virtual conference she could find. It was at one such online symposium that she virtually met her co-founder, Dr. Ben Carter, a seasoned AI ethics researcher then working at Microsoft Research. Ben, a quiet but brilliant engineer with a Ph.D. from Carnegie Mellon, shared Maya’s alarm about the unchecked proliferation of generative AI.

“Ben was working on adversarial examples for image recognition, trying to fool AI,” Maya explains. “I realized we needed to flip that. We needed to train AI to unmask the fakes, to identify the tell-tale signs of AI generation, especially when it was designed to be subtle and manipulative for a young audience.” Their collaboration was instant, a meeting of minds across different coasts. Ben brought the deep technical expertise, Maya the pedagogical insight and fierce drive.

Their first prototype, a browser extension, was clunky and often wrong. “It was a disaster, honestly,” Maya admits, shaking her head. “It flagged legitimate cartoon characters as AI-generated and missed obvious deepfakes. We almost ran out of money. We were pitching to angel investors who just didn’t grasp the scale of the problem. They saw content filters; we saw a paradigm shift.”

The pivot came after a particularly brutal rejection from a seed investor. Ben, ever the pragmatist, suggested they narrow their focus: instead of trying to detect all AI-generated content, they would specialize in manipulative AI content targeting specific age groups. They had realized that generative models leave subtle, almost imperceptible fingerprints in their output, and that those architectural signatures, not surface keywords, tell the real story.

“We started building a proprietary detection model, trained on massive datasets of both human-created and AI-generated content, specifically focusing on patterns of manipulation and persuasion,” Maya elaborates. “Think of it like a digital forensic tool, but operating in real-time on a child’s device. It learns to recognize the subtle 'tells' that indicate an AI is trying to influence, not just inform or entertain.”
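Veritas Kids has not published the details of its detector, but the idea Maya describes, a classifier trained to separate manipulative AI-generated content from benign material and to alert a parent in real time, can be sketched in miniature. The toy Python example below is purely illustrative: the training examples, the threshold, and the scikit-learn models are stand-ins I have invented for demonstration, not the company's proprietary system.

# Illustrative sketch only; Veritas Kids' actual detector is proprietary.
# Frames detection as binary classification over text: score how likely
# a message is to be manipulative AI-generated content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = manipulative, 0 = benign.
train_texts = [
    "You must buy this now or your friends will laugh at you",
    "Click here, this secret trick makes everyone popular instantly",
    "Here is the homework schedule for next week",
    "The library opens at nine on Saturdays",
]
train_labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression: a simple stand-in for
# the real-time, on-device model the company describes.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(train_texts, train_labels)

def flag_for_parent(message: str, threshold: float = 0.7) -> bool:
    """Return True if the message should be surfaced to a parent."""
    score = detector.predict_proba([message])[0][1]  # P(manipulative)
    return score >= threshold

print(flag_for_parent("Buy this now or everyone will laugh at you"))

In the real product, the inputs would presumably be far richer than raw text (images, audio, behavioral signals) and the model far larger, but the core framing, scoring the probability of manipulative intent and alerting above a threshold, is the same.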

This refined approach caught the eye of Altos Ventures. In late 2023, they closed a $5 million seed round. “That was our lifeline,” Maya says. “It allowed us to hire our first five engineers and move into a proper office, still in Brooklyn, but with actual windows.”

Building the company wasn't easy. The early days were a blur of coding, data labeling, and relentless product iteration. Maya fostered a culture of fierce dedication and ethical responsibility. “Every engineer, every data scientist, understands the gravity of what we’re doing,” she asserts. “We’re not just building a product; we’re protecting childhood.”

Their big breakthrough came with the release of Veritas Kids 1.0, a platform that integrated into popular browsers and streaming services, offering real-time detection and alerts for parents. It didn't just block; it explained why something might be problematic. For example, if an AI-generated character was subtly promoting unhealthy eating habits, Veritas Kids would flag it, explain the manipulation, and offer resources for parents. The initial user feedback was overwhelmingly positive. Parents, overwhelmed by the digital deluge, finally felt they had an ally.

In early 2025, Veritas Kids announced a $30 million Series A round led by Sequoia Capital, valuing the company at $300 million. “It was surreal,” Maya recalls, a rare smile breaking through her focused expression. “We went from struggling to make payroll to being able to scale our team and our research.” Founders Fund joined in a subsequent Series B, pushing their valuation to $500 million. Their annual recurring revenue (ARR) is projected to hit $100 million by the end of 2026, a testament to the urgent market need.

Today, Veritas Kids employs over 150 people, with offices in New York City and a research hub in Seattle, close to many of the major AI players. Maya still maintains a hands-on approach, often spending hours with her engineering teams, dissecting the latest multimodal models from Google and Meta. “You have to understand the beast to tame it,” she says, a glint in her eye. “We’re constantly reverse-engineering new generative techniques to stay ahead.”

“The landscape of AI is shifting so rapidly, it’s like trying to build a dam in a hurricane,” says Dr. Evelyn Reed, a leading AI ethicist at the Berkman Klein Center for Internet & Society at Harvard University. “What Maya Sharma and Veritas Kids are doing is critical. They are not just reacting; they are proactively developing defenses against threats that many in the industry are still only theorizing about. Their focus on the manipulative intent of AI, rather than just explicit content, is a game-changer for child safety.”

What drives Maya is not the valuations or the headlines, but the letters she receives from parents. “When a mom writes to tell me her child avoided a scam because Veritas Kids flagged an AI-generated voice message, that’s what keeps me going,” she says, her voice thick with emotion. “It’s about giving kids back a safe space to explore, to learn, to be kids, without having to navigate a minefield of sophisticated AI deception.”

Looking ahead, Maya sees Veritas Kids expanding beyond detection to education, developing AI-powered tools that teach digital literacy and critical thinking skills to children directly. “We want to empower kids to understand how AI works, how it can be used for good, and how to spot its misuse,” she explains. “It’s not enough to shield them; we have to equip them.” In other words, the long-term solution isn't just about blocking bad actors; it's about fostering a generation of digitally savvy citizens.

“The biggest challenge remains staying ahead of the curve,” notes Dr. Alex Chen, a Senior Researcher at OpenAI who has followed Veritas Kids’ work. “Generative models are evolving at an exponential rate. The defensive measures need to be equally agile. Veritas Kids’ approach of continuous learning and adaptation is essential.”

Maya Sharma’s journey from a New York classroom to the helm of a half-billion-dollar company is a powerful narrative of purpose meeting innovation. In a world increasingly shaped by AI, where the lines between real and synthetic blur, Veritas Kids stands as a vital guardian, protecting the most vulnerable among us. It's a reminder that the true measure of technological advancement isn't just what we can create, but how responsibly we wield that power, especially when it comes to the safety and well-being of our children. Her work is a beacon, showing that the fight for a safer digital future is not just possible, but imperative.

“We’re building the future of childhood, one secure digital interaction at a time,” Maya concludes, her gaze fixed on the Brooklyn skyline. “And we’re just getting started.” The sun dips lower, painting the city in hues of orange and purple, a fitting backdrop for a company trying to bring clarity and safety to the complex world of AI.
