When Growth Is More Important Than Safeguards: The Real Cost for Youth in the AI Era

Tech companies marketing “friend-mode” AI and adult-oriented features are moving ahead without transparent evidence that their youth protections work.

A Reckless Rollout

When Sam Altman announced OpenAI’s next-generation model, ChatGPT-5, the company claimed it now had “better tools” to keep users safe — even as it rolled out more humanlike, “friend-style” interactions and adult-only erotica features.

As someone who works daily with students and educators navigating AI in real classrooms, I’m not reassured.

My issue isn’t with pornography, or whatever it’s marketed as. My issue is that we haven’t yet protected our young people — nor have we educated users about how GenAI works, what data it collects, and how deeply it can shape relationships and emotions.

This launch isn’t just concerning; it’s premature and dangerously shortsighted.

What’s Happening

Let’s be clear about what’s unfolding:

  • Less than two months ago, OpenAI implemented restrictions on ChatGPT’s “relationship-building” features due to mental-health concerns.

  • Now, before any evidence that those safeguards worked, they’ve reversed course, re-enabling “friend-mode” and introducing erotic content for verified adults.

  • The justification? That age-gating and new safety restrictions will keep kids safe.

That claim doesn’t hold up under scrutiny.

Why the Justification Fails

1. Age-gating doesn’t work.

  • 22% of children aged 8–17 falsely claim to be 18+ online (Ofcom 2024).

  • 52–58% of young users admit to entering fake ages on major platforms.

  • Georgia Tech research shows even advanced language models struggle to tell teens from young adults based on chat behavior.

If children can lie about their age and AI can’t reliably detect it, then “age verification” is more PR than protection.

2. Young people already use AI for companionship.

The Center for Democracy & Technology’s Hand-in-Hand study found:

  • 42% of students use AI for emotional support or to escape reality.

  • 19% report romantic relationships with AI systems.

This isn’t hypothetical — it’s happening now.

3. To date, there is no evidence that ChatGPT-5 is safer.

OpenAI has released no public data on:

  • Whether August’s restrictions were deployed or effective.

  • What safety metrics are tracked.

  • What independent research validates these new features.

In short: we’re told to trust, not verify.

When Big Tech Prioritizes Growth Over Youth Safety

When AI companies build tools designed to “befriend” users and add erotic content for adults, they’re not prioritizing youth safety — they’re prioritizing engagement.

This isn’t oversight. It’s intentional design for growth.

Features known to cause harm are being relaunched under the guise of “freedom for adults,” even though we lack working protections for teens and children. And while these changes generate headlines, there’s still no parallel investment in educating families, teachers, or students about how GenAI operates — or how to navigate it safely.

What Needs to Happen

  1. Transparent metrics & independent audits.
    If OpenAI, or any other company, changes features affecting youth, it should publish data on error rates, age-prediction accuracy, and mental-health outcomes.

  2. Slower, evidence-based rollouts.
    Feature expansion should follow proven safety results — not precede them.

  3. AI literacy for schools and families.
    Every district needs resources to teach how generative AI systems collect data, shape behavior, and influence decision-making.

  4. Youth-centered design.
    Safety must be baked into system architecture — not bolted on after backlash.

  5. Policy that matches pace.
    Governments and education systems can’t afford another decade of reactive policymaking. Youth AI policy must move as fast as the tools themselves.

The Bottom Line

Our students deserve better than to be treated as afterthoughts in the pursuit of product-market fit.

If growth comes before governance, and engagement before education, we risk losing another generation to the same cycle of tech harm we’re still untangling from social media.

Youth safety isn’t an add-on feature; it’s a moral baseline.

Because when growth trumps safeguards, the cost is borne by our youngest users.

Interested in Helping to Protect Students from AI Risks?

As an EdSAFE AI Catalyst, I am designing the Student Safety Shield™ to help districts, schools, and education nonprofits identify, mitigate, and monitor AI-related risks.

The Student Safety Shield™ might include:

  • A rapid district-level AI risk assessment

  • Guidance on FERPA, COPPA, and AI privacy compliance

  • Classroom-ready AI literacy resources for teachers and families

  • A practical governance plan to ensure safety, transparency, and trust

If your district or organization is ready to move from concern to confidence, learn more about the Student Safety Shield here.
