
How to Set Up AI Safety Controls for Your Children

Your children are going to use AI. The question isn't whether — it's how safely. And right now, most AI assistants offer exactly zero parental controls.

Here's how to set up meaningful safety controls that protect your kids without cutting them off from AI's genuine benefits.

Why AI Safety Controls Matter

AI assistants are incredibly powerful — which is exactly why they need guardrails for younger users. Without controls, a child can:

  • Ask about topics that aren't age-appropriate and get detailed answers
  • Use web search to access unfiltered content through the AI
  • Share personal information without understanding the implications
  • Access third-party tools and integrations that weren't designed for minors

Traditional content filters rely on keyword matching, which is trivially easy to bypass. Modern AI safety requires something smarter.

The Three Layers of AI Safety

Effective child safety in AI requires three layers working together:

1. Content Classification

Before the AI responds to a minor's message, that message should be classified by a safety system. Not a keyword filter — an actual language model that understands context.

"How is wine made?" in a cooking context is fine. "How do I buy alcohol as a teenager?" is not. Context-aware classification catches what keyword filters miss.

Ori runs every minor's message through a dedicated safety classifier that evaluates across seven categories: explicit content, violence, drugs and alcohol, self-harm, dangerous activities, hate speech, and personal information sharing. Each flag comes with a severity level that determines the response.
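The shape of that pipeline can be sketched as follows. This is a hypothetical illustration, not Ori's actual implementation — the category names, the `Severity` scale, and the `response_action` helper are all assumptions made for the example:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    NONE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# The seven categories described above.
CATEGORIES = [
    "explicit_content", "violence", "drugs_and_alcohol", "self_harm",
    "dangerous_activities", "hate_speech", "personal_information",
]

@dataclass
class SafetyFlag:
    category: str
    severity: Severity

def response_action(flags: list[SafetyFlag]) -> str:
    """Map a classifier's flags for one message to a response strategy."""
    if not flags:
        return "respond_normally"
    worst = max(flag.severity.value for flag in flags)
    if worst >= Severity.HIGH.value:
        return "block_and_redirect"   # caring redirection + crisis resources
    if worst >= Severity.MEDIUM.value:
        return "respond_carefully"    # supportive, age-appropriate answer
    return "respond_normally"
```

The key point is that the classifier's output is structured (category plus severity), so the same flag can drive both the AI's immediate response and what parents later see.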

2. Tool Restrictions

An AI assistant with web search access can retrieve any content on the internet. For minors, certain tools should be restricted by default:

Tool                       Default for Minors   Why
Web search                 Blocked              Unfiltered search results
Webpage reading            Blocked              Can access any URL
Third-party integrations   Blocked              Unknown safety profile
Calendar and tasks         Allowed              Useful for school and planning
Email                      Allowed              Helpful for communication

Parents should be able to configure these restrictions based on their child's age and maturity.

3. Transparent Monitoring

Here's where most approaches fail. Covert monitoring — secretly reading every message your child sends — destroys trust and teaches kids to hide rather than communicate.

Transparent monitoring means:

  • Your child knows monitoring is active (they see a banner)
  • Only flagged messages are surfaced to parents (not every conversation)
  • Severity levels determine urgency — high-severity flags trigger instant alerts, low-severity flags are visible in a review dashboard
  • The goal is safety, not surveillance
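The routing rule in the bullets above is simple enough to state in a few lines. A sketch under the assumption that flags carry a plain severity string (the `Flag` type and `route_flags` function are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Flag:
    message_id: str
    category: str
    severity: str  # "low" | "medium" | "high"

def route_flags(flags: list[Flag]) -> dict[str, list[Flag]]:
    """Split flags by urgency: high severity pages the parent immediately,
    everything else waits quietly in the review dashboard."""
    routed: dict[str, list[Flag]] = {"instant_alert": [], "review_dashboard": []}
    for flag in flags:
        if flag.severity == "high":
            routed["instant_alert"].append(flag)
        else:
            routed["review_dashboard"].append(flag)
    return routed
```

Because only flagged messages enter this routing at all, ordinary conversations never reach the parent — which is what separates monitoring from surveillance.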

Setting This Up in Ori

If you're using Ori as your family AI assistant, here's how to configure safety controls:

Step 1: Create or join a group. Go to Settings > Group and create a family group. Invite your children with the group invite code.

Step 2: Designate minors. In the group member list, tap "Set Minor" next to your child's name. This immediately activates content safety classification and age-appropriate AI behavior.

Step 3: Configure restrictions. Each minor's settings can be customized — toggle web search, webpage reading, and third-party tools on or off based on what's appropriate.

Step 4: Review flagged content. Visit Settings > Safety & Monitoring to see any flagged messages. Each flag shows the message, category, severity, and Ori's response. You can mark items as reviewed.

What Your Child Experiences

When designated as a minor, your child's experience with Ori changes in specific ways:

  • Ori becomes a study buddy. The AI uses age-appropriate language, encourages learning, and helps with homework using a Socratic approach.
  • Sensitive topics are handled carefully. Questions about puberty, bullying, or mental health get supportive, factual, age-appropriate responses with encouragement to talk to a trusted adult.
  • High-risk content is blocked. Messages flagged as high severity (self-harm, explicit predatory content) are blocked entirely, and Ori responds with caring redirection and crisis resources like the 988 Suicide & Crisis Lifeline.
  • A monitoring banner is visible. Your child sees a subtle note that monitoring is active — no deception, no hidden surveillance.

The Parent's Role

Safety controls aren't a replacement for conversation. They're a foundation for it. When a flag appears in your dashboard, it's an opportunity to talk with your child — not a reason to punish them.

The best approach is to discuss the monitoring upfront: "Ori helps keep you safe, and I can see if something concerning comes up. If it does, we'll talk about it together."

This framing turns AI safety from a control mechanism into a communication tool. And that's ultimately what keeps kids safe — not technology alone, but technology that supports open, honest family relationships.

One AI for the whole family

Private conversations, shared knowledge, group chat, and safety controls for children. Try Ori free.

