On-Device Intelligence

Child safety without surveillance

Loxation detects grooming, harassment, and predatory patterns using on-device graph intelligence and Apple AI. End-to-end encryption is never broken. Messages never leave the device. Your child is still protected.

The false choice

Today's platforms force you to pick between your child's privacy and their safety. Loxation eliminates that tradeoff.

Server-side scanning

Centralized platforms read every message, image, and video on their servers. They claim it's for safety — but it means your child's private conversations are processed by corporate systems, vulnerable to breaches, subpoenas, and mission creep. Encryption is either absent or meaningfully weakened.

Encrypt-and-hope

Some apps offer strong encryption but zero safety tooling. Parents get no visibility. Groomers exploit the opacity. When something goes wrong, there's no trail, no detection, and no intervention — just an encrypted channel between a child and a predator.

Loxation: on-device intelligence

Move the safety analysis to the device itself. A local graph database tracks relationship patterns — who contacts your child, how often, through what social connections. Apple's on-device AI reasons about conversational tone and escalation. Sensitive content is flagged before it's shown. All processing happens on the Neural Engine. The encryption is never touched. Your child's messages stay between them and their friends — but the device itself becomes a guardian.

Four layers of on-device protection

Each layer operates entirely on-device. No cloud. No server. No third party ever sees the data.

Layer 1

Relationship Graph

Rukuzu — a Rust-based graph database — runs locally, mapping the social topology around your child. Device nodes, peer connections, group memberships, message frequency, mutual favorites. Trust scores propagate through the graph: friends-of-friends inherit partial trust; isolated strangers do not.
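The propagation rule — friends-of-friends inherit partial trust, isolated strangers do not — can be sketched in Rust, the language Rukuzu is written in. This is a minimal illustration, not Rukuzu's API: the `propagate_trust` function, the 0.5 decay factor, and the two-hop limit are all assumptions made for the example.

```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// Breadth-first trust propagation from the child's node (illustrative).
/// Direct contacts keep their assigned trust; friends-of-friends inherit
/// a decayed fraction of the trust of the friend who connects them.
fn propagate_trust(
    edges: &[(&str, &str)],      // undirected social edges
    direct: &HashMap<&str, f64>, // trust assigned to direct contacts
    root: &str,
    decay: f64,
) -> HashMap<String, f64> {
    // Build an adjacency list from the edge set.
    let mut adj: HashMap<&str, Vec<&str>> = HashMap::new();
    for &(a, b) in edges {
        adj.entry(a).or_default().push(b);
        adj.entry(b).or_default().push(a);
    }
    let mut scores = HashMap::new();
    let mut seen: HashSet<&str> = HashSet::new();
    seen.insert(root);
    let mut queue = VecDeque::new();
    queue.push_back((root, 1.0_f64, 0u32));
    while let Some((node, trust, depth)) = queue.pop_front() {
        if depth >= 2 {
            continue; // only propagate two hops out
        }
        for &next in adj.get(node).into_iter().flatten() {
            if !seen.insert(next) {
                continue;
            }
            // Direct contacts use their assigned trust;
            // everyone else inherits decayed trust.
            let t = direct.get(next).copied().unwrap_or(trust * decay);
            scores.insert(next.to_string(), t);
            queue.push_back((next, t, depth + 1));
        }
    }
    scores
}

fn main() {
    // The child's direct contacts: a trusted friend and an isolated stranger.
    let edges = [("child", "alice"), ("alice", "bob"), ("child", "stranger")];
    let direct = HashMap::from([("alice", 0.85), ("stranger", 0.08)]);
    let scores = propagate_trust(&edges, &direct, "child", 0.5);
    // bob, a friend-of-friend, inherits half of alice's trust: 0.425
    println!("bob: {:?}", scores.get("bob"));
}
```

Note the asymmetry the graph encodes: "bob", whom the child has never messaged, still starts well above "stranger", because trust flows through a mutual friend.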

Layer 2

Behavioral Rules Engine

A CEL (Common Expression Language) rules engine evaluates contact behavior against configurable policies. Frequency acceleration, initiation ratios, isolation patterns, time-of-day anomalies. Parents can set thresholds. The rules run against graph properties — never raw message content.
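A policy of the kind described might read as follows in CEL. This is illustrative only: the field names (`contact.mutual_friends`, `contact.initiation_ratio`) and the thresholds are assumptions for the example, not Loxation's actual schema.

```cel
// Flag an isolated contact whose message volume is accelerating and who
// almost always initiates. Evaluates graph properties, never content.
contact.mutual_friends == 0
  && double(contact.msgs_last_7d) > 3.0 * contact.weekly_avg_prior_month
  && contact.initiation_ratio > 0.9
```

Because CEL expressions are data, not code, parent-configured thresholds can be swapped in without shipping an app update.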

Layer 3

Apple Foundation Models

On iOS 26+, Apple's on-device large language model analyzes flagged threads for contextual safety signals — tone escalation, secrecy language, age-inappropriate pressure, coercion tactics. This runs on the Neural Engine with zero network calls. The model receives only behavioral metadata and message excerpts from locally flagged contacts.

Layer 4

Sensitive Content Analysis

Apple's SensitiveContentAnalysis framework scans images and videos on-device before display. In child protection mode (Communication Safety), explicit content is blurred with a descriptive warning, outbound sensitive images are blocked entirely, and parents are notified — all without any image leaving the device.

Trust propagation in action

The graph encodes what parents intuitively understand: relationships with social context are safer than isolated contacts.

Social graph topology — on-device analysis

[Diagram] Your child at the center of the graph, connected to three trusted friends (trust 0.85, 0.72, 0.90; mutual connections) and one flagged new contact (trust 0.08; isolated, low trust).

What the graph detects

Pattern recognition across relationship metadata — without reading message content.

Grooming

Isolation + frequency acceleration

A new contact with zero mutual friends begins messaging with rapidly increasing frequency. The initiation ratio is heavily skewed — they always reach out first. The graph flags the combination: ISOLATED_CONTACT + FREQUENCY_ACCELERATION + CONTACT_ALWAYS_INITIATES.
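The flag combination above can be sketched as a plain function over per-contact counters. The struct fields, thresholds (message volume doubling week-over-week, contact initiating 90%+ of conversations), and flag strings are illustrative assumptions, not Loxation's actual rule set.

```rust
/// Per-contact counters maintained by the relationship graph (illustrative).
struct ContactStats {
    mutual_friends: u32,
    msgs_last_week: u32,
    msgs_prior_week: u32,
    initiated_by_contact: u32,
    initiated_by_child: u32,
}

/// Returns the behavioral flags that fire for this contact.
fn grooming_flags(s: &ContactStats) -> Vec<&'static str> {
    let mut flags = Vec::new();
    if s.mutual_friends == 0 {
        flags.push("ISOLATED_CONTACT");
    }
    // Message volume at least doubled week-over-week.
    if s.msgs_last_week >= 2 * s.msgs_prior_week.max(1) {
        flags.push("FREQUENCY_ACCELERATION");
    }
    // Contact starts 90% or more of the conversations.
    let total = s.initiated_by_contact + s.initiated_by_child;
    if total > 0 && s.initiated_by_contact * 10 >= total * 9 {
        flags.push("CONTACT_ALWAYS_INITIATES");
    }
    flags
}

fn main() {
    let s = ContactStats {
        mutual_friends: 0,
        msgs_last_week: 40,
        msgs_prior_week: 12,
        initiated_by_contact: 19,
        initiated_by_child: 1,
    };
    // All three flags fire for this contact, triggering escalation.
    println!("{:?}", grooming_flags(&s));
}
```

No single counter is alarming on its own; it is the conjunction of all three flags that distinguishes a grooming pattern from, say, a new classmate.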

Predatory behavior

Low trust + high volume

A contact with a trust score below 0.3 has sent more messages in the past 7 days than in the entire previous month. No shared groups, no mutual favorites. The device triggers on-device LLM analysis of the thread's behavioral tone.

Harassment

Blocked contact re-engagement

A previously blocked contact reappears under a new peer identity. The graph's BLOCKED relationship edges persist across ephemeral BLE connections, catching re-engagement attempts that simple block lists miss.
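One way to make blocks survive ephemeral peer IDs is to key them on a long-term identity fingerprint rather than the current connection. The source describes this as persistent BLOCKED edges in the graph; here it is reduced to two maps as a minimal sketch, with hypothetical names (`BlockList`, `observe_peer`).

```rust
use std::collections::{HashMap, HashSet};

/// Blocks keyed by a contact's long-term identity fingerprint, so a fresh
/// ephemeral BLE peer ID maps back to the same blocked identity (illustrative).
struct BlockList {
    blocked_identities: HashSet<String>,       // long-term identity fingerprints
    peer_to_identity: HashMap<String, String>, // ephemeral peer ID -> identity
}

impl BlockList {
    /// Record which identity a peer ID authenticated as.
    fn observe_peer(&mut self, peer_id: &str, identity: &str) {
        self.peer_to_identity
            .insert(peer_id.to_string(), identity.to_string());
    }

    /// Block the identity behind a peer ID, not the peer ID itself.
    fn block_peer(&mut self, peer_id: &str) {
        if let Some(id) = self.peer_to_identity.get(peer_id) {
            self.blocked_identities.insert(id.clone());
        }
    }

    /// A peer is blocked if its identity is, regardless of connection.
    fn is_blocked(&self, peer_id: &str) -> bool {
        self.peer_to_identity
            .get(peer_id)
            .map_or(false, |id| self.blocked_identities.contains(id))
    }
}

fn main() {
    let mut bl = BlockList {
        blocked_identities: HashSet::new(),
        peer_to_identity: HashMap::new(),
    };
    bl.observe_peer("ble-peer-1", "identity-A");
    bl.block_peer("ble-peer-1");
    // The same identity reappears under a new ephemeral BLE peer ID...
    bl.observe_peer("ble-peer-2", "identity-A");
    // ...and is still blocked, which a peer-ID block list would miss.
    assert!(bl.is_blocked("ble-peer-2"));
}
```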

Threats & doxxing

Content + context escalation

When the behavioral rules engine flags a contact, Apple's on-device Foundation Model analyzes the thread for coercion language, threats, requests for personal information, or pressure to move to another platform. Analysis runs entirely on the Neural Engine.

How detection flows

From behavioral signal to parental notification — entirely on-device.

Graph update

Every message updates the local relationship graph — frequency, recency, initiation direction

Rules evaluation

CEL rules engine checks behavioral patterns against parent-configurable thresholds

AI analysis

Flagged threads analyzed by on-device Foundation Model for tone, coercion, and escalation

Parent alert

PermissionKit notification to parent device. Low-trust new contacts require approval to message

Cryptographic guarantees preserved

On-device analysis means none of these are compromised. Ever.

Noise Protocol (XX Pattern)

All 1:1 messages use mutually authenticated Curve25519 sessions with forward secrecy. The safety layer operates above the transport — encrypted payloads are analyzed only after local decryption.

MLS Group Encryption (RFC 9420)

Group messages use the Messaging Layer Security standard. Epoch-based key rotation, tree-based key agreement, post-compromise security. The graph layer sees group membership topology, not message content.

SQLCipher Encrypted Storage

All local message data is encrypted at rest with AES-256-CBC. The graph database stores relationship metadata only — trust scores, message counts, timestamps — never message bodies.
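As an illustrative sketch of the metadata-only store (table and column names are hypothetical), a SQLCipher database is keyed before any other statement and holds relationship metadata, never message bodies:

```sql
-- Key must be supplied before any other operation on the database.
PRAGMA key = 'device-derived-passphrase';  -- illustrative; a real key would come from the keychain

-- Metadata-only schema: trust scores, counts, timestamps, never message bodies.
CREATE TABLE IF NOT EXISTS contact_stats (
    contact_id   TEXT PRIMARY KEY,
    trust_score  REAL NOT NULL,
    msg_count    INTEGER NOT NULL DEFAULT 0,
    last_seen    INTEGER            -- unix timestamp
);
```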

Emergency Wipe

Triple-tap emergency wipe clears all key material, graph data, and message history instantly. Designed for physical safety scenarios. Irrecoverable by design.

How Loxation compares

Most platforms sacrifice one dimension for another. Loxation is designed to deliver on all of them.

Capability                   | Centralized platforms  | E2E-only apps    | Loxation
End-to-end encryption        | ✗ None or partial      | ✓ Full           | ✓ Noise + MLS
Grooming detection           | ● Server-side scanning | ✗ None           | ✓ On-device graph
Sensitive content filtering  | ● Cloud-based          | ✗ None           | ✓ Apple SCA on-device
Parental controls            | ● Platform-dependent   | ✗ None           | ✓ PermissionKit + trust gates
No server access to messages | ✗ Server reads all     | ✓ Encrypted      | ✓ No servers at all*
Works without internet       | ✗ Requires cloud       | ✗ Requires cloud | ✓ BLE mesh network

* Loxation operates peer-to-peer over Bluetooth mesh. Optional Nostr relay fallback for offline peers encrypts messages end-to-end before relay.

For parents

You shouldn't have to read your child's messages to protect them.

  • Trust score dashboard — See which contacts have social context (mutual friends, shared groups) and which are isolated strangers
  • Approval gates — New low-trust contacts require your permission via Apple PermissionKit before messaging your child
  • Configurable thresholds — Set frequency, trust, and isolation thresholds that match your family's comfort level
  • Sensitive content blocking — Explicit images are blocked outbound and blurred inbound with age-appropriate warnings
  • No message reading — You see relationship patterns and flags, never message content. Your child's privacy is real.
  • Emergency wipe — If your child is in physical danger, a triple-tap destroys all data instantly
"The same intuition a parent uses when they notice their kid spending all their time with someone nobody else knows — encoded as graph queries and behavioral heuristics, running on the device that holds their trust."
— Design philosophy, Loxation safety architecture

A new paradigm

For twenty years, child safety online has meant one thing: give a corporation access to everything. Read every message. Scan every image. Build a profile. Hope the watchers are trustworthy.

The alternative was encrypted isolation — safe from corporations, but also safe from any form of protection.

Loxation proves there's a third path. Move the intelligence to the edge. Let the device that a child trusts with their conversations also be the device that watches for danger. Use the shape of relationships — not the content of messages — as the primary safety signal. Bring AI to the Neural Engine instead of the cloud. Preserve encryption absolutely while making protection tangible.

Privacy and child safety were never actually in opposition. We just didn't have the right architecture.

Protect without surveilling

Loxation is building the future of safe, private communication for families. On-device. Encrypted. Intelligent.

Learn more at loxation.com