Loxation detects grooming, harassment, and predatory patterns using on-device graph intelligence and Apple AI. End-to-end encryption is never broken. Messages never leave the device. Your child is still protected.
Today's platforms force you to pick between your child's privacy and their safety. Loxation eliminates that tradeoff.
Centralized platforms read every message, image, and video on their servers. They claim it's for safety — but it means your child's private conversations are processed by corporate systems, vulnerable to breaches, subpoenas, and mission creep. Encryption is either absent or meaningfully weakened.
Some apps offer strong encryption but zero safety tooling. Parents get no visibility. Groomers exploit the opacity. When something goes wrong, there's no trail, no detection, and no intervention — just an encrypted channel between a child and a predator.
Move the safety analysis to the device itself. A local graph database tracks relationship patterns — who contacts your child, how often, through what social connections. Apple's on-device AI reasons about conversational tone and escalation. Sensitive content is flagged before it's shown. All processing happens on the Neural Engine. The encryption is never touched. Your child's messages stay between them and their friends — but the device itself becomes a guardian.
Each layer operates entirely on-device. No cloud. No server. No third party ever sees the data.
Rukuzu — a Rust-based graph database — runs locally, mapping the social topology around your child. Device nodes, peer connections, group memberships, message frequency, mutual favorites. Trust scores propagate through the graph: friends-of-friends inherit partial trust; isolated strangers do not.
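To make the propagation idea concrete, here is a minimal pure-Swift sketch, not Rukuzu's actual API: direct connections carry a damped share of trust, friends-of-friends inherit a smaller share, and contacts with no path into the child's graph stay at zero. All names and numbers below are illustrative.

```swift
import Foundation

/// Illustrative only: a tiny in-memory social graph with damped trust propagation.
/// Rukuzu's real schema and API are not shown here.
struct SocialGraph {
    /// Adjacency list: peer ID -> IDs of directly connected peers.
    var edges: [String: Set<String>] = [:]

    mutating func connect(_ a: String, _ b: String) {
        edges[a, default: []].insert(b)
        edges[b, default: []].insert(a)
    }

    /// Breadth-first trust propagation from the child's own node.
    /// Each hop multiplies trust by `damping`, so friends-of-friends
    /// inherit partial trust and isolated strangers stay at zero.
    func trustScores(from child: String, damping: Double = 0.5, maxHops: Int = 3) -> [String: Double] {
        var scores: [String: Double] = [child: 1.0]
        var frontier: Set<String> = [child]
        var trust = 1.0
        for _ in 0..<maxHops {
            trust *= damping
            var next: Set<String> = []
            for node in frontier {
                for neighbor in edges[node] ?? [] where scores[neighbor] == nil {
                    scores[neighbor] = trust
                    next.insert(neighbor)
                }
            }
            frontier = next
        }
        return scores
    }
}

var graph = SocialGraph()
graph.connect("child", "best-friend")       // direct peer: trust 0.5
graph.connect("best-friend", "classmate")   // friend-of-friend: trust 0.25
// "stranger" has no edge into the graph, so it never receives a score.
let scores = graph.trustScores(from: "child")
print(scores["classmate"] ?? 0, scores["stranger"] ?? 0)  // 0.25 0.0
```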
A CEL (Common Expression Language) rules engine evaluates contact behavior against configurable policies. Frequency acceleration, initiation ratios, isolation patterns, time-of-day anomalies. Parents can set thresholds. The rules run against graph properties — never raw message content.
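As a hedged illustration of what such policies could look like, the snippet below embeds CEL-style expressions in Swift. The property names (`contact.msgs_7d`, `contact.initiation_ratio`, and so on) and the thresholds are assumptions, not Loxation's actual schema; the point is that the expressions reference graph-derived numbers, never message text.

```swift
/// Illustrative CEL expressions over graph-derived properties (names are hypothetical).
/// Thresholds come from parent-configurable settings and are interpolated below.
struct SafetyPolicy {
    var accelerationFactor = 3.0       // weekly volume vs. prior baseline
    var maxInitiationRatio = 0.85      // contact starts nearly every exchange
    var quietHoursStart = 22           // local hour, 10 PM

    /// Contact sends several times more messages this week than their baseline.
    var frequencyAcceleration: String {
        "contact.msgs_7d > \(accelerationFactor) * contact.avg_weekly_msgs"
    }

    /// Contact initiates almost every exchange with the child.
    var skewedInitiation: String {
        "contact.initiation_ratio > \(maxInitiationRatio)"
    }

    /// Late-night messages from a low-trust contact.
    var timeOfDayAnomaly: String {
        "contact.trust_score < 0.3 && message.local_hour >= \(quietHoursStart)"
    }
}

let policy = SafetyPolicy()
print(policy.frequencyAcceleration)  // contact.msgs_7d > 3.0 * contact.avg_weekly_msgs
```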
On iOS 26+, Apple's on-device large language model analyzes flagged threads for contextual safety signals: tone escalation, secrecy language, age-inappropriate pressure, coercion tactics. This runs on the Neural Engine with zero network calls. The LLM receives behavioral metadata and message excerpts from locally flagged contacts only.
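A minimal sketch of how that analysis might be invoked, assuming the iOS 26 Foundation Models framework (`SystemLanguageModel`, `LanguageModelSession`); the instructions, prompt, and metadata summary shown here are illustrative, not Loxation's actual prompts.

```swift
import FoundationModels

/// Illustrative only: run a flagged thread's excerpts through the on-device model.
/// No network call is made; the model executes on the Neural Engine.
func analyzeFlaggedThread(metadataSummary: String, excerpts: [String]) async throws -> String {
    // The on-device model may be unavailable (device not eligible, assets not downloaded).
    guard case .available = SystemLanguageModel.default.availability else {
        return "on-device model unavailable; behavioral flags only"
    }

    let session = LanguageModelSession(instructions: """
        You assess child-safety risk in a conversation. Look for tone escalation, \
        secrecy language, age-inappropriate pressure, and coercion. Answer briefly.
        """)

    let prompt = """
        Behavioral metadata: \(metadataSummary)
        Flagged excerpts:
        \(excerpts.joined(separator: "\n"))
        """
    let response = try await session.respond(to: prompt)
    return response.content
}
```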
Apple's SensitiveContentAnalysis framework scans images and videos on-device before display. In child protection mode (Communication Safety), explicit content is blurred with a descriptive warning, outbound sensitive images are blocked entirely, and parents are notified — all without any image leaving the device.
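A hedged sketch of the scan step using the SensitiveContentAnalysis API; the blur-and-notify flow around it is only outlined in comments, and the function assumes the app carries the Sensitive Content Analysis entitlement.

```swift
import SensitiveContentAnalysis

/// Illustrative only: check an incoming image file before it is rendered.
/// Requires the com.apple.developer.sensitivecontentanalysis.client entitlement.
func shouldBlurBeforeDisplay(imageAt url: URL) async -> Bool {
    let analyzer = SCSensitivityAnalyzer()

    // If neither Communication Safety nor Sensitive Content Warning is enabled,
    // the system policy is .disabled and no analysis is performed.
    guard analyzer.analysisPolicy != .disabled else { return false }

    do {
        let analysis = try await analyzer.analyzeImage(at: url)
        // If sensitive: blur, show a descriptive warning, and (for child accounts)
        // notify the parent device. The image itself never leaves the device.
        return analysis.isSensitive
    } catch {
        // Fail closed in child protection mode: treat analysis errors as sensitive.
        return true
    }
}
```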
The graph encodes what parents intuitively understand: relationships with social context are safer than isolated contacts.
Pattern recognition across relationship metadata — without reading message content.
A new contact with zero mutual friends begins messaging with rapidly increasing frequency. The initiation ratio is heavily skewed — they always reach out first. The graph flags the combination: ISOLATED_CONTACT + FREQUENCY_ACCELERATION + CONTACT_ALWAYS_INITIATES.
A contact with a trust score below 0.3 has sent more messages in the past 7 days than in the entire previous month. No shared groups, no mutual favorites. The device triggers on-device LLM analysis of the thread's behavioral tone.
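Expressed against the same kind of graph-derived metadata, the two scenarios above might reduce to checks like these; the field names and thresholds are illustrative, not Loxation's actual rules.

```swift
/// Illustrative metadata snapshot for one contact; no message content is included.
struct ContactStats {
    var trustScore: Double
    var mutualFriends: Int
    var sharedGroups: Int
    var mutualFavorites: Int
    var msgsLast7Days: Int
    var msgsPrior30Days: Int
    var initiationRatio: Double   // fraction of exchanges the contact started
}

/// Scenario 1: isolated contact + frequency acceleration + contact always initiates.
func isGroomingPattern(_ c: ContactStats) -> Bool {
    c.mutualFriends == 0 &&
    c.msgsLast7Days > 3 * max(c.msgsPrior30Days / 4, 1) &&   // sharp weekly acceleration
    c.initiationRatio > 0.9
}

/// Scenario 2: low-trust contact suddenly active, so escalate to on-device LLM analysis.
func shouldEscalateToLLM(_ c: ContactStats) -> Bool {
    c.trustScore < 0.3 &&
    c.msgsLast7Days > c.msgsPrior30Days &&
    c.sharedGroups == 0 &&
    c.mutualFavorites == 0
}
```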
A previously blocked contact reappears under a new peer identity. The graph's BLOCKED relationship edges persist across ephemeral BLE connections, catching re-engagement attempts that simple block lists miss.
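One way to picture that persistence, sketched in Swift under the assumption that blocks are keyed to a fingerprint of the contact's long-term public key rather than the ephemeral BLE peer ID (Rukuzu's real schema is not shown):

```swift
import Foundation
import CryptoKit

/// Illustrative only: BLOCKED edges are keyed to a fingerprint of the contact's
/// long-term public key, not to the ephemeral BLE peer ID assigned per connection.
struct BlockList {
    private var blockedFingerprints: Set<String> = []

    static func fingerprint(of publicKey: Data) -> String {
        SHA256.hash(data: publicKey).map { String(format: "%02x", $0) }.joined()
    }

    mutating func block(publicKey: Data) {
        blockedFingerprints.insert(Self.fingerprint(of: publicKey))
    }

    /// A re-engagement attempt under a fresh peer ID still carries the same
    /// long-term key, so the persisted BLOCKED edge matches it anyway.
    func isBlocked(publicKey: Data, ephemeralPeerID: UUID) -> Bool {
        blockedFingerprints.contains(Self.fingerprint(of: publicKey))
    }
}
```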
When the behavioral rules engine flags a contact, Apple's on-device Foundation Model analyzes the thread for coercion language, threats, requests for personal information, or pressure to move to another platform. Analysis runs entirely on the Neural Engine.
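If the Foundation Models guided-generation API is used, the verdict could come back as a typed structure instead of free text, which makes it easy to gate notifications on; the type below and its fields are illustrative assumptions.

```swift
import FoundationModels

/// Illustrative structured output for the safety check; field names are hypothetical.
@Generable
struct ThreadSafetyAssessment {
    @Guide(description: "True if the thread shows coercion, threats, or pressure tactics.")
    var coercionDetected: Bool

    @Guide(description: "True if the contact requests personal information or photos.")
    var personalInfoRequested: Bool

    @Guide(description: "True if the contact pressures the child to move to another platform.")
    var platformMovePressure: Bool

    @Guide(description: "Overall risk from 0 (benign) to 3 (urgent).")
    var riskLevel: Int
}

func assess(thread excerpts: [String]) async throws -> ThreadSafetyAssessment {
    let session = LanguageModelSession(instructions:
        "Assess these excerpts for child-safety risk. Be conservative and literal.")
    let response = try await session.respond(
        to: excerpts.joined(separator: "\n"),
        generating: ThreadSafetyAssessment.self)
    return response.content
}
```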
From behavioral signal to parental notification, entirely on-device; a simplified sketch follows the steps below.
Every message updates the local relationship graph — frequency, recency, initiation direction
CEL rules engine checks behavioral patterns against parent-configurable thresholds
Flagged threads analyzed by on-device Foundation Model for tone, coercion, and escalation
PermissionKit notification to parent device. Low-trust new contacts require approval to message
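Here are those four steps wired together, reusing the illustrative types defined earlier; the closure parameters stand in for the graph layer, the local excerpt store, and the PermissionKit notification, all of which are hypothetical here.

```swift
/// Illustrative only: the on-device pipeline from a received message to a parent alert.
/// `updateGraph`, `statsFor`, `excerptsFor`, and `notifyParent` are hypothetical hooks.
func handleIncomingMessage(
    from contactID: String,
    updateGraph: (String) -> Void,
    statsFor: (String) -> ContactStats,
    excerptsFor: (String) -> [String],
    notifyParent: (String, ThreadSafetyAssessment) async -> Void
) async {
    // 1. Every message updates the relationship graph: frequency, recency, initiation.
    updateGraph(contactID)

    // 2. Behavioral rules run against graph properties only, never message bodies.
    let stats = statsFor(contactID)
    guard isGroomingPattern(stats) || shouldEscalateToLLM(stats) else { return }

    // 3. The flagged thread goes to the on-device Foundation Model (see assess above).
    guard let verdict = try? await assess(thread: excerptsFor(contactID)) else { return }

    // 4. High-risk verdicts trigger a PermissionKit notification to the parent device.
    if verdict.riskLevel >= 2 {
        await notifyParent(contactID, verdict)
    }
}
```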
On-device analysis means none of the guarantees below are compromised. Ever.
All 1:1 messages use mutually authenticated Curve25519 sessions with forward secrecy. The safety layer operates above the transport — encrypted payloads are analyzed only after local decryption.
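For readers unfamiliar with the primitive, here is a minimal CryptoKit illustration of Curve25519 key agreement and session-key derivation; it is not Loxation's actual handshake, which additionally provides mutual authentication and forward secrecy through fresh per-session keys.

```swift
import CryptoKit
import Foundation

// Illustrative only: bare Curve25519 ECDH plus HKDF key derivation.
// A real mutually authenticated, forward-secret handshake layers identity keys
// and fresh ephemeral keys on top of this primitive.

// Each side generates an ephemeral key pair for the session.
let alicePrivate = Curve25519.KeyAgreement.PrivateKey()
let bobPrivate = Curve25519.KeyAgreement.PrivateKey()

// They exchange public keys and compute the same shared secret.
let aliceShared = try alicePrivate.sharedSecretFromKeyAgreement(with: bobPrivate.publicKey)

// A symmetric session key is derived from the shared secret via HKDF.
let sessionKey = aliceShared.hkdfDerivedSymmetricKey(
    using: SHA256.self,
    salt: Data("loxation-session".utf8),   // illustrative salt
    sharedInfo: Data(),
    outputByteCount: 32
)

// The derived key can then seal message payloads, for example with AES-GCM.
let sealed = try AES.GCM.seal(Data("hi".utf8), using: sessionKey)
print(sealed.combined!.count)
```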
Group messages use the Messaging Layer Security standard. Epoch-based key rotation, tree-based key agreement, post-compromise security. The graph layer sees group membership topology, not message content.
All local message data is encrypted at rest with AES-256-CBC. The graph database stores relationship metadata only — trust scores, message counts, timestamps — never message bodies.
Triple-tap emergency wipe clears all key material, graph data, and message history instantly. Designed for physical safety scenarios. Irrecoverable by design.
Most platforms sacrifice one dimension for another. Loxation is designed to deliver on all of them.
| Capability | Centralized platforms | E2E-only apps | Loxation |
|---|---|---|---|
| End-to-end encryption | ✗ None or partial | ✓ Full | ✓ Noise + MLS |
| Grooming detection | ● Server-side scanning | ✗ None | ✓ On-device graph |
| Sensitive content filtering | ● Cloud-based | ✗ None | ✓ Apple SCA on-device |
| Parental controls | ● Platform-dependent | ✗ None | ✓ PermissionKit + trust gates |
| No server access to messages | ✗ Server reads all | ✓ Encrypted | ✓ No servers at all* |
| Works without internet | ✗ Requires cloud | ✗ Requires cloud | ✓ BLE mesh network |
* Loxation operates peer-to-peer over Bluetooth mesh. Optional Nostr relay fallback for offline peers encrypts messages end-to-end before relay.
You shouldn't have to choose between reading your child's messages and protecting them.
For twenty years, child safety online has meant one thing: give a corporation access to everything. Read every message. Scan every image. Build a profile. Hope the watchers are trustworthy.
The alternative was encrypted isolation — safe from corporations, but also safe from any form of protection.
Loxation proves there's a third path. Move the intelligence to the edge. Let the device that a child trusts with their conversations also be the device that watches for danger. Use the shape of relationships — not the content of messages — as the primary safety signal. Bring AI to the Neural Engine instead of the cloud. Preserve encryption absolutely while making protection tangible.
Privacy and child safety were never actually in opposition. We just didn't have the right architecture.
Loxation is building the future of safe, private communication for families. On-device. Encrypted. Intelligent.
Learn more at loxation.com