AI on Your Phone: How Loxation Computes Trust and Compatibility Without a Server
Loxation Team
- 8 minutes read - 1680 words
Every social app you use today ships your behavior to a data center, feeds it through collaborative filtering, and returns a recommendation. Your social graph lives on someone else’s infrastructure. Loxation eliminates that architecture entirely. Trust scoring, compatibility matching, vector similarity search, and LLM-powered safety analysis all run on-device — powered by a stack that combines a local graph database, a formal reasoning engine, and Apple Intelligence. No server ever learns who you match with.
This post is a technical deep-dive into how we made that work at interactive speeds on mobile hardware.
The Problem With Server-Side Social Intelligence
Centralized recommendation systems create three interlocking problems. First, they require your social graph to exist in plaintext on a server, which makes end-to-end encryption performative — you encrypted the messages, but the platform still knows who you talk to, how often, and who you’re interested in. Second, they create a single point of failure for both availability and privacy. Third, they’re opaque: users have no way to inspect why the system thinks two people are compatible.
Loxation’s approach is different by construction. All app messaging is end-to-end encrypted, so a server couldn’t listen even if it wanted to. And the intelligence layer — the part that answers “should I trust this person?” and “would I want to connect with them?” — runs entirely in a local graph database with a formal reasoning engine.
Two Dimensions, Two Ontologies
Loxation maintains two independent scores per peer, each computed by a separate fuzzy EL++ ontology:
Trust (0.0–1.0) is a safety and reliability signal. It draws on behavioral evidence: mutual favorites, message reciprocity, group co-membership, block history, and LLM-assessed conversation patterns. Trust answers the question every mesh network user needs answered first: is this peer safe to interact with?
Compatibility (0.0–1.0) is a match-quality signal. It draws on profile data: shared keywords, emoji preferences, interest tags, seeking/identity alignment, bio similarity via vector embeddings, communication balance, and social signals like waves. Compatibility answers the follow-up question: would I want to?
Keeping these as separate ontologies with independent axiom sets is a deliberate architectural choice. A peer who shares all your interests but has exhibited concerning behavior should score high on compatibility and low on trust — not have one inflate the other.
Inside the Reasoning Engine
Why Fuzzy Description Logic?
Description Logics are a family of formal knowledge representation languages used extensively in biomedical ontologies, enterprise knowledge graphs, and the Semantic Web. EL++ is a tractable fragment — expressive enough to model concept intersection and existential quantification, but classifiable in polynomial time. That tractability is what makes it viable for real-time mobile computation.
The “fuzzy” extension replaces binary truth values with degrees between 0.0 and 1.0. When Loxation observes 60% keyword overlap between two users, it asserts SharedKeyword(peer, degree=0.6) rather than making a binary judgment. The reasoner propagates these degrees through subsumption axioms, producing graduated output scores rather than hard categories.
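To make the propagation concrete, here is a minimal sketch of how a degree can flow through a conjunctive axiom under min-based (Gödel) semantics, one standard choice for fuzzy EL++. The specific t-norm, axiom, and values below are illustrative assumptions, not Dealer internals:

```swift
// Illustrative sketch (not the Dealer implementation): under Gödel
// semantics a conjunction takes the minimum of its inputs, and an
// axiom annotated with a degree caps the derived concept's degree.

// Observed signal degrees for one peer (hypothetical values).
let signals: [String: Double] = [
    "SharedKeyword": 0.6,  // 60% keyword overlap
    "MutualWave": 1.0,     // boolean signal asserted at full degree
]

// Hypothetical axiom: SharedKeyword AND MutualWave -> InterestAligned (0.9).
// Derived degree = min(inputs..., axiomDegree).
func derive(_ inputs: [Double], axiomDegree: Double) -> Double {
    return min(inputs.min() ?? 0.0, axiomDegree)
}

let aligned = derive(
    [signals["SharedKeyword"]!, signals["MutualWave"]!],
    axiomDegree: 0.9
)
// The weakest piece of evidence (0.6) bounds the conclusion.
```

Note how the boolean signal (asserted at 1.0) never inflates the result; the graduated signal dominates.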
Incremental ABox: The Performance Key
The Dealer reasoner (Rust, accessed via C FFI) loads and classifies the ontology’s TBox (terminological knowledge — the class hierarchy and axioms) once at startup. This is the expensive step. After that, per-peer classification uses an incremental ABox pattern:
1. Reset to the TBox-only checkpoint — `dealer_reasoner_reset_abox()`
2. Assert signal degrees for one peer — `dealer_reasoner_add_class_assertion(concept, individual, degree)`
3. Classify via fuzzy subsumption — outputs a category and score
Cost: ~0.3ms per peer. On the Signals visualization page, where users drag a trust slider and watch the graph reorganize in real time, this latency is imperceptible. Slide a contact’s trust from 0.4 to 0.8, and bidirectional propagation ripples through their groups and back to other members — every affected node shifts position live.
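The reset → assert → classify cycle can be modeled with a toy reasoner. The real calls cross the C FFI into Dealer; the mock below, including its single hardcoded axiom, is invented purely to show the shape of the loop:

```swift
// Toy model of the incremental ABox pattern. In the app, each method
// corresponds to a Dealer FFI call (dealer_reasoner_reset_abox, etc.).
struct MockReasoner {
    private var abox: [String: Double] = [:]

    mutating func resetABox() { abox.removeAll() }  // back to TBox-only state

    mutating func addClassAssertion(_ concept: String, degree: Double) {
        abox[concept] = degree
    }

    // Classify against one invented axiom:
    // SharedKeyword AND MutualWave -> HighCompatibility (0.95)
    func classify() -> (category: String, score: Double) {
        let inputs = ["SharedKeyword", "MutualWave"].map { abox[$0] ?? 0.0 }
        let score = min(inputs.min() ?? 0.0, 0.95)
        return (score > 0.5 ? "HighCompatibility" : "LowCompatibility", score)
    }
}

var reasoner = MockReasoner()
var results: [String: Double] = [:]
for (peer, overlap) in [("alice", 0.8), ("bob", 0.2)] {
    reasoner.resetABox()                                          // step 1
    reasoner.addClassAssertion("SharedKeyword", degree: overlap)  // step 2
    reasoner.addClassAssertion("MutualWave", degree: 1.0)
    results[peer] = reasoner.classify().score                     // step 3
}
```

Because only the ABox changes between peers, the expensive TBox classification from startup is never repeated inside the loop.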
The Three-Layer Intelligence Stack
Layer 1: Rukuzu Graph Database
Rukuzu is an embedded graph database (Rust-based, built on Kùzu’s storage engine) that runs entirely on-device. It stores the full social graph — Device nodes with trust scores, compatibility scores, and mirrored profile data; relationship edges for MEMBER_OF, SENT_TO, WAVED_AT, FAVORITED, BLOCKED.
The critical design decision: profile fields are mirrored into the graph as native properties. Keywords, emojis, and interests become typed arrays. Bio text becomes a string column. Vector embeddings become a FLOAT[384] column with an HNSW index. This enables graph-native fuzzy computation through Cypher:
```cypher
-- Single query gathers most compatibility signals at once
WITH self, d,
     CAST(size(list_intersect(self.keywords, d.keywords)) AS DOUBLE) /
     CAST(size(list_union(self.keywords, d.keywords)) AS DOUBLE) AS keyword_jaccard,
     array_cosine_similarity(self.embedding, d.embedding) AS semantic_score
```
One database round-trip returns keyword Jaccard, emoji Jaccard, interest overlap, semantic similarity, wave status, shared groups, and message reciprocity. These feed directly into the signal set that the reasoner classifies.
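For readers who prefer Swift to Cypher, the keyword Jaccard term above reduces to a few lines. This is illustrative only; in Loxation the computation runs inside the graph query itself:

```swift
// Jaccard similarity: |intersection| / |union| over two keyword sets.
// Mirrors the list_intersect / list_union arithmetic in the Cypher query.
func jaccard(_ a: Set<String>, _ b: Set<String>) -> Double {
    let union = a.union(b)
    guard !union.isEmpty else { return 0.0 }  // two empty profiles share nothing
    return Double(a.intersection(b).count) / Double(union.count)
}

let mine: Set = ["hiking", "jazz", "rust", "coffee"]
let theirs: Set = ["jazz", "coffee", "climbing", "rust"]
let keywordJaccard = jaccard(mine, theirs)  // 3 shared / 5 total = 0.6
```

A 0.6 overlap like this is exactly the kind of graduated degree that gets asserted as SharedKeyword.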
Layer 2: Dealer Fuzzy EL++ Reasoner
The reasoner operates on declarative ontology files — OWL documents that define signal classes, intermediate concepts, and output categories with fuzzy degree annotations:
```
MutualWave AND InterestAligned → HighCompatibility (0.95)
InterestAligned AND FilterAligned → GoodCompatibility (0.8)
SeekingMen AND Male → CompatibleKeyword (0.9)
```
These axioms are the entire compatibility logic. No switch statements, no hardcoded scoring weights, no feature flags. Adding a new compatibility dimension means adding classes and axioms to the ontology file and keyword mappings to a JSON config. Zero application code changes.
The same architecture powers trust scoring with a separate ontology (trust.ofn). Trust categories map to score bands — SelfTrust at 1.0, ExplicitTrust at 0.85–1.0, down to Untrusted at 0.0–0.1 — with the Dealer reasoner handling all the inference.
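As an illustration of the banded mapping, clamping a classified category's fuzzy degree into its score band might look like the sketch below. The actual bands are declared in trust.ofn; only the three categories named above are modeled here, and the clamping rule is an assumption:

```swift
// Hypothetical sketch: map a trust category plus its raw fuzzy degree
// into the category's score band. Bands are those named in the post;
// the real definitions live in the trust.ofn ontology file.
enum TrustCategory {
    case selfTrust, explicitTrust, untrusted

    var band: ClosedRange<Double> {
        switch self {
        case .selfTrust:     return 1.0...1.0
        case .explicitTrust: return 0.85...1.0
        case .untrusted:     return 0.0...0.1
        }
    }
}

func trustScore(_ category: TrustCategory, rawDegree: Double) -> Double {
    // Clamp the reasoner's degree into the category's band.
    min(max(rawDegree, category.band.lowerBound), category.band.upperBound)
}

let explicit = trustScore(.explicitTrust, rawDegree: 0.7)  // clamped up to 0.85
```

Keeping the bands in the ontology rather than in code like this is the point: the Swift side only ever sees a category and a degree.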
Layer 3: On-Device LLM + Vector Search
Apple Intelligence (and in the future, Android-local LLMs) serves as an input to both scoring pipelines:
For compatibility: The LLM generates 384-dimensional vector embeddings of profile text (bio + about + interests). These embeddings are stored on Device nodes in Rukuzu and indexed with HNSW for sub-millisecond k-NN search. array_cosine_similarity() between your embedding and a peer’s produces a SemanticMatch degree — two people who describe similar passions in completely different words still score well. This is the signal that captures meaning beyond keyword overlap.
For trust: Apple Foundation Models power an on-device safety analysis interface. Through tool-augmented generation, the local LLM can analyze message threads for concerning patterns, surface contacts flagged by the graph’s risk signals, and query recent messages — all without any data leaving the device. The graph provides structured safety tools (AnalyzeThreadSafetyTool, FlaggedContactsTool, GetRecentMessagesTool) that the LLM calls as needed.
Graceful degradation: On devices without Apple Intelligence or a suitable local model, vector-based signals simply aren’t asserted. The ontology handles missing signals naturally — fewer inputs produce a lower score, not an error. The structured signals (keyword overlap, group membership, waves) still provide meaningful scoring on their own.
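The SemanticMatch signal and its degradation path can be sketched together: cosine similarity over embedding vectors (384 dimensions in production, three here), returning nil when either embedding is missing so the signal is simply never asserted. Function and variable names are illustrative, not the app's actual API:

```swift
// Cosine similarity in the spirit of array_cosine_similarity, with
// graceful degradation: a missing embedding yields nil, and a nil
// signal is never asserted into the ABox -- lower score, not an error.
func semanticMatch(_ a: [Double]?, _ b: [Double]?) -> Double? {
    guard let a = a, let b = b, a.count == b.count, !a.isEmpty else {
        return nil  // no local model ran, or dimensions disagree
    }
    var dot = 0.0, normA = 0.0, normB = 0.0
    for i in a.indices {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    let denom = (normA * normB).squareRoot()
    return denom > 0 ? dot / denom : nil
}

var peerSignals: [String: Double] = ["SharedKeyword": 0.6]  // always available
if let degree = semanticMatch([0.1, 0.9, 0.4], [0.1, 0.9, 0.4]) {
    peerSignals["SemanticMatch"] = degree  // identical embeddings: 1.0
}
let missing = semanticMatch(nil, [0.1, 0.9, 0.4])  // nil: signal not asserted
```

The ontology side needs no special case for the nil path; an unasserted class simply contributes degree 0 to any conjunction it appears in.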
Compatibility Signal Architecture
The full signal set demonstrates the hybrid graph + Swift + ontology approach:
| Signal | Source | Degree |
|---|---|---|
| SharedKeyword | Cypher list_intersect Jaccard | 0.0–1.0 (overlap ratio) |
| SharedEmoji | Cypher list_intersect Jaccard | 0.0–1.0 |
| SharedInterest | Cypher list_intersect Jaccard | 0.0–1.0 |
| SemanticMatch | Cypher array_cosine_similarity on LLM embeddings | 0.0–1.0 (cosine sim) |
| CompatibleKeyword | Keyword→ontology class mapping + Dealer axioms | From axiom (e.g. 0.9) |
| SharedGroupMember | Cypher graph traversal | 1.0 (boolean) |
| WaveReceived | Cypher WAVED_AT edge query | 1.0 (boolean) |
| MutualWave | Cypher bidirectional WAVED_AT | 1.0 (boolean) |
| CommunicationReciprocity | SENT_TO balance ratio | 0.0–1.0 |
| ProfileMatchScore | Cypher DiscoveryFilters evaluation | 0.0–1.0 (criteria ratio) |
| ProximityBoost | Active BLE connection check | 1.0 (boolean) |
Graduated signals (keyword overlap at 0.6) and boolean signals (mutual wave at 1.0) coexist naturally in the fuzzy framework. The reasoner combines them through axiom-defined intersections to derive intermediate concepts (InterestAligned, ActiveInterest, EngagementHealthy) and final output categories.
Performance Characteristics
| Operation | Latency | Notes |
|---|---|---|
| TBox loading (startup) | ~50ms | Once per app launch, both ontologies |
| Per-peer classification | ~0.3ms | Incremental ABox reset + assert + classify |
| Compound Cypher query | ~2ms | All graph-native signals in one round-trip |
| Vector cosine similarity | <1ms | Native Rukuzu operation |
| k-NN vector search (top-10) | <1ms | HNSW index on Device.embedding |
| Full pipeline (one peer) | <5ms | Query + classify + cache update |
| Batch classify (100 peers) | ~50ms | Linear scaling |
All graph operations are fire-and-forget from the networking layer’s perspective. No BLE, Noise Protocol, or UWB code path blocks on graph results. The Signals page visualization (concentric trust rings with compatibility glow) updates at interactive frame rates during slider manipulation.
The Extensibility Advantage
The declarative architecture makes the system unusually extensible:
New compatibility signals require adding a class to the ontology, an axiom defining how it combines with existing signals, and a data source (Cypher query or Swift computation). No changes to the classification pipeline.
New keyword categories (professional networking, language preferences, accessibility needs) require entries in the JSON config file and corresponding ontology classes. The reasoner handles the rest.
New graph properties in business rules require one line in the graphBindings config section of the rules file. The property immediately becomes available in CEL rule expressions (e.g., peer.compatibilityScore > 0.6).
New trust signals follow the same pattern with the trust ontology. The two systems share infrastructure but maintain independent knowledge bases.
What This Enables
Running the full intelligence stack on-device isn’t just a privacy feature — it unlocks capabilities that server-dependent architectures can’t offer:
Offline-first intelligence. Compatibility and trust scores work in airplane mode, in areas with no cell service, in disaster scenarios — anywhere the BLE mesh operates. The graph database and reasoner have no network dependencies.
Inspectable reasoning. Every score is the output of a formal proof against published axioms. Users can understand why a peer scored the way they did. Auditors can verify that the scoring system behaves as documented. This is a meaningful advantage over neural recommendation systems.
Zero-knowledge social matching. Two users can discover they’re highly compatible without any third party learning that fact. The computation happens locally on each device, using only the profile data already exchanged over the encrypted mesh.
Real-time interactive exploration. The sub-millisecond classification latency enables UI patterns that server-round-trip architectures can’t support — like the Signals page, where adjusting one node’s trust visibly propagates through the entire local graph in real time.
The phone in your pocket has enough compute to run a graph database, a formal reasoner, and a language model simultaneously. Loxation is built on the premise that your social intelligence should run where your data lives — on your device, under your control, at the speed of thought.