Neuro-Logical: Ontology-Grounded Differential Diagnosis with LLM Integration
Jonathan / Loxation / March 2026
If you’ve spent time in clinical informatics, you’ve watched two waves of clinical decision support arrive with great promise and underdeliver for the same underlying reason.
The first wave — rule-based CDS — encoded expert knowledge as if-then logic over coded data. It worked when the data was clean, the rules were narrow, and the clinical context was constrained. It broke on free text, couldn’t scale to broad differential diagnosis, and required manual maintenance that no team could sustain across a SNOMED-scale terminology.
Implementing Neuro-Symbolic Reasoning on the Edge
How we built a fuzzy description logic reasoner in Rust, paired it with a local LLM, and shipped a neuro-symbolic stack that runs entirely on-device.
Neural networks are remarkably good at pattern recognition. They can classify images, transcribe speech, and generate plausible text. But ask a neural network why it arrived at a conclusion, and you get silence. Ask it to guarantee it won’t contradict a known medical fact, and you get a shrug. For applications where correctness, explainability, and domain knowledge matter — healthcare, child safety, trust scoring — pattern matching alone isn’t enough.
By Loxation Team
Why AI Needs Facts: The Case for Layering Ontologies onto LLMs, Graph Databases, and Vector Search
Is there a role for facts in the age of LLMs? Absolutely — and it might be the missing piece that turns AI from a clever parlor trick into genuine domain expertise.
How Children Learn (And What That Tells Us About AI)
Watch a toddler figure out the world. They don’t start with definitions. Nobody hands a two-year-old a taxonomy of animals and says “memorize this.” Instead, they touch things, taste things, hear patterns in language, and slowly — through thousands of messy, unstructured interactions — they start to build an internal model of how things work.
By Loxation Team