Sara Loera
I build AI infrastructure with a simple philosophy: intelligent software should behave like real software—observable, reproducible, secure, and safe to ship. I approach the work as systems engineering, grounded in feedback loops, failure modes, constraints, and incentives. That lens matters even more in AI, where reliability is harder to achieve and failures are costlier than in conventional software.
Currently building
Noēsis—a cognitive runtime that makes agent runs traceable, replayable, and governable through durable artifacts, deterministic replay, and policy gates. A brief sketch of the policy-gate idea follows the list below.
Reliability—deterministic replay, fault isolation, graceful degradation
Auditability—trace capture, decision provenance, compliance-ready logs
Governance—policy enforcement, guardrails, safe change control
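To make the policy-gate idea concrete, here is a minimal sketch in Python. It is not Noēsis code: every name in it (PolicyGate, Decision, Action, the rule shape) is hypothetical. It only illustrates the pattern named above: each agent action passes through a gate that evaluates rules and records the action and the decision together as an auditable artifact.

```python
# Hypothetical illustration of a policy gate for agent actions.
# None of these names come from Noēsis; they sketch the general idea of
# checking an action against rules and recording an auditable decision.
from __future__ import annotations

import json
import time
from dataclasses import asdict, dataclass, field
from typing import Callable


@dataclass
class Action:
    tool: str      # e.g. "shell", "http", "db.write"
    payload: dict  # arguments the agent wants to pass


@dataclass
class Decision:
    allowed: bool
    reason: str
    timestamp: float = field(default_factory=time.time)


class PolicyGate:
    """Evaluates rules against an action and logs every decision,
    so each run leaves an auditable trail of what was allowed and why."""

    def __init__(self, rules: list[Callable[[Action], str | None]]):
        self.rules = rules  # each rule returns a denial reason, or None to pass
        self.audit_log: list[dict] = []

    def check(self, action: Action) -> Decision:
        for rule in self.rules:
            reason = rule(action)
            if reason is not None:
                decision = Decision(False, reason)
                break
        else:
            decision = Decision(True, "all rules passed")
        # Record action and decision together: this is the provenance artifact.
        self.audit_log.append({"action": asdict(action), "decision": asdict(decision)})
        return decision


# Example rule: deny shell commands outright.
def no_shell(action: Action) -> str | None:
    return "shell access is not permitted" if action.tool == "shell" else None


gate = PolicyGate(rules=[no_shell])
print(gate.check(Action(tool="shell", payload={"cmd": "rm -rf /"})))
print(json.dumps(gate.audit_log, indent=2))
```

In a real runtime the recorded artifacts would be durable rather than in-memory, so a run can be replayed and audited after the fact; the list here just keeps the sketch small.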
Background
Production engineering and applied ML. Shipped systems and ML pipelines from training through deployment. Now building tools for agentic software.
Interested in Noēsis or want to collaborate? Let's talk.