Sara Loera

I build AI infrastructure with a simple philosophy: intelligent software should behave like real software—observable, reproducible, secure, and safe to ship. I approach the work as systems engineering, grounded in feedback loops, failure modes, constraints, and incentives. That lens matters more in AI, where reliability is harder and the stakes are higher.

Currently building

Noēsis—a cognitive runtime that makes agent runs traceable, replayable, and governable through durable artifacts, deterministic replay, and policy gates.

Reliability: deterministic replay, fault isolation, graceful degradation

Auditability: trace capture, decision provenance, compliance-ready logs

Governance: policy enforcement, guardrails, safe change control (see the sketch below)
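To make the policy-gate idea concrete, here is a minimal, illustrative sketch. This is not Noēsis's actual API; every name in it is hypothetical. A gate checks each agent action against a simple tool allowlist and appends a provenance record to a trace, whether the action is allowed or denied.

```python
# Illustrative only, not Noēsis's API. A toy policy gate that checks an
# agent action against a tool allowlist and records a provenance entry.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Action:
    tool: str
    args: dict

# Hypothetical policy: only these tools may run without review.
ALLOWED_TOOLS = {"search", "read_file"}

def policy_gate(action: Action, trace: list) -> bool:
    """Return True if the action may run; append a trace record either way."""
    allowed = action.tool in ALLOWED_TOOLS
    trace.append({
        "ts": time.time(),
        "action": asdict(action),
        "decision": "allow" if allowed else "deny",
        "policy": "tool_allowlist_v1",  # hypothetical policy identifier
    })
    return allowed

trace: list = []
for act in [Action("search", {"q": "runtime"}), Action("shell", {"cmd": "rm -rf /"})]:
    if policy_gate(act, trace):
        print(f"running {act.tool}")
    else:
        print(f"blocked {act.tool}")

# The trace is the durable artifact: persist it, and the run can be
# audited or replayed later.
print(json.dumps(trace, indent=2))
```

The design point is that allow and deny decisions land in the same durable trace, which is what makes a run auditable after the fact.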

Background

Production engineering and applied ML. Shipped systems and ML pipelines from training through deployment. Now building tools for agentic software.

Interested in Noēsis or want to collaborate? Let's talk.

v0.1.0 · still iterating