Associate Professor of Computer Science · University of Colorado Boulder
My research develops formal methods for reinforcement learning and trustworthy AI, with a focus on verification and accountability in high-stakes decision-making systems.
Foundations and algorithms for verifying, synthesizing, and reasoning about
reinforcement-learning systems, including temporal objectives, recursion,
and symbolic representations.
Trustworthy Reasoning with Learning and LLMs
Combining formal verification, symbolic reasoning, and learning to produce
explanations and guarantees that are checkable, not just plausible.
AI, Software, and Accountability
Methods for auditing, testing, and debugging high-stakes software systems,
with applications to fairness, legal compliance, and socio-technical
decision-making.
Secure & Safe Cyber-Physical Systems
Rigorous methods for security, privacy, and safety of cyber-physical and
learning-enabled systems, with applications to medical devices and
critical infrastructure.
Recent news
Feb 2026
Hiring two Student Research Assistants for a project on Cardiac Digital Twins and Reinforcement Learning (Spring 2026).
Aug 2025
Teaching CSCI 5444 (Introduction to the Theory of Computation) this term.
Aug 2025
Our research on AI explainability in Sudoku was featured in CNET.
Aug 2025
New arXiv preprint with Maria Leonor Pacheco, Fabio Somenzi, and Dananjay Srinivas: Explaining Hitori Puzzles: Neurosymbolic Proof Staging for Sequential Decisions.
Aug 2025
Paper accepted at ASE 2025: Uncovering Discrimination Clusters: Quantifying and Explaining Systematic Fairness Violations.
I’m recruiting PhD students, and occasionally postdocs, interested in formal methods,
reinforcement learning, cyber-physical systems, and AI accountability. Applicants with
strong theoretical foundations and a curiosity about real-world impact are especially welcome.