Formal Methods for Reinforcement Learning
Foundations and algorithms for verifying, synthesizing, and reasoning about
reinforcement-learning systems, including temporal objectives, recursion,
and symbolic representations.
Trustworthy Reasoning with Learning and LLMs
Combining formal verification, symbolic reasoning, and learning to produce
explanations and guarantees that are checkable, not just plausible.
AI, Software, and Accountability
Methods for auditing, testing, and debugging high-stakes software systems,
with applications to fairness, legal compliance, and socio-technical
decision-making.
Secure & Safe Cyber-Physical Systems
Rigorous methods for security, privacy, and safety of cyber-physical and
learning-enabled systems, with applications to medical devices and
critical infrastructure.
Aug 2025
Teaching CSCI 5444 (Introduction to the Theory of Computation) this term.
Aug 2025
Our research on explainable AI for Sudoku puzzles was featured in CNET.
Aug 2025
New arXiv preprint with Maria Leonor Pacheco, Fabio Somenzi, and Dananjay Srinivas: "Explaining Hitori Puzzles: Neurosymbolic Proof Staging for Sequential Decisions."
Aug 2025
Paper accepted at ASE 2025: "Uncovering Discrimination Clusters: Quantifying and Explaining Systematic Fairness Violations."
Aug 2025
Paper accepted for presentation at CDC 2025: "Objective Improvement Algorithm for Controller Synthesis in Uncertain Environments."