Research

My research develops mathematically rigorous foundations for trustworthy artificial intelligence. I study how learning-enabled and autonomous systems can be designed to operate safely, fairly, and accountably, even in complex, uncertain, and high-stakes settings. My work spans formal verification, reinforcement learning, explainability, and the integration of logic-based reasoning with data-driven methods.

A unifying theme of my research is that intelligent systems should not only perform well, but also admit principled reasoning about their behavior. To this end, I develop specification-aware learning algorithms, symbolic abstractions, and hybrid reasoning frameworks that enable verification, explanation, and accountability.

Formal Methods for Reinforcement Learning

Foundations for reasoning about learning agents beyond finite-state, episodic settings.

I investigate the foundational and algorithmic integration of formal methods with reinforcement learning, aiming to enable reliable, structured decision-making in complex environments. My work addresses settings where classical RL abstractions fall short, including temporal objectives, recursive behavior, and symbolic or continuous state spaces.
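
To make the integration concrete, the sketch below shows the standard product construction underlying much of this line of work: a temporal objective is compiled into an automaton, and the agent learns over the joint environment-automaton state. The environment API and proposition names are illustrative assumptions, not the interface of any specific system.

```python
# Minimal sketch: learning against a temporal objective by composing the
# environment with an automaton compiled from the objective (e.g., from an
# LTL formula). Names and the environment API here are illustrative.

class ObjectiveAutomaton:
    """Tracks progress toward 'reach the goal while avoiding unsafe states'."""

    def __init__(self):
        self.state = "init"

    def step(self, label):
        # label: set of atomic propositions true in the current MDP state
        if self.state == "init" and "unsafe" in label:
            self.state = "violated"
        elif self.state == "init" and "goal" in label:
            self.state = "accepting"
        return self.state


def product_step(env_step, automaton, action):
    """One transition of the environment-automaton product.

    The agent learns over (env_state, automaton_state) pairs, which restores
    the Markov property for the temporal objective and lets ordinary RL
    algorithms optimize it via a sparse automaton-derived reward.
    """
    env_state, label, done = env_step(action)  # hypothetical environment API
    aut_state = automaton.step(label)
    reward = 1.0 if aut_state == "accepting" else 0.0
    done = done or aut_state in ("accepting", "violated")
    return (env_state, aut_state), reward, done
```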

Trustworthy Reasoning with Learning and LLMs

Combining learning with symbolic reasoning to produce explanations and guarantees.

I develop methods for enabling large language models (LLMs) to reason in ways that are trustworthy, auditable, and verifiable. Rather than treating LLMs as monolithic predictors, my work integrates them with symbolic tools such as formal logic, automata, and SMT solvers to produce explanations and decisions grounded in formal reasoning.

I design modular and multi-agent architectures that separate planning, reasoning, and explanation, allowing LLMs to generate structured proofs, translate formal artifacts into human-understandable explanations, and critique outputs for logical consistency and regulatory compliance.
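
As one concrete instance of the critique step, the sketch below checks a set of claims for logical consistency using the Z3 SMT solver, assuming an upstream component has already translated the LLM's natural-language claims into formulas. The claims themselves are illustrative.

```python
# Minimal sketch of an SMT-based consistency critique, assuming an upstream
# step has translated an LLM's claims into logical constraints. Uses the Z3
# solver (pip install z3-solver); the example claims are illustrative.
from z3 import Bool, Implies, Not, Solver, unsat

eligible = Bool("eligible")
dependent = Bool("dependent")
claims = [
    Implies(dependent, eligible),  # "anyone with a dependent is eligible"
    dependent,                     # "the filer has a dependent"
    Not(eligible),                 # "the filer is not eligible"
]

solver = Solver()
solver.add(claims)
if solver.check() == unsat:
    # The claims conflict; an unsat core (via Z3's assert_and_track) can
    # pinpoint a minimal conflicting subset to feed back as a critique.
    print("Claims are logically inconsistent.")
else:
    print("Claims are consistent; witness:", solver.model())
```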

AI, Software, and Accountability

Auditing and debugging AI-driven software in legally and socially critical domains.

I study the interplay between algorithmic decision-making, software systems, and legal and ethical accountability. My research develops formal and data-driven techniques for auditing, testing, and explaining software behavior in domains where correctness has legal or societal consequences.

Application domains include tax preparation software, juvenile justice, and public services, where transparency and accountability are critical.

Selected papers: Parfait-ML, Metamorphic Testing of Tax Software
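
The metamorphic-testing work illustrates the flavor of these techniques: since there is no ground-truth oracle for the correct tax amount, correctness is checked through relations between runs. Below is a minimal, hypothetical sketch of one such relation; compute_tax is a stand-in for the system under test, not the interface of the actual tool.

```python
# Hypothetical sketch of a metamorphic test for tax software. The oracle
# problem (no ground-truth tax value) is sidestepped by checking a relation
# between two runs: raising income, all else fixed, must not lower tax owed.
import random


def compute_tax(income: float, deductions: float) -> float:
    # Placeholder for the real system under test.
    return max(0.0, income - deductions) * 0.2


def check_monotonicity(income: float, deductions: float,
                       bump: float = 1000.0) -> bool:
    """Metamorphic relation: more income must not mean less tax owed."""
    return compute_tax(income + bump, deductions) >= compute_tax(income, deductions)


if __name__ == "__main__":
    violations = sum(
        not check_monotonicity(random.uniform(0, 2e5), random.uniform(0, 3e4))
        for _ in range(1000)
    )
    print(f"{violations} violations of the monotonicity relation in 1000 trials")
```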

Secure & Safe Cyber-Physical Systems (CPS)

Formal guarantees for safety, security, and control under uncertainty.

I design provably correct and resilient controllers for safety-critical cyber-physical systems. My work combines control theory, formal verification, and symbolic reasoning to ensure safety and security in dynamic and uncertain environments.
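
For illustration, one simple sufficient formulation of a barrier certificate B witnesses safety of a dynamical system with vector field f, state set X, initial set X_0, and unsafe set X_u via three conditions; the papers below work with related, more general formulations.

```latex
% One simple sufficient formulation of a barrier certificate B for a
% system \dot{x} = f(x) over state set X, with initial set X_0 and
% unsafe set X_u.
\begin{align*}
  B(x) &\le 0 && \forall x \in X_0, \\
  B(x) &> 0  && \forall x \in X_u, \\
  \nabla B(x) \cdot f(x) &\le 0 && \forall x \in X.
\end{align*}
```

Since B is non-increasing along trajectories, any execution starting in X_0 remains in the zero-sublevel set of B and can never enter X_u; sum-of-squares programming or neural network training can then search for such a B.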

Selected papers: Closure Certificates, Neural Control Barrier Certificates

Methods & Themes

Across these areas, my research draws on formal verification (model checking, SMT solving, sum-of-squares programming), reinforcement learning, automata theory, and program analysis. A recurring theme is the design of intelligent systems whose learning behavior can be reasoned about, explained, and held accountable.