My research interests currently center on fairness, robustness, and capabilities of language and multimodal foundation models.
I’ve been fortunate to be mentored by Finale Doshi-Velez at the Harvard Data to Actionable Knowledge (DtAK) Lab, Cynthia Dwork at the Harvard Kempner Institute, and Lionel Levine at the Cornell Long-term AI Safety Research (LAISR) lab.
Select Publications
- Reid McIlroy and Katrina Brown et al. “Set-Based Prompting: Provably Solving the Language Model Order Dependency Problem”. In NeurIPS 2024, Vancouver, Canada.
- Katrina Brown and Reid McIlroy. “Order Independence With Finetuning”. In Bi-Align Workshop, ICLR 2025, Singapore.
- Sid Bharthulwar*, John Rho*, and Katrina Brown*. “Evolutionary Prompt Optimization Discovers Emergent Multimodal Reasoning Strategies in Vision-Language Models”. In Reasoning and Planning Workshop, ICLR 2025, Singapore.
- Katrina Brown, Marton Havasi, and Finale Doshi-Velez. “Diverse Concept Proposals for Concept Bottleneck Models”. In Human-Machine Collaboration and Teaming Workshop, ICML 2022, Hawaii.
* denotes equal contribution.
I’m currently interested in inference-time scheduling for reasoning models and the tradeoffs between reasoning and multi-agent debate.
If you’re a fan of these topics, let’s chat!