Mimansa Jaiswal

I'm actively exploring research scientist and engineering roles focused on post-training and evaluation of large language models (LLMs), including reward modeling, RLHF/RLAIF, DPO/GRPO, scalable synthetic data generation and orchestration pipelines, automated evaluation frameworks, and human-aligned metric design. I'm also interested in applied work at the intersection of LLMs, productivity, and health (see Publications and research notes). If you have a role in mind, please reach out to me.

I'm also open to collaborations in these areas. Whether you have a project idea, want to discuss research, or just want to connect, please pick a time slot here.

(View my resume)

Last Updated @ Oct 30, 2025

Hi, I am Mimansa

LinkedIn | GScholar | Twitter | Bluesky | Resume | ✉️ me @: mimansa.jaiswal@gmail.com


My work focuses on LLM post-training and evaluation, including reward modeling, RLHF/RLAIF, DPO/GRPO, scalable synthetic data generation and orchestration pipelines, and automated evaluation frameworks with interpretable, human-aligned metrics.

I obtained my Ph.D. in Computer Science from the University of Michigan, advised by Prof. Emily Mower Provost in the CHAI Lab. I've worked at Meta AI (MSL), Norm AI, Allen Institute for AI (AllenNLP), Facebook AI Research (FAIR) NLP, and Facebook Research Conversational AI.

Beyond research, I'm passionate about science communication, sketchnoting, personal knowledge management, and cooking.

I ❤️ cats, and they keep me sane. I have two: Oreo and Bert (yes, that Bert, you read it right!).


Updates

2025

February I started working on Meta’s MSL team, focusing on post-training for production model use cases.

Previously,
2023

October I submitted my thesis and joined Norm AI as a Member of Technical Staff. I will be working on evaluation and RAG.

May Defended my PhD titled Implicit Design Choices and Their Impact on Emotion Recognition Model Development and Evaluation. Read my acknowledgement page here.

2022

May I was selected as one of the 8 Barbour fellows for 2022-2023 across all of UMich.

2021

November Defended my PhD proposal, titled Implicit Design Choices and Their Impact on Emotion Recognition Model Development and Evaluation.

October I will be presenting our work on sociolinguistics-inspired privacy evaluation at the Text as Data (TADA) 2021 conference at UMich, Ann Arbor.

September I am interning this fall at Allen AI with Ana Marasović, working on evaluation and interpretability.

May Excited to be interning in the FAIR NLP group this summer and looking forward to working at the intersection of linguistics and ML.

2020

September I have been chosen as the student representative for faculty hiring at my university for the 2020-21 season. Looking forward to learning about and sharing the 'usually hidden' processes.

May Excited to be interning in the Conversational AI group at Facebook AI this summer, working on automated dialogue evaluation.

2019

December I'll be attending NeurIPS. Shoot me an email if you want to meet and talk about anything!

January I'll be a teaching assistant for the course on Applied Machine Learning for Affective Computing with my advisor. So excited to be teaching for the first time!

Experience

Feb 2025 - present Research Scientist, MSL, Meta AI

Part of Meta AI's Product Capabilities team, developing and fine-tuning large language models using post-training techniques including RLHF/RLAIF (with and without checklists) and DPO/GRPO with reward modeling to improve production models.
Building end-to-end synthetic data generation pipelines and automated evaluation systems, including rubric-based judges, preference data collection workflows, and quality assessment frameworks for model improvement and iterative optimization.

Oct 2023 - Oct 2024 Member of Technical Staff, Norm AI

Developed domain-specific retrieval-augmented generation (RAG) systems for legal applications, using synthetic data augmentation to expand training datasets, create evaluation benchmarks, and improve model coverage.
Built agentic LLM simulation frameworks to model human group perception and consensus-building for subjective law interpretation and policy evaluation.

Fall 2021 Research Intern, AllenNLP, Allen Institute for Artificial Intelligence
Mentor(s) : Ana Marasović

Developed interpretable, decompositional evaluation frameworks for GPT-3 and other language models, creating benchmarks for multi-aspect quality assessment to measure fine-grained model capabilities and identify failure modes in natural language understanding tasks.

Summer 2021 Research Intern, NLP Team, Facebook AI Research (Meta AI)
Mentor(s) : Adina Williams, Scott Yih, Pedro Rodriguez

Identified systematic failure patterns in Natural Language Inference (NLI) through adversarial testing and error analysis, developing correction strategies and data augmentation techniques to improve model robustness on challenging examples using the CTRL model.

Summer 2020 Research Intern, Conversational AI, Facebook (Meta) AI
Mentor(s) : Ahmad Beirami, Shane Moon, Satwik Kottur, Chinnadhurai Sankar

Developed interpretable user satisfaction metrics for conversational AI systems by integrating human knowledge and behavioral signals, and trained weakly supervised hierarchical label models.
Created generalizable evaluation frameworks combining quantitative metrics with qualitative human feedback for dialogue quality assessment.

Previously,
Summer 2016 Research Intern, SENTIC Lab, Nanyang Technological University, Singapore
Mentor(s) : Erik Cambria

Developed automated multi-modal deception detection frameworks. Built models to identify empathetic and non-empathetic responses in user comments and predict where empathetic responses are most needed.

Winter 2016 Research Intern, NLP and Sentiment Analysis (NLPSA) Lab, Academia Sinica, Taiwan
Mentor(s) : Lun-Wei Ku

Designed sentiment visualization systems using color-based emotional mapping for instant messaging platforms. Implemented Gaussian Mixture Models (GMMs) for context-aware sentence recommendation to help ESL learners distinguish between synonyms.

Summer 2015 Research Intern, Indraprastha Institute of Information Technology (IIIT), Delhi
Mentor(s) : Pravesh Biyani

Analyzed user preferences and behavioral factors influencing ride-sharing decisions. Developed route optimization algorithms for cab-pooling systems with plans to integrate personalized recommendation features based on historical ride preferences.

Education

2017 - 2023 PhD in Computer Science and Engineering
University of Michigan
Advisor: Prof. Emily Mower Provost
Lab: Computational Human Analysis and Integration (CHAI) Lab
Thesis: Implicit Design Choices and Their Impact on Emotion Recognition Model Development and Evaluation
Awarded Barbour Fellowship (selected from 1,000+ applicants across all of UMich)

2017 - 2019 MS in Computer Science and Engineering
University of Michigan
Advisor(s): Prof. Emily Mower Provost

2013 - 2017 BTech in Computer Engineering
IET Devi Ahilya University, India
Advisor(s): Prof. G.L. Prajapati