Mimansa Jaiswal
Updated Jun 6, 2025

I am passively interested in talking about industry research scientist and engineering positions in model evaluation and benchmarking, metric design, post-training alignment, model explanation and interpretation, and work at the intersection of LLMs and productivity or health (see publications and research notes). If you have any in mind, please reach out to me.

I am also interested in collaborations in the aforementioned areas. If you have an interesting project and would like to collaborate, or just want to talk about research in general, please pick a time slot here.

(View my resume)

Hi, I am Mimansa

LinkedIn | GScholar | Twitter | Bluesky | Resume | ✉️ me @: mimansa.jaiswal@gmail.com

I obtained my doctorate in CS at the University of Michigan under Prof. Emily Mower Provost as part of the CHAI group. My research focuses on developing cost-effective data collection and generation procedures, LLM orchestration schemas, and designing evaluation methods and metrics that are interpretable and human-aligned.

I've previously interned at Allen AI NLP, Facebook AI Research (FAIR) NLP, and Facebook Research Conversational AI.

Besides research, I am interested in science communication, sketchnoting, personal knowledge management and cooking.

I ❤️ cats, and they keep me sane. I have two: Oreo and Bert (yes, that Bert, you read it right!).


Experience

  • Feb 2025 - present Research Scientist, Meta AI
    Part of Meta AI's Capabilities team, focused on large language model (LLM) fine-tuning and post-training techniques (RLHF, synthetic data generation and evaluation)
  • Oct 2023 - Oct 2024 Member of Technical Staff, Norm AI
    Retrieval augmentation and persona-based simulation in LLMs, and their subjective evaluation
  • Fall 2021 Research Intern, Allen NLP, Allen Institute for Artificial Intelligence
    Mentor(s) : Ana Marasović
    Benchmarking interpretable and compositional evaluation for very large models in natural language processing
  • Summer 2021 Research Intern, NLP Team, Facebook AI Research (Meta AI)
    Mentor(s) : Adina Williams, Scott Yih, Pedro Rodriguez
    Identifying, understanding, and correcting ambiguity-based failure points in Natural Language Inference (NLI)
  • Summer 2020 Research Intern, Conversation AI, Facebook (Meta) AI
    Mentor(s) : Ahmad Beirami, Shane Moon, Satwik Kottur, Chinnadhurai Sankar
    Defining an interpretable and generalizable user satisfaction metric informed by human knowledge
  • Summer 2016 Research Intern, SENTIC Lab, Nanyang Technological University, Singapore
    Mentor(s) : Erik Cambria
    Multimodal detection of human behavior: empathy and deception
  • Winter 2016 Research Intern, NLP and Sentiment Analysis (NLPSA) Lab, Academia Sinica, Taiwan
    Mentor(s) : Lun-Wei Ku
    Human perception of conversation sentiment visualization in chat interfaces
  • Summer 2015 Research Intern, Indraprastha Institute of Information Technology (IIIT), Delhi
    Mentor(s) : Pravesh Biyani
    Finding optimal routes under user-specified constraints for last-mile cab-pooling

Education