Mimansa Jaiswal

I'm Mimansa, a fourth-year PhD candidate in Computer Science (AI/Interactive Systems) at the University of Michigan. I am fortunate to be working with Prof. Emily Mower Provost as part of the CHAI group. I completed my undergraduate degree in Computer Engineering at the Institute of Engineering and Technology, Indore in 2017, and worked with Prof. G.L. Prajapati for my bachelor's thesis.

Research interests

My present research interests fall under the umbrella of Robust and Interpretable Human-Centered Computing. I work on robust and interpretable systems for social signal processing, applying natural language understanding and speech processing to problems such as conversation and discourse analysis and emotion modeling. I am primarily interested in understanding what causes state-of-the-art machine learning models to fail [Failure Analysis of ML Models] or perform differently than expected [Evaluation of ML Models]; how these failure points can be used by an expert [Model Testing and Debugging] and how predictions can be explained to a general audience [Model Explanation]; how we can train or tune these models such that they do not learn about certain variables [Robust and Private ML Models]; and how these models can benefit from existing human knowledge [Expert-Informed Machine Learning or Human-in-the-Loop ML].

In the past, I have also worked on Machine Learning for Health, especially in relation to mental health. I worked on mental health prediction using social media posts as a proxy (B.E. thesis), and have also worked on multimodal deception detection and on predicting empathy in human behavior.



News

  • [Sep 2020] [New!] I have been chosen as the student representative for Faculty Hiring at my university for the 2020-21 season. Looking forward to learning about and communicating the "usually hidden" processes.
  • [May 2020] [New!] Excited to be interning in the Conversation AI group at Facebook AI working on automated dialogue evaluation.
  • [Feb 2020] Our paper on Multimodal Stressed Emotion Dataset has been accepted to LREC 2020.
  • [Jan 2020] I'll be a teaching assistant for the course on Applied Machine Learning for Affective Computing with my advisor. So excited to be teaching for the first time!
  • [Dec 2019] I'll be attending NeurIPS. Shoot me an email if you want to meet and talk about anything!


Accepted Publications

  • Noise-Based Augmentation Techniques for Emotion Datasets: What do we Recommend?

    Mimansa Jaiswal, Emily Mower Provost
    Association for Computational Linguistics, Student Research Workshop (ACL-SRW) 2020

    TLDR: Emotion recognition performance degrades in the presence of noise. Multiple noise-based data augmentation approaches have been proposed to counteract this challenge in other speech domains. But, unlike speech recognition and speaker verification, the underlying label of emotion data may change when noise is added. In this work, we propose a set of recommendations for noise-based augmentation of emotion datasets, based on human and machine performance evaluation of generated realistic noisy samples using multiple categories of environmental and synthetic noise.

  • MuSE: Multimodal Stressed Emotion Dataset

    Mimansa Jaiswal, Cristian-Paul Bara, Yuanhang Luo, Rada Mihalcea, Mihai Burzo, Emily Mower Provost
    Conference on Language Resources and Evaluation (LREC) 2020

    TLDR: This paper presents a dataset, Multimodal Stressed Emotion (MuSE), to study the multimodal interplay between the presence of stress and expressions of affect. We describe the data collection protocol, the possible areas of use, and the annotations for the emotional content of the recordings.

  • Privacy Enhanced Multimodal Neural Representations for Emotion Recognition

    Mimansa Jaiswal, Emily Mower Provost
    AAAI Conference on Artificial Intelligence (AAAI) 2020

    TLDR: In this work, we show how multimodal representations trained for a primary task, here emotion recognition, can unintentionally leak demographic information, which could override a user's selected opt-out option. We analyze how this leakage differs in representations obtained from textual, acoustic, and multimodal data. We use an adversarial learning paradigm to unlearn the private information present in a representation.

  • Investigating and Tackling Privacy Concerns in Multimodal Neural Representations for Emotion Recognition

    Mimansa Jaiswal, Emily Mower Provost
    In Human Centered Machine Learning (HCML, NeurIPS workshop) 2019.
    In Privacy in Machine Learning (PriML, NeurIPS workshop) 2019.
    Abstract presentation at Women in Machine Learning Workshop, co-located with NeurIPS 2019.

  • Controlling for Confounders in Multimodal Emotion Classification via Adversarial Learning

    Mimansa Jaiswal, Zakaria Aldeneh, Emily Mower Provost
    International Conference on Multimodal Interaction (ICMI) 2019.

    TLDR: We study how stress alters acoustic and lexical emotional predictions, paying special attention to how modulations due to stress affect the transferability of learned emotion recognition models across domains.

  • Identifying Mood Episodes Using Dialogue Features from Clinical Interviews

    Zakaria Aldeneh, Mimansa Jaiswal, Emily Mower Provost
    Interspeech 2019.

    TLDR: Mental health professionals assess symptom severity through semi-structured clinical interviews. During these interviews, they observe their patients’ spoken behaviors, including both what the patients say and how they say it. In this work, we move beyond acoustic and lexical information, investigating how higher-level interactive patterns also change during mood episodes.

  • MuSE-ing on the Impact of Utterance Ordering on Crowdsourced Emotion Annotations

    Mimansa Jaiswal, Zakaria Aldeneh, Cristian-Paul Bara, Yuanhang Luo, Mihai Burzo, Rada Mihalcea, Emily Mower Provost
    International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2019.

    TLDR: Emotion expression and perception are inherently subjective. There is generally not a single annotation that can be unambiguously declared “correct.” As a result, annotations are colored by the manner in which they were collected, i.e., with or without context.

  • The PRIORI Emotion Dataset: Linking Mood to Emotion Detected In-the-Wild

    Soheil Khorram, Mimansa Jaiswal, John Gideon, Melvin McInnis, Emily Mower Provost
    Interspeech 2018.

    TLDR: This paper presents critical steps in developing a pipeline for mood state monitoring, including (a) a new in-the-wild emotion dataset, the PRIORI Emotion Dataset, (b) activation/valence emotion recognition baselines, and (c) establishing emotion as a meta-feature for mood state monitoring.

  • "Hang in there": Lexical and Visual Analysis to Identify Posts Warranting Empathetic Responses

    Mimansa Jaiswal, Sairam Tabibu, Erik Cambria
    Florida Artificial Intelligence Research Society Conference (FLAIRS) 2017.

    TLDR: Saying "You deserved it!" to "I failed the test" is not a good idea. In this paper, we propose a method supported by hand-crafted features to judge if the discourse or statement requires an empathetic response.

  • "The Truth and Nothing But The Truth": Multimodal Analysis for Deception Detection

    Mimansa Jaiswal, Sairam Tabibu, Rajiv Bajpai
    International Conference on Data Mining Workshops (ICDMW) 2016.

    TLDR: We propose a data-driven method (SVMs) for automatic deception detection in real-life trial data using visual (OpenFace) and verbal cues (Bag of Words).

  • Contextual Text-mining Approach for Teacher Feedback

    Mimansa Jaiswal, Vinshi Vanvat, Manasi Tiwari
    Extended Abstracts, Information Systems Research and Teaching 2014.

    TLDR: Most teacher review systems use word matching to assign positive and negative scores. But the sentiment of a word changes according to context. Which n-grams are common in reviews, and what sentiment do they convey?


Experience

  • FBAI

    ConversationAI, Facebook AI

    Research Intern, May 2020 - August 2020

    Mentor: Ahmad Beirami

    Topic: Defining interpretable and generalizable user satisfaction metric informed by human knowledge

  • NTU, Singapore

    SENTIC Lab, Nanyang Technological University, Singapore

    Research Intern, April 2016 - July 2016

    Mentor: Prof. Erik Cambria

    Topic: Multimodal detection of human behavior: empathy and deception

  • Academia Sinica, Taiwan

    NLP and Sentiment Analysis (NLPSA) Lab, Academia Sinica, Taipei, Taiwan

    Research Intern, December 2015 - February 2016

    Mentor: Prof. Lun-Wei Ku

    Topic: Human perception of conversation sentiment visualization in chat interfaces

  • Indraprastha Institute of Information Technology, Delhi

    IIIT Delhi, India

    Research Intern, May 2015 - July 2015

    Mentor: Prof. Pravesh Biyani

    Topic: Finding optimal routes under user-specified constraints for last-mile cab-pooling

  • Zootout


    Technical Intern, January 2015 - April 2015

    Mentor: Sumay Dubey

    Topic: Aspect Based Sentiment Analysis of reviews for better hotel and restaurant recommendation

Awards and Recognition

  • Awarded Fellowship for Year-1 of PhD program at University of Michigan [At department level]
  • Selected to attend CRA-W '18 and CRA-W '20
  • Selected to attend Grace Hopper Celebration '18 [At institute level]



Service

  • Reviewer for ICMI 2018/2019, ACII 2019, ICDMW 2016