Snigdha Chaturvedi
December 10, 2024

Photo by Andrew Russell, UNC Research

Imagine a world where AI can understand human emotions as well as a close friend. This is the vision driving Associate Professor Snigdha Chaturvedi’s research in natural language processing. Her goal is to align large language models (LLMs) more closely with human communication, making them more effective in applications where they interface with us.

The field of natural language processing, or NLP, is a branch of AI that helps computers understand and generate human language. Much of Chaturvedi’s work lies in the field of social NLP, where she measures and improves the ability of LLMs to grasp the subtle complexities of human communication and to convey information clearly back to humans. Machines don’t process and generate language the same way humans do, so enabling machines to generate text that is natural, helpful, and accurate can be a challenge.

“Every day, more people are interacting with AI through chatbots, virtual assistants, and automated services,” explains Chaturvedi. “It’s not enough to just make these systems functional. We need to make them actually understand people – their contexts, their real needs. Otherwise, we’re just creating sophisticated parrots.”

You have likely seen AI-generated summaries at the top of internet searches, on retail sites, or even in your email inbox. One aspect of Chaturvedi’s work seeks to make these summaries generated by LLMs more useful to humans by adding meaningful context or making the generated text clearer. This requires a computer to not only be able to read and parse the meaning and key elements of a passage, but to also be able to generate language that is concise and natural without sacrificing important information.

Chaturvedi’s work also centers on LLMs reading and writing narratives, as narrative writing is another area where AI can both assist humans and learn from us. Chaturvedi has been working on using AI to help a human author craft a narrative by making recommendations as the story is being written. But she also found that training AI on human-written narratives helps LLMs understand human behavior, because these narratives offer insight into decision-making that we typically leave unstated in daily life. Chaturvedi found that these models process language more like humans when they are made to factor in the emotions of the writers and subjects involved.

“People typically don’t write their internal justifications for the actions they take, so most social behaviors aren’t written down for an LLM to learn from,” Chaturvedi says. “It would help LLMs if we prefaced everything we write with the thought process that led us to write it, but unfortunately that context is typically not included. They are written to some extent in movies, novels, etc., so it can help for an LLM to learn about human behavior from these narratives, where it can better understand the characters’ emotional states and motivations.”

One of Chaturvedi’s projects asked an LLM to read anecdotes about interpersonal conflict and determine which person’s actions were more socially acceptable in each anecdote. This type of interaction is common in online forums, where a poster will share a story, and the audience of other users will reply to tell the poster whose actions were socially unacceptable. When the LLM was made to consider the viewpoints and emotional states of the speaker and subjects, its responses were less judgmental overall and much more closely aligned with those of humans. Chaturvedi envisions a future where AI systems could be used for interpersonal conflict resolution in the workplace or in shared living situations like college dorms. With sufficient training, they could even be used for therapy or customer service disputes. But first, Chaturvedi says, these systems need to understand not just what people say, but their mental states when they say it. The ability to process narratives through the lens of human emotion is critical to those roles.

An example of Chaturvedi's SocialGaze project determining who is at fault for a misunderstanding
The framework Chaturvedi developed to judge the actions of each character in an anecdote operates in three phases: summarization, deliberation, and verdict declaration. It assesses the perspectives of both the narrator and the opposing parties before judging social acceptability.
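As a rough illustration, a pipeline of this shape can be sketched as three chained prompts. The sketch below is a hypothetical outline, not SocialGaze’s actual implementation: the `complete` helper and the prompt wording are stand-ins for whatever model and phrasing the real framework uses.

```python
def complete(prompt: str) -> str:
    """Placeholder for a call to an LLM API of your choice."""
    raise NotImplementedError("plug in an LLM client here")


def judge_anecdote(anecdote: str) -> str:
    # Phase 1: summarization -- restate the situation from each side's
    # viewpoint, including their likely emotions and motivations.
    narrator_view = complete(
        "Summarize this anecdote from the narrator's perspective, "
        f"including their emotions and motivations:\n{anecdote}"
    )
    other_view = complete(
        "Summarize this anecdote from the other party's perspective, "
        f"including their emotions and motivations:\n{anecdote}"
    )

    # Phase 2: deliberation -- weigh the two perspectives against each other.
    deliberation = complete(
        "Given these two perspectives, discuss which actions were more or "
        "less socially acceptable and why.\n"
        f"Narrator: {narrator_view}\nOther party: {other_view}"
    )

    # Phase 3: verdict -- commit to a judgment of social acceptability.
    return complete(
        "Based on this deliberation, state whose actions, if anyone's, were "
        f"socially unacceptable, with a brief justification:\n{deliberation}"
    )
```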

As Chaturvedi discusses these potential applications, she makes it clear that there is still quite a bit of work to be done to get to that point. One major concern in the field is transparency. Some of the most widely used LLMs have been developed by companies using proprietary methods and training data, which can make it difficult to understand and anticipate their actions or correct undesired behavior.

“When a company develops a model without making their training data and procedure public, the model becomes a black box, and we don’t know why it makes certain decisions,” Chaturvedi says. “We can see that the model has learned biases, but we don’t know where those biases came from.”

For example, the project that asked a popular LLM to weigh in on interpersonal conflicts found that it exhibited noteworthy biases related to gender and age. Compared to humans, the model was more likely to assign fault to men and was more forgiving of subjects who were older, even when gender and age were not important aspects of the story. Although her testing clearly showed that bias was present, Chaturvedi emphasized that it is impossible to trace the source of that bias when the training dataset is kept secret.

AI has the potential to enhance our lives in many ways, from assisting our tasks to resolving our conflicts, and even in ways that we have not yet considered. Chaturvedi’s work will play an important role in keeping AI aligned with humans as the technology continues to develop.