
UNC Symposium on AI and Society

April 25 – April 26

The UNC AI Project invites researchers to explore the intersection of artificial intelligence and philosophy at the Symposium on AI and Society, held April 25 & 26. This event features discussions on Large Language Models (LLMs), ethical AI considerations, and the impact of AI technologies, led by experts across disciplines including computer science and philosophy. Join us for a review of the latest in AI research and contribute to the dialogue on advancing AI’s role in society.

Featuring invited speakers Kathleen Creel, Josh Dever, Peter Hase, Brenden Lake, Bo Li, Maarten Sap, Jana Schaich Borg, Munindar P. Singh, and Walter Sinnott-Armstrong

Organized by the Departments of Computer Science and Philosophy with support from the School of Data Science and Society

Program:

Thursday, April 25
  • 9:00 – 9:15 | Welcome (Bansal, Hofweber)
  • 9:15 – 10:30 | Josh Dever (UT Austin) “(When) Is It What’s On The Inside That Counts? Balancing Internalist and Externalist Considerations in LLM Metasemantics”
  • 10:30 – 10:45 | Break
  • 10:45 – 12:00 | Munindar P. Singh (NC State) “New Models for Trustworthy AI, Norm Deviation, and Consent in Responsible Autonomy”
  • 12:00 – 2:00 | Lunch Break
  • 2:00 – 3:15 | Jana Schaich Borg and Walter Sinnott-Armstrong (Duke) “Moral AI and How We Get There”
  • 3:15 – 3:30 | Break
  • 3:30 – 4:45 | Bo Li (Chicago) “Risk Assessment, Safety Alignment, and Guardrails for Generative Models”
Friday, April 26
  • 9:15 – 10:30 | Maarten Sap (CMU) “Artificial Social Intelligence? On the challenges of Socially Aware and Ethically informed LLMs”
  • 10:30 – 10:45 | Break
  • 10:45 – 12:00 | Peter Hase (UNC) “Toward Safe LLMs: Interpretability, Model Editing, and Scalable Oversight”
  • 12:00 – 2:15 | Lunch Break
  • 2:15 – 3:30 | Kathleen Creel (Northeastern) “Algorithmic Monoculture and the Ethics of Systemic Exclusion”
  • 3:30 – 3:45 | Break
  • 3:45 – 5:00 | Brenden Lake (NYU) “Addressing two classic debates in cognitive science with AI”

Invited Talk Information

Algorithmic Monoculture and the Ethics of Systemic Exclusion

Kathleen Creel

Mistakes are inevitable, but fortunately human mistakes are typically heterogeneous. Using the same machine learning model for high-stakes decisions creates consistency while amplifying the weaknesses, biases, and idiosyncrasies of the original model. When the same person re-encounters the same model, or models trained on the same dataset, she might be wrongly rejected again and again. Thus algorithmic monoculture could lead to consistent ill-treatment of individual people by homogenizing the decision outcomes they experience.

Is it wrong to allow the quirks of an algorithmic system to consistently exclude a small number of people from consequential opportunities? Many philosophers have claimed, or indicated in passing, that consistent and arbitrary exclusion is wrong, even when it is divorced from bias or discrimination. But why it is wrong, and under what circumstances, has not yet been established. This talk will formalize a measure of outcome homogenization, describe experiments that demonstrate that it occurs, and then present an ethical argument for why and in what circumstances outcome homogenization is wrong.


(When) Is It What’s On The Inside That Counts? Balancing Internalist and Externalist Considerations in LLM Metasemantics

Josh Dever

Addressing two classic debates in cognitive science with AI

Brenden Lake

I’ll describe two case studies in using deep neural networks to address classic debates in cognitive science, both of which have implications for the nature of human intelligence:

1) What ingredients do children need to learn early vocabulary words? How much is learnable from sensory input with relatively general neural networks, and how much requires stronger inductive biases (e.g., innate knowledge, domain-specific constraints, social reasoning)? Using head-mounted video recordings from a single child (61 hours of video slices over 19 months), we show how deep neural networks can acquire many word-referent mappings, generalize to novel visual referents, and achieve multi-modal alignment. These results demonstrate that critical aspects of word meaning are learnable without strong inductive biases.

2) Can neural networks capture human-like systematic generalization? We address a 35-year-old debate catalyzed by Fodor and Pylyshyn’s classic article, which argued that standard neural networks are not viable cognitive models because they lack systematic compositionality — the algebraic ability to understand and produce novel combinations from known components. We’ll show how a neural network can achieve human-like systematic generalization when trained through meta-learning for compositionality (MLC), a new method for optimizing the compositional skills of neural networks through practice. With MLC, a neural network can match human performance and inductive biases in a head-to-head comparison on artificial language learning.

These findings emphasize the power of neural network learners, even when trained on child-scale data, and their increasing capability for addressing longstanding issues in cognitive science.


Risk Assessment, Safety Alignment, and Guardrails for Generative Models

Bo Li

Large language models (LLMs) have garnered widespread attention due to their impressive performance across a range of applications. However, our understanding of the trustworthiness and risks of these models remains limited. The temptation to deploy proficient foundation models in sensitive domains like healthcare and finance, where errors carry significant consequences, underscores the need for rigorous safety evaluation, enhancement, and guarantees. Recognizing the urgent need to develop safe and beneficial AI, our recent research seeks to design a unified platform to evaluate the safety of LLMs from diverse perspectives such as toxicity, stereotype bias, adversarial robustness, out-of-distribution (OOD) robustness, ethics, privacy, and fairness; to enhance LLM safety through knowledge integration; and to provide safety guardrails and certifications. In this talk, I will first outline our foundational principles for safety evaluation, detail our red-teaming tactics, and share insights gleaned from applying our DecodingTrust platform to different models, including proprietary, open-source, and compressed models. Further, I will delve into our methods for enhancing model safety, such as hallucination mitigation. I will also explain how knowledge integration helps align models and show that the RAG framework achieves provably lower conformal generation risks than vanilla LLMs. Finally, I will briefly discuss our robust guardrail framework for risk mitigation in practice.


Artificial Social Intelligence? On the challenges of Socially Aware and Ethically informed LLMs

Maarten Sap

Modern AI systems such as LLMs are pervasive and helpful, but do they really have the social intelligence to seamlessly and safely engage in interactions with humans? In this talk, I will delve into the limits of social intelligence of LLMs and how we can measure and anticipate their risks.

First, I will introduce Sotopia, a new social simulation environment to evaluate the interaction abilities of LLMs as social AI agents, showing that even today’s most powerful models struggle to achieve social goals in interactions.

Then, I will shift to how LLMs pose new ethical challenges in their interactions with users. Specifically, through their language modality and possible expressions of uncertainty, we show that LLMs tend to express overconfidence in their answers even when they are incorrect, and that users tend to over-rely on these overconfident answers.

Finally, I will introduce ParticipAI, a new framework to anticipate future AI use cases and dilemmas. Through our framework, we show that lay users can help us anticipate the benefits and harms of allowing or not allowing an AI use case, paving the way for more democratic approaches to AI design, development, and governance.

I will conclude with some thoughts on future directions towards socially aware and ethically informed AI.


Moral AI and How We Get There

Jana Schaich Borg and Walter Sinnott-Armstrong

Artificial intelligence (AI) is entering more and more areas of our lives. Each new application raises pressing ethical issues. AI pessimists are worried about potential abuses. AI optimists are hopeful about potential benefits. Both are correct, in our view. To show why, we will survey some of the good news and the bad news about safety, privacy, justice, and responsibility in AI. Then we will propose ways to make AI more moral by building human morality into AI systems and AI companies. This talk summarizes the main points of our recent book with Vincent Conitzer.


Details

Start: April 25
End: April 26
Website: https://aip.unc.edu/events/

Venue

141 Brooks Building
201 S. Columbia Street
Chapel Hill, NC 27599-3175, United States