
Computational Robotics Research Group

PI: Ron Alterovitz
We develop new algorithms and investigate new robot designs to enable physicians to provide better medical care and to assist people in their homes. To enhance robot autonomy and ease of use, we focus on the computational challenge of motion planning: computing actions that will guide a robot around obstacles to accomplish a task, such as reaching a tumor inside the body or cleaning a table. Our algorithms compensate for uncertainty, learn from human experts, integrate data from diverse sources, leverage the power of the cloud, and provide guarantees on safety.

Multimodal Understanding, Reasoning, and Generation for Language Lab (MURGe-Lab)

PI: Mohit Bansal
Our MURGe-Lab (Multimodal Understanding, Reasoning, and Generation for Language Lab; pronounced as “merge-lab”) has research interests in statistical natural language processing and machine learning, with a focus on multimodal, grounded, and embodied semantics (i.e., language with vision and speech, for robotics), human-like language generation and Q&A/dialogue, and interpretable and structured deep learning. We are a group of PhD, MS, BS, and visiting students who work with Prof. Mohit Bansal and collaborators in the Computer Science department at the University of North Carolina (UNC) Chapel Hill.

Bertasius-Lab

PI: Gedas Bertasius
Our research interests are in computer vision and machine learning. In particular, we are interested in the topics of video understanding, human behavior modeling, multi-modal deep learning, and transfer learning.

Chaturvedi-Lab

PI: Snigdha Chaturvedi
Our lab focuses on Natural Language Processing, with an emphasis on narrative-like and socially aware understanding, summarization, and generation of language.

UNITES Lab

PI: Tianlong Chen
Our research interests span artificial intelligence (AI), machine learning (ML), optimization, computer vision, natural language processing, and data science, with two major focuses: (A) establishing robust and efficient AI systems and (B) bridging the gap between AI and societal & scientific challenges.

UNC Graphics & Virtual Reality Group

PI: Henry Fuchs
Our Graphics and Virtual Reality group research includes 3D scene acquisition & reconstruction, 3D tracking, fast rendering hardware and algorithms, autostereoscopic 3D displays, head-mounted and other near-eye displays, telepresence, and medical applications.

LUPA Lab

PI: Junier Oliva
We aim to understand what makes data tick and to make sense of data at an aggregate, holistic level. The LUPA Lab uses techniques ranging from modern deep learning architectures to nonparametric statistics to make strides in areas such as high-dimensional density estimation and modeling, sequential modeling and RNNs, and learning over complex or structured data.

Sengupta-Lab

PI: Roni Sengupta
Our research lies at the intersection of Computer Vision and Computer Graphics, centered mainly on 3D Vision and Computational Photography. We are particularly interested in solving Inverse Graphics problems, where the goal is to decompose an image into its intrinsic components (e.g., geometry, material reflectance, lighting, alpha matte). We solve Inverse Graphics problems to enable next-generation video communication and content creation by democratizing high-quality video production and 3D capture.

Srivastava-Lab

PI: Shashank Srivastava
Our lab focuses on Natural Language Processing, with special interests in grounding language in perception, neuro-symbolic methods, and interactive machine learning.

VisuaLab

PI: Danielle Szafir
The University of North Carolina’s VisuaLab is an interdisciplinary research laboratory headed by Dr. Danielle Albers Szafir. Situated within the Department of Computer Science at UNC and also affiliated with the University of Colorado’s ATLAS Institute, the VisuaLab explores the intersection of data science, visual cognition, and computer graphics. Our goal is to understand how people make sense of visual information to create better interfaces for exploring and understanding information. We work with scholars from psychology to biology to the humanities to design and implement visualization systems that help drive innovation. Our ultimate mission is to facilitate the dialog between people and technologies that leads to discovery.

Yao-Lab

PI: Huaxiu Yao
Our research focuses on both the theoretical and applied aspects of building reliable and responsible foundation models (e.g., LLMs, VLMs, Diffusion Models). Additionally, we are keen on utilizing these models to facilitate diverse scientific and social applications, including healthcare, drug discovery, genomics, transportation, and ecology.