Mohit Bansal receives NSF CAREER Award

July 22, 2019

Dr. Mohit Bansal, assistant professor of computer science at UNC-Chapel Hill and director of the UNC-NLP Lab, has received a Faculty Early Career Development (CAREER) Award from the National Science Foundation (NSF). The CAREER program is a Foundation-wide activity that offers NSF’s most prestigious awards in support of junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research within the context of the mission of their organizations.

This five-year, $450,000 grant, titled “CAREER: Semantic Multi-Task Learning for Generalizable and Interpretable Language Generation”, will support his continued research on enhancing natural language generation (NLG) models with crucial linguistic-semantic knowledge skills. These skills include logical entailment to avoid contradictory and unrelated information with respect to the input, saliency to extract the most important information subsets, and discourse structure to enforce coherent order in the generated text. The project will focus on interpretable and generalizable NLG approaches and the release of a public suite of such knowledge skills and NLG frameworks, eventually allowing the technology to be widely accessible and societally impactful via diverse real-world applications in human-robot interaction and collaboration.

Bansal joined the Department of Computer Science in 2016. Prior to joining UNC, he was a Research Assistant Professor at the Toyota Technological Institute at Chicago. He received his Bachelor of Technology in computer science and engineering from the Indian Institute of Technology Kanpur and received his Doctor of Philosophy in computer science from the University of California, Berkeley.

UNC professor’s high-tech robot promises earlier detection of lung cancer

July 19, 2019

by WRAL TechWire staff — July 19, 2019

CHAPEL HILL — Imagine a high-tech robot that can be used to help detect lung cancer before it’s too late.

It might sound like a scene from a sci-fi film. But thanks to the work of UNC Chapel Hill professor Ron Alterovitz and his cross-disciplinary team of researchers, it could soon be a reality.

With support from the National Institutes of Health, he and his group are developing a new medical robot that can enable “earlier, less invasive, and more accurate” diagnosis of lung cancer.

“The new robot has the potential to automatically curve around vasculature and other sensitive anatomical structures in the body, thereby reducing negative side effects, while safely and accurately reaching difficult-to-access nodules throughout the lung for biopsy and treatment,” UNC said in a statement.

The team includes researchers at UNC Computer Science, the UNC School of Medicine and Vanderbilt University.

Lung cancer is the most common cancer worldwide, accounting for 2.1 million new cases and 1.8 million deaths last year.

In the US, the American Cancer Society estimates:

  • About 228,150 new cases of lung cancer (116,440 in men and 111,710 in women) in 2019
  • About 142,670 deaths from lung cancer (76,650 in men and 66,020 in women) in 2019

The work hasn’t gone unnoticed.

Alterovitz recently received the Presidential Early Career Award for Scientists and Engineers (PECASE), the highest honor bestowed by the United States Government to outstanding scientists and engineers.

The White House Office of Science and Technology Policy coordinates the PECASE with over a dozen departments and agencies. Only around 100 recipients are named per year.

Alterovitz receives Presidential Early Career Award for Scientists and Engineers

July 16, 2019

Professor Ron Alterovitz was recognized with the Presidential Early Career Award for Scientists and Engineers (PECASE). The PECASE is the highest honor bestowed by the United States Government to outstanding scientists and engineers who are in the early stages of their independent research careers and who show exceptional promise for leadership in science and technology.

Recipients of the PECASE are selected for their pursuit of innovative research at the frontiers of science and technology and for their commitment to community service as demonstrated through scientific leadership, public education, or community outreach. The White House Office of Science and Technology Policy coordinates the PECASE with over a dozen departments and agencies. Only around 100 recipients are named per year.

Alterovitz was named a recipient by the White House in a press release on July 2, 2019. He was nominated for the award by the United States Department of Health and Human Services, which operates the National Institutes of Health (NIH). He is the only recipient named in the White House press release who is currently at UNC. Alterovitz is the ninth researcher from UNC to receive the award since its inception 23 years ago and the first recipient in computer science.

Alterovitz’s research focuses on robotics for medical applications. With support from NIH, Alterovitz and his research group are developing a new medical robot that can enable earlier, less invasive, and more accurate diagnosis of lung cancer. Lung cancer is currently the deadliest form of cancer in the United States, killing more Americans than breast, prostate, and colorectal cancer combined. Alterovitz is leading a cross-disciplinary team of researchers at UNC Computer Science, the UNC School of Medicine, and Vanderbilt University to create a robotic steerable needle capable of autonomously navigating to sites in the human body. The new robot has the potential to automatically curve around vasculature and other sensitive anatomical structures in the body, thereby reducing negative side effects, while safely and accurately reaching difficult-to-access nodules throughout the lung for biopsy and treatment.

The White House press release announcing winners for 2015-2017 can be found here.


How can machines, humans talk better? UNC prof, colleague land $1.5M to find out

July 13, 2019

CHAPEL HILL — Ever give a command to Amazon’s Alexa or Google Assistant, and find that some things get lost in communication?

It’s safe to say that you’re not alone.

To perform such functions, voice-activated virtual assistants rely on artificial intelligence (AI) technologies such as natural language processing and machine learning to understand what the user is saying. However, as with any emerging technology, there’s always room for improvement.

Enter Mohit Bansal, an assistant professor in the Department of Computer Science and director of the UNC-NLP Lab. He recently received a Google Focused Research Award in natural language processing, a subfield of computer science concerned with the interactions between computers and human languages.

The $1.5 million award, which he will split equally with fellow principal investigator Yoav Artzi, an assistant professor in the Department of Computer Science and Cornell Tech at Cornell University, will fund his research into spatial language understanding, analyzing how to program computers to process large amounts of natural language data. In particular, the work will be done “in interactive settings using resources that provide real-life visual input and environment configurations.”

Most research in spatial language takes place in environments with constrained mobility or visibility or limited interactivity, which limits the utility of the results. Bansal and Artzi hope that the use of real-life visual input and environment configurations will enable better study and model development of the language in real-life environments, the university announced in its release.

The Focused Research Awards program is one way Google supports a small number of multi-year research projects in areas of study that are of key interest to Google, as well as the research community. The awards are invitation-only and typically last for two to three years, and the recipients gain access to Google tools, technologies and expertise.

“These unrestricted gift awards are highly prestigious, and usually considered as a significantly larger and more selective version of the Google Faculty Award program — only a handful of professors have received these awards since its establishment nearly a decade ago in 2010,” the university said.

Identifying perceived emotions from people’s walking style

July 12, 2019

by Ingrid Fadelli, Tech Xplore

A team of researchers at the University of North Carolina at Chapel Hill and the University of Maryland at College Park has recently developed a new deep learning model that can identify people’s emotions based on their walking styles. Their approach, outlined in a paper pre-published on arXiv, works by extracting an individual’s gait from an RGB video of him/her walking, then analyzing it and classifying it as one of four emotions: happy, sad, angry or neutral.

“Emotions play a significant role in our lives, defining our experiences, and shaping how we view the world and interact with other humans,” Tanmay Randhavane, one of the primary researchers and a graduate student at UNC, told TechXplore. “Perceiving the emotions of other people helps us understand their behavior and decide our actions toward them. For example, people communicate very differently with someone they perceive to be angry and hostile than they do with someone they perceive to be calm and contented.”

Most existing emotion detection and identification tools work by analyzing facial expressions or voice recordings. However, past studies suggest that body language (e.g., posture, movements, etc.) can also say a lot about how someone is feeling. Inspired by these observations, the researchers set out to develop a tool that can automatically identify the perceived emotion of individuals based on their walking style.

“The main advantage of our perceived emotion recognition approach is that it combines two different techniques,” Randhavane said. “In addition to using deep learning, our approach also leverages the findings of psychological studies. A combination of both these techniques gives us an advantage over the other methods.”

The approach first extracts a person’s walking gait from an RGB video of them walking, representing it as a series of 3-D poses. Subsequently, the researchers used a long short-term memory (LSTM) recurrent neural network and a random forest (RF) classifier to analyze these poses and identify the most prominent emotion felt by the person in the video, choosing between happiness, sadness, anger or neutral.

The LSTM is initially trained on a series of deep features, but these are later combined with affective features computed from the gaits using posture and movement cues. All of these features are ultimately classified using the RF classifier.
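
As a rough illustration of that two-stage design (a hedged sketch, not the authors' released code), the snippet below assumes a gait has already been extracted as a sequence of 3-D joint positions: an LSTM encodes the sequence into deep features, simple hand-crafted affective cues are appended, and a random forest makes the final four-way prediction. All dimensions, feature choices, and names here are hypothetical.

```python
# Hypothetical sketch of the gait-to-emotion pipeline described above.
# Assumes gaits are already extracted as sequences of 3-D joint positions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["happy", "sad", "angry", "neutral"]

class GaitLSTM(nn.Module):
    """Encodes a (time, joints*3) pose sequence into a deep feature vector."""
    def __init__(self, pose_dim=16 * 3, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim, hidden_dim, batch_first=True)

    def forward(self, poses):                  # poses: (batch, time, pose_dim)
        _, (h_n, _) = self.lstm(poses)
        return h_n[-1]                         # (batch, hidden_dim)

def affective_features(poses):
    """Toy posture/movement cues, e.g. average joint speed and posture spread."""
    speed = np.linalg.norm(np.diff(poses, axis=0), axis=-1).mean()
    spread = poses.std(axis=0).mean()          # rough proxy for expanded posture
    return np.array([speed, spread])

def combined_features(encoder, pose_seqs):
    """Concatenate LSTM deep features with hand-crafted affective features."""
    feats = []
    with torch.no_grad():
        for seq in pose_seqs:                  # seq: (time, pose_dim) ndarray
            deep = encoder(torch.tensor(seq, dtype=torch.float32).unsqueeze(0))
            feats.append(np.concatenate([deep.squeeze(0).numpy(),
                                         affective_features(seq)]))
    return np.stack(feats)

# Usage sketch: train the random forest on the combined features.
# encoder = GaitLSTM()
# X = combined_features(encoder, train_pose_seqs)
# clf = RandomForestClassifier(n_estimators=100).fit(X, train_labels)
```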

Randhavane and his colleagues carried out a series of preliminary tests on a dataset containing videos of people walking and found that their model could identify the perceived emotions of individuals with 80 percent accuracy. In addition, their approach led to an improvement of approximately 14 percent over other perceived emotion recognition methods that focus on people’s walking style.

“Though we do not make any claims about the actual emotions a person is experiencing, our approach can provide an estimate of the perceived emotion of that walking style,” Aniket Bera, a research professor in the computer science department who supervised the research, told TechXplore. “There are many applications for this research, ranging from better human perception for robots and autonomous vehicles to improved surveillance to creating more engaging experiences in augmented and virtual reality.”

Along with Tanmay Randhavane and Aniket Bera, the research team behind this study includes Dinesh Manocha and Uttaran Bhattacharya at the University of Maryland at College Park, as well as Kurt Gray and Kyra Kapsaskis from the psychology department of the University of North Carolina at Chapel Hill.

To train their deep learning model, the researchers have also compiled a new dataset called Emotion Walk (EWalk), which contains videos of individuals walking in both indoor and outdoor settings labeled with perceived emotions. In the future, this dataset could be used by other teams to develop and train new emotion recognition tools designed to analyze movement, posture, and/or gait.

“Our research is at a very primitive stage,” Bera said. “We want to explore different aspects of the body language and look at more cues such as facial expressions, speech, vocal patterns, etc., and use a multi-modal approach to combine all these cues with gaits. Currently, we assume that the walking motion is natural and does not involve any accessories (e.g., suitcase, mobile phones, etc.). As part of future work, we would like to collect more data and train our deep-learning model better. We will also attempt to extend our methodology to consider more activities such as running, gesturing, etc.”

According to Bera, perceived emotion recognition tools could soon help to develop robots with more advanced navigation, planning, and interaction skills. In addition, models such as theirs could be used to detect anomalous behaviors or walking patterns from videos or CCTV footage, for instance identifying individuals who are at risk of suicide and alerting authorities or healthcare providers. Their model could also be applied in the VFX and animation industry, where it could assist designers and animators in creating virtual characters that effectively express particular emotions.

AMD’s SEV tech that protects cloud VMs from rogue servers may as well stand for…Still Extremely Vulnerable (feature on security work by Jan Werner and Fabian Monrose)

July 11, 2019

Evil hypervisors can work out what apps are running, extract data from encrypted guests

Five boffins from four US universities have explored AMD’s Secure Encrypted Virtualization (SEV) technology – and found its defenses can be, in certain circumstances, bypassed with a bit of effort.

In a paper [PDF] presented Tuesday at the ACM Asia Conference on Computer and Communications Security in Auckland, New Zealand, computer scientists Jan Werner (UNC Chapel Hill), Joshua Mason (University of Illinois), Manos Antonakakis (Georgia Tech), Michalis Polychronakis (Stony Brook University), and Fabian Monrose (UNC Chapel Hill) detail two novel attacks that can undo the privacy of protected processor enclaves.

The paper, “The SEVerESt Of Them All: Inference Attacks Against Secure Virtual Enclaves,” describes techniques that can be exploited by rogue cloud server administrators, or hypervisors hijacked by hackers, to figure out what applications are running within an SEV-protected guest virtual machine, even when its RAM is encrypted, and also extract or even inject data within those VMs.

This is possible, we’re told, by monitoring, and altering if necessary, the contents of the general-purpose registers of the SEV guest’s CPU cores, gradually revealing or messing with whatever workload the guest may be executing. The hypervisor can access the registers, which typically hold temporary variables of whatever software is running, by briefly pausing the guest and inspecting its saved state. Efforts by AMD to prevent this from happening, by hiding the context of a virtual machine while the hypervisor is active, can also, it is claimed, be potentially thwarted.

SEV is supposed to safeguard sensitive workloads, running in guest virtual machines, from the prying eyes and fingers of malware and rogue insiders on host servers, typically machines located off-premises or in the public cloud.

The techniques, specifically, undermine the data confidentiality model of guest virtual machines by enabling miscreants to “recover data transferred over TLS connections within the encrypted guest, retrieve the contents of sensitive data as it is being read from disk by the guest, and inject arbitrary data within the guest,” according to the study.

As a result, the paper calls into question the confidentiality promises of cloud service providers. Pulling off these techniques, in our view, is non-trivial, so if anyone does fancy exploiting these weaknesses in SEV in real-world scenarios, they’ll need to be determined and suitably resourced.

In 2016, AMD introduced two memory encryption capabilities to protect sensitive data in multi-tenant environments, Secure Memory Encryption (SME) and Secure Encrypted Virtualization (SEV). The former protects memory against physical attacks like cold boot and direct memory access attacks. The latter mixes memory encryption and virtualization, allowing each virtual machine to be protected from other virtual machines and underlying hypervisors and their admins.

Other vendors have their own secure enclave systems, like Intel SGX, which offers a different set of potential attack paths.

SEV, says AMD, protects customers’ guest VMs from one another, and from software running on the underlying host and its administrators. Whatever happens in these virtual machines should be off limits to other customers as well as the host machine’s operating system, hypervisor, and admins. However, the researchers have demonstrated that this threat model fails to ward off register inference attacks and structural inference attacks by malicious hypervisors.

“By passively observing changes in the registers, an adversary can recover critical information about activities in the encrypted guest,” the researchers explain in their paper.

A variant technique even works against Secure Encrypted Virtualization Encrypted State (SEV-ES), an extended memory protection technique that not only encrypts RAM but encrypts the guest’s virtual machine control block: this is an area of memory that stores a virtual machine’s CPU register contents when it is forced to yield to the hypervisor. This encryption should thus stop the hypervisor from making any sense of the paused VM’s context, though its contents can still be inferred, we’re told.

“We show how one can use data provided by the Instruction Based Sampling (IBS) subsystem (e.g. to learn whether an executed instruction was a branch, load, or store) to identify the applications running within the VM,” the paper says. “Intuitively, one can collect performance data from the virtual machine and match the observed behavior to known signatures of running applications.”
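
The fingerprinting intuition can be illustrated with a small sketch (not the researchers' code; the per-application signatures and sample counts below are invented for illustration): normalize the observed mix of branch, load, and store samples and pick the known application whose profile is closest.

```python
# Illustrative sketch of matching an observed instruction-type distribution,
# as sampled via IBS, against known per-application signatures.
# All numbers and application names here are hypothetical.
import numpy as np

# Hypothetical signatures: fraction of sampled instructions that were
# (branch, load, store) for each known application.
SIGNATURES = {
    "nginx":    np.array([0.22, 0.48, 0.30]),
    "openssl":  np.array([0.15, 0.55, 0.30]),
    "postgres": np.array([0.28, 0.42, 0.30]),
}

def normalize(counts):
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def best_match(observed_counts):
    """Return the known application whose signature is closest (cosine similarity)."""
    obs = normalize(observed_counts)
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(SIGNATURES.items(), key=lambda kv: cosine(obs, kv[1]))[0]

# Example: raw counts of (branch, load, store) samples observed from the guest.
print(best_match([2100, 4700, 3200]))   # closest to the "nginx" signature above
```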

To conduct their work, the boffins used a Silicon Mechanics aNU-12-304 server with dual AMD Epyc 7301 processors and 256GB of RAM, running Ubuntu 16.04 and a custom 64-bit Linux kernel v4.15. Guest VMs received a single vCPU with 2GB of RAM, running Ubuntu 16.04 with the same kernel as the host.

While the security implications of accessing encrypted data and injecting arbitrary data are obvious, even exposing what applications are running in a guest VM has potentially undesirable consequences. Service providers could use the technique for application fingerprinting and banning unwanted software; malicious individuals could conduct reconnaissance to target exploits, to develop return-oriented programming (ROP) attacks, or to undermine Address Space Layout Randomization (ASLR) defenses.

The researchers recommend the IBS subsystem be changed so that guest readings are discarded when secure encrypted virtualization is enabled.

The Register asked AMD for comment, and we’ve not heard back.

Bansal receives Google Focused Research Award in NLP

July 8, 2019

Mohit Bansal, an assistant professor in the Department of Computer Science and director of the UNC-NLP Lab, received a Google Focused Research Award in natural language processing to fund exploration of spatial language understanding.

The award is worth $1.5 million, which will be split evenly between Bansal and fellow principal investigator Yoav Artzi, an assistant professor in the Department of Computer Science and Cornell Tech at Cornell University.

The awarded project seeks to study both spatial language comprehension and generation in interactive settings using resources that provide real-life visual input and environment configurations. We use and interpret language dealing with our immediate environment every day. But most research in spatial language takes place in environments with constrained mobility or visibility or limited interactivity, which limits the utility of the results. Bansal and Artzi hope that the use of real-life visual input and environment configurations will enable better study and model development of the language in real-life environments.

The Focused Research Awards program is a means by which Google supports a small number of multi-year research projects in areas of study that are of key interest to Google as well as the research community. The awards are invitation-only and typically last for two to three years, and the recipients gain access to Google tools, technologies and expertise. These unrestricted gift awards are highly prestigious, and usually considered as a significantly larger and more selective version of the Google Faculty Award program–only a handful of professors have received these awards since its establishment nearly a decade ago in 2010.

For more information, please visit the award page.

2019 Barcamp engages and informs IT pros

June 8, 2019

Carolina Technology Consultants brought together the University’s IT professionals and sparked discussion of the latest technologies at its annual BarCamp unconference on May 23.

Participants discuss the topics they want covered at the BarCamp unconference

About 30 people attended CTC’s half-day unconference, held at the Department of Computer Science’s Sitterson and Brooks buildings. Corporate sponsors SKC Communications, Panasonic and Sonic Foundry Inc. helped make the event possible along with co-sponsors ITS, OASIS and the Department of Computer Science.

Attendees participated in nine sessions over three rooms, with an additional room dedicated to spillover discussions. This year’s BarCamp facilitated lively discussion on topics such as ServiceNow, cloud security and makerspaces.

‘Meaningful discussions’

Participants converse around a conference table at one of the breakout sessions.

“It was an enjoyable morning with other people in my department talking about our craft,” said first-time attendee Adam Lenz, Lead Web Developer at ITS Digital Services. “I enjoyed being able to have meaningful discussions about our craft.”

Lenz attended discussions about cloud security and containers. “I thought (the discussion about) containers was pretty interesting and have taken some time to learn more about them,” he said.

At a session on training and promotion in IT, participants discussed what is currently available to IT staff, perceived barriers to training and promotion, and possible customer service and interaction training.

A session about the University’s makerspaces informed attendees about where the spaces are located on campus and how 3D printers can be used at the spaces. Another popular discussion centered around ServiceNow.

Cookout wraps up event

BarCamp concluded with OASIS inviting participants of the unconference to join its annual cookout, which drew a long line of University employees hungry for hot dogs and hamburgers.

View more photos from 2019 BarCamp on ITS’ Flickr.

Alumnus Raskar gives 2019 UNC Doctoral Hooding keynote address

June 4, 2019

Alumnus Ramesh Raskar (PhD, 2003) returned to his alma mater in May to give the keynote address at the university’s Doctoral Hooding Ceremony.

Raskar’s contributions to the field have been innovative and collaborative, and they have supported solutions to global problems. His speech to doctoral graduates mirrored his work, challenging them to consider how they too can tackle some of society’s biggest problems by “thinking global” and “acting local,” and reminding them that the power of a Carolina degree comes with the responsibility to give back.

In his speech, Raskar spoke about our willingness to trust one another’s personal experiences, as well as the technology that helps us navigate specific decisions in our lives, such as searching for restaurant reviews before selecting a place to eat or determining the best driving route to avoid traffic. He asked graduates to consider what would happen if we were willing to share some of our most personal experiences, our dreams, challenges, and frustrations: what could we all learn from each other if we developed a shared “road map” to life, rather than relying only on our own “internal GPS”?

Raskar challenged graduates to think about a world where “Split AI,” a concept he is currently researching, could support us in navigating some of our most difficult societal problems. In a practical application of this “God’s eye view,” Split AI could draw on personal, private data that, once collected and interpreted, could help find solutions to problems we cannot solve on our own. He asked graduates to consider a world where everyone would have access to this knowledge and could use it to determine the appropriate “destination, route, and navigation” for solving these problems. He encouraged graduates to follow their passion while accepting the responsibility of knowing the what, how, and when, that is, the destination, route, and navigation required to achieve their dreams. Through this research by Raskar and others, AI may soon help future graduates navigate those next steps.

The department was incredibly excited and proud to welcome back Raskar, who continues to demonstrate the far-reaching power of a UNC CS degree. Thank you for sharing your wisdom with our 2019 graduates! Experience the full speech here: https://www.youtube.com/watch?v=hHV2WR7nCQk.

For more about Raskar, read the Graduate School’s preview of his keynote address.

UNC Computer Science recognizes more than 300 graduates in annual commencement

May 27, 2019
Helen Qin receives the Weiss Award for Outstanding Achievement in Computer Science from Professor Emeritus Stephen F. Weiss.
Gabriela Stein delivers her address to the Class of 2019, their family and friends, and the faculty of the Department of Computer Science.
Joshua Bakita laughs as Department Chair Kevin Jeffay reads excerpts from his nominations for the Glotzer TA Award.

The UNC Department of Computer Science celebrated its summer 2018, winter 2018, and spring 2019 graduates with an annual ceremony in Carmichael Arena. With 335 graduates, the 2019 event was the largest to date.

The department recognized 16 doctor of philosophy degrees, 23 master of science degrees, 173 bachelor of science degrees, 123 bachelor of arts degrees, and 60 undergraduate minors that had been earned over the past year. The number of undergraduate degree earners increased by more than fifty percent from 2018.

Among the undergraduate degree earners were four graduating with honors, two graduating with highest honors, 28 Phi Beta Kappa inductees, six Buckley Public Service Scholars, four Chancellor’s Science Scholars, and three Morehead-Cain Scholars.

Several graduating students and faculty members were also recognized with individual awards at the event. The undergraduate Class of 2019 honored professor Donald Porter with the Undergraduate Faculty Award for Excellence in Undergraduate Teaching. Professors Montek Singh and James Anderson were recognized by the graduate Computer Science Student Association with the CSSA Excellence in Teaching Award.

Graduating master’s student Joshua Bakita received the John M. Glotzer Teaching Assistant Award. Graduating senior Nicholas Rewkowski was recognized with the Learning Assistant Award, which was renamed in honor of retiring Director of Undergraduate Studies Diane Pozefsky. Another graduating senior, Helen Qin, was awarded the Stephen F. Weiss Award for Outstanding Achievement in Computer Science. All three students will have their names added to plaques hanging in the lower lobby of Sitterson Hall.

Morehead-Cain Scholar Gabriela Stein was selected by a vote of the undergraduate Class of 2019 to deliver a commencement speech. Stein stressed the need for her fellow students and the Department of Computer Science to continue to make inclusiveness a priority in the field. In February, The Chronicle of Higher Education ranked UNC among the top computer science departments in the nation in terms of the percentage of bachelor’s degrees awarded to women and the percentage change since 2009-2010. 

Prior to the department-wide commencement in Carmichael Arena, all graduating students were invited to a reception in Sitterson Hall and Brooks Building, where they were served cake and ice cream by computer science faculty and alumni.

Photos of each graduating student were taken at Carmichael Arena by Photo Specialties. If you would like to purchase photos from the event, please email info@photospecialties.com or call 919-967-9576.