Real Sound in a Virtual World

March 21, 2017

Gaming giant Valve acquires Impulsonic — a UNC-created 3-D sound simulation software company started by two PhD students and faculty within the Department of Computer Science.


Take a moment to focus on the sounds around you. Maybe you’re sitting at a desk and someone walks by in the hallway. Even though you can’t see them, you can hear their footsteps and can tell when they get closer, when they walk away, and that they’ve just passed by to your left.

We take sound cues from our environment every moment of every day and rely on them much more than we consciously realize. Virtual reality (VR) and gaming researchers are studying how to replicate the paths sound takes before reaching our ears — they call it three-dimensional spatialized sound. In order for a virtual world to be truly immersive, the sounds in that world need to give us these cues. That’s what engineers of 3-D sound simulation hope to achieve. Besides VR, these sound propagation technologies are also used to evaluate the acoustics of architectural designs and to model noise in both indoor and outdoor environments.

In 2011, PhD students Anish Chandak and Lakulish Antani approached computer science professors Ming Lin and Dinesh Manocha with the idea of building a startup around existing UNC technologies for physically based sound simulation and developing sound simulation tools for different applications. This resulted in the formation of Impulsonic — a startup supported by a Carolina Express License from the UNC Office of Commercialization and Economic Development, as well as strong backing from the Department of Computer Science, the College of Arts & Sciences, and Launch Chapel Hill, a member of the Innovate Carolina network.

Starting a company without much business experience can be challenging, Manocha points out. “It is always a bit risky in the technology space because very few startups really make it.”

This past January, gaming giant Valve, widely known for its Steam platform, purchased Impulsonic. With the acquisition, Chandak and Antani now work at Valve on Steam Audio, a direct continuation of Impulsonic’s product, formerly called Phonon. “As far as our products and team is concerned, we are with Valve now,” Antani says. “Steam Audio is just the next release of our original product at Impulsonic.”

The acquisition will provide a vast amount of resources to help the team further develop one of the best 3-D sound simulation solutions into an even better product. It also allows Valve’s enormous audience, with more than 125 million Steam users, to experience this incredibly realistic sound software. “This is a great mechanism to transition the ideas from university research into widely used commercial products,” Manocha says.

Hearing in 3-D

In current games, sound is usually layered on top of the visuals and does not interact with the world on the screen. It’s similar to the soundtrack of a movie, which doesn’t change based on where the characters are in the film or what’s happening in it. Movies like “Apocalypse Now” and “Saving Private Ryan” have attempted to change this by using stereo speaker systems that play sounds to the left or right, making the viewer feel immersed in the movie. While that can be effective, 3-D audio does more than play sounds out of different speakers; it actually manipulates sound waves so they behave as they would in the real world.

Before they reach our ears, sound waves undergo a number of changes based on the environment, the position of the sound source, the location of the listener, and even the shape of the listener’s head. Phonon uses several algorithms to apply these changes to sound sources before they reach the listener.
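
The article doesn’t detail Phonon’s algorithms, but the core mechanism behind this kind of physics-based processing is convolving a “dry” source signal with an impulse response that encodes how a particular environment reflects and absorbs sound. The sketch below is illustrative only; the hand-made impulse response stands in for one that a propagation engine would compute from scene geometry and materials.

```python
# Illustrative sketch only: convolve a dry sound with a toy room impulse
# response. This is NOT Phonon's implementation, just the general idea of
# applying environment-dependent reflections to a source signal.
import numpy as np
from scipy.signal import fftconvolve

sample_rate = 44100
t = np.arange(0, 0.5, 1 / sample_rate)

# Hypothetical dry source: a short 440 Hz tone burst.
dry = np.sin(2 * np.pi * 440 * t) * np.exp(-6 * t)

# Toy impulse response: a direct path plus a few decaying echoes. A real
# propagation engine would compute this from scene geometry and material
# absorption instead of hard-coding it.
impulse_response = np.zeros(int(0.3 * sample_rate))
impulse_response[0] = 1.0                                   # direct sound
for delay_ms, gain in [(23, 0.5), (41, 0.35), (77, 0.2)]:   # early reflections
    impulse_response[int(delay_ms / 1000 * sample_rate)] = gain

# "Wet" signal: what the listener would hear in that environment.
wet = fftconvolve(dry, impulse_response)
```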

“In my office, there is a carpet and a wall,” Manocha describes. “Imagine if my floor was all concrete and my wall replaced by all glass. Do you know how my office would sound? There would be a lot of reverberation and less absorption.” Phonon can predict these sound effects without building the physical prototypes. “We are working with many engineering and architecture design firms to design better tools for acoustic evaluation,” Manocha adds.

Manipulating sound waves so they reflect this change is called physics-based sound propagation. It also takes into account the relative positions of the listener to the sound source and is likely the reason Valve chose Impulsonic over competing 3-D sound startups, according to Antani. “On the industry side, you can probably count the number of teams that are working on this on one hand,” he says. “It’s not always something that people work on within the virtual reality realm.”

Three-dimensional sound researchers typically spend their time studying the shape of the listener’s head to determine how sound molds itself. To do this, they use a mathematical model called HRTF (head-related transfer function). This is part of a larger area of sound research called binaural rendering, which attempts to produce a form of surround sound that’s heard not only from the left and right sides of the body but also above and below.

“If you are sitting to my left as I speak, some of my sound waves will go over what’s called line-of-sight sound to your left ear directly. Then, parts of those waves bend around your head and go into your right ear,” Manocha says, adding that the latter waves produce lower frequency sound effects. The human brain can process the differences between these sounds to figure out the exact direction of the original sound source.
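
Those directional cues can be quantified. As an illustration only (the classical Woodworth approximation, not anything attributed to Phonon or Steam Audio), the interaural time difference can be estimated from the head radius and the source direction:

```python
# Rough sketch of the interaural time difference (ITD) cue described above,
# using the Woodworth approximation. Numbers are illustrative defaults,
# not values taken from Phonon or Steam Audio.
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, a typical average head radius

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate ITD in seconds for a source at the given azimuth
    (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    # Path difference: the straight-line segment plus the arc the wave
    # travels around the head to reach the far ear.
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

print(f"{interaural_time_difference(90) * 1e6:.0f} microseconds")  # ~650 microseconds
```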

In the same way prescription glasses or fingerprints are specific to an individual, HRTF varies for each listener. The current solution in games and virtual reality is to use a generic HRTF, but this means some people will hear sounds more realistically than others, depending on how closely their own HRTF matches the generic one. Antani and Chandak chose to focus more on sound propagation because a reasonably accurate generic HRTF is comparatively easy to obtain. “Once you get the HRTF to a reasonable baseline, it becomes super important to make sure that any sound you hear is coherent with whatever you are seeing,” Antani says.

Binaural rendering — and, by extension, Phonon — works best with headphones, because each side represents an ear in the sound simulation setup. With speakers, the sounds will play with the rendering, but the listener’s environment outside of the virtual world will affect how the sound reaches the ears and muddy the effect. Most virtual reality headsets come with headphones, so this is usually not an issue unless you’re listening to demos of binaurally rendered sound outside of VR.

Following a different path

The move to Valve was a natural transition for Chandak and Antani, since its structure actually resembles that of a startup. “Valve is very well-known for having a flat organizational structure, which means you don’t have bosses and such,” Antani says. “Everybody owns a piece of whatever it is they’re working on. That’s very much like a startup, except you have access to resources from a larger company.”

For technology startups, one of the best exit strategies is acquisition, according to Lin and Manocha. With the money, resources, and support of a successful company, Chandak and Antani can focus on developing the technology while leaving the bureaucratic issues to Valve. “As a CEO, you’re also worrying about the marketing and customer support, along with fundraising,” Manocha says. “It gets very challenging doing five jobs.”

In parallel to the commercial developments at Impulsonic, Lin and Manocha have continued their research in sound synthesis and propagation with other undergraduate and graduate students at Carolina. Some of their recent graduates are currently working on sound simulation research and commercial products at other leading companies such as Oculus/Facebook, Microsoft, and Google.

Lin and Manocha are very proud of their students and optimistic about the future of Phonon. With the acquisition, the likelihood that this UNC research will be adopted by the world of 3-D sound is high. “VR wasn’t hot in 2011, but Anish and Lakulish took the risk,” Manocha says. “They could have pursued high-paying industry jobs like their fellow graduate students, but instead took a much more nontraditional route. You have to give them a lot of credit for the success of Impulsonic.”

Anish Chandak received his PhD in computer science from UNC in 2011. He was the CEO of UNC startup Impulsonic, which was acquired by Valve in 2017. He began his role as senior engineer at Valve in January 2017.

Lakulish Antani received his PhD in computer science from UNC in 2013. He was the vice president of engineering for UNC startup Impulsonic and began his role as virtual reality audio engineer at Valve in November 2016.

Ming Lin is the John R. & Louise S. Parker Distinguished Professor of computer science at UNC and one of the co-leaders of the GAMMA research group.

Dinesh Manocha is a Phi Delta Theta/Mason Distinguished Professor of computer science at UNC and one of the co-leaders of the GAMMA research group.

The Vice Chancellor’s Office for Innovation, Entrepreneurship and Economic Development is led by Judith Cone, who came to UNC in 2010 after 15 years as a senior executive at the Kauffman Foundation. The office works to strengthen a collaborative and supportive ecosystem for innovation and entrepreneurship collectively referred to as Innovate Carolina. It connects resources, people, and programs with existing and emerging opportunities.

Impulsonic received funding from the National Science Foundation, the U.S. Army Research Office, and NC IDEA — and also supported the research at UNC via small business grants. Other supporters of sound simulation research within the UNC Computer Science Department include Intel, Microsoft, NVIDIA, and the Link Foundation.

Fruits Of Valve’s Impulsonic Acquisition Appear: Steam Audio Beta Launched

February 22, 2017

By Kevin Carbotte, February 23, 2017, 2:30 PM – Source: Valve

In early January, we discovered that Valve had acquired a 3D audio company called Impulsonic. Valve didn’t reveal the news, but Impulsonic’s website indicated that Valve swallowed up the company’s assets and employees.

Impulsonic made a physics-based binaural 3D audio utility called Phonon 3D that enhances 3D audio experiences by creating believable sound profiles for virtual environments. 3D audio technology is a big deal for virtual reality experiences and games, so it was no surprise that Valve saw value in Impulsonic’s technology.

Impulsonic’s employees transitioned to Valve’s headquarters in January, and they didn’t waste time getting to work. The team created the Steam Audio SDK, which Valve described as the next generation of Impulsonic’s Phonon spatial audio tools, adapted specifically for virtual reality applications.

Valve wants to give VR developers every opportunity to build amazing content with as little risk as possible. Valve already offers an open-source version of its VR hardware driver set called OpenVR, and the company licenses its Lighthouse positional tracking technology at no cost to developers. (Now the training material is free, too). The Steam Audio SDK is another royalty-free component of Valve’s growing virtual reality development ecosystem.

“Valve is always trying to advance what the very best games and entertainment can offer,” said Anish Chandak of Valve. “Steam Audio is a feature-rich spatial audio solution available to all developers, for use wherever and however they want to use it.”

The initial version of the Steam Audio SDK has native support for the Unity engine; if you’re using any other engine, you must write the integration code yourself. To that end, Valve included a C API so that you can integrate Steam Audio into other game engines and middleware.

A plugin for Unreal Engine isn’t far behind the Unity plugin, though. Epic Games plans to demonstrate Steam Audio in Unreal Engine at GDC in March.

“As a new plugin for the new Unreal Audio Engine, Steam Audio fundamentally extends its capabilities and provides a multi-platform solution to game audio developers who want to create realistic and high-quality sound propagation, reverberation modeling, and binaural spatialization for their games,” said Aaron McLeran, audio programmer at Epic Games.

Valve’s Steam Audio SDK is available today in beta form. You can find more information on Valve’s GitHub page and the Community Hub.

Pearl Hacks 2017 unites more than 300 women for overnight hackathon

February 17, 2017

Pearl Hacks 2017 brought women from all over the country to UNC-Chapel Hill in February. The event sought to address the gender imbalance in computer science and technology by providing resources and education and by building community among women in computer science.

Pearl Hacks, an annual 24-hour coding competition for women in high school and college, was held for the fourth time at UNC-Chapel Hill. Event organizers said that the number of participants this year was between 300 and 400, and many of those attendees were coding for the first time. In addition to high school and college students from around the state, participants came from schools all over the East Coast, including Virginia Tech, Georgia Tech, Maryland, Rutgers and NYU.

Founded in 2014 by computer science major Maegan Clawges as a way to support and encourage women interested in computer science, Pearl Hacks has inspired similar all-female hackathons around the country. Because the field is male-dominated, many women who enter computer science as students or professionals eventually leave for other disciplines. Women-focused events like Pearl Hacks give women the opportunity to enhance their skills and knowledge in a different environment while building lasting friendships with peers who will become their classmates and co-workers.

This year’s winning project, CrossBorder, is a blockchain-secured service to register refugees when they arrive in a new country. The service allows refugees to register their information and track checkpoints they reach as they immigrate, and an integrated feature automatically sets up a credit card in the user’s name, making it easier to send aid to individual groups of refugees in specific circumstances.

Other impressive projects from the hackathon include vaLIVEtine, an app that alerts distracted mobile users of impending collisions with motor vehicles; encore, a mobile app for locating and paying street performers; ProBoKnow, a website to connect lawyers willing to work pro bono or at discounted rates with clients who need them; and NiceType, a tool that detects negative comments on YouTube.

The event began with an opening ceremony at 10 a.m. on Saturday, February 11, and participants started hacking at 11 a.m. The weekend was full of non-hacking options, including icebreakers, a sponsor fair, tech talks and workshops. In addition to technical topics like programming languages, project management and wearable electronics, tech talks and workshops also covered life after college, diversity and inclusion in technology, and even how to win a hackathon.

Pearl Hacks 2017 was made possible by support from Capital One, Fidelity, Red Hat, Google, CapTech, Infusion, GE, SentryOne, Red Ventures, Samsung, RENCI, IBM, Ticketmaster, Make School, SAS, JPMorgan Chase & Co., Accenture, UnitedHealth Group, Credit Suisse, Interactive Intelligence, Genesys, Optum, Esri, Innovate Carolina, Cisco and Deutsche Bank.

For more information about Pearl Hacks, please visit pearlhacks.com.

AI Predicts Autism From Infant Brain Scans

February 15, 2017

By Megan Scudellari

Twenty-two years ago, researchers first reported that adolescents with autism spectrum disorder had increased brain volume. During the intervening years, studies of younger and younger children showed that this brain “overgrowth” occurs in childhood.

Now, a team at the University of North Carolina, Chapel Hill, has detected brain growth changes linked to autism in children as young as 6 months old. And it piqued our interest because a deep-learning algorithm was able to use that data to predict whether a child at high risk of autism would be diagnosed with the disorder at 24 months.

The algorithm correctly predicted the eventual diagnosis in high-risk children with 81 percent accuracy and 88 percent sensitivity. That’s pretty damn good compared with behavioral questionnaires, which yield information that leads to early autism diagnoses (at around 12 months old) that are just 50 percent accurate.

“This is outperforming those kinds of measures, and doing it at a younger age,” says senior author Heather Hazlett, a psychologist and brain development researcher at UNC.

As part of the Infant Brain Imaging Study, a U.S. National Institutes of Health–funded study of early brain development in autism, the research team enrolled 106 infants with an older sibling who had been given an autism diagnosis, and 42 infants with no family history of autism. They scanned each child’s brain—no easy feat with an infant—at 6, 12, and 24 months.

The researchers saw no change in any of the babies’ overall brain growth between the 6- and 12-month marks. But there was a significant increase in the brain surface area of the high-risk children who were later diagnosed with autism. That increase in surface area was linked to brain volume growth that occurred between ages 12 and 24 months. In other words, in autism, the developing brain first appears to expand in surface area by 12 months, then in overall volume by 24 months.

The team also performed behavioral evaluations on the children at 24 months, when they were old enough to begin to exhibit the hallmark behaviors of autism, such as lack of social interest, delayed language, and repetitive body movements. The researchers note that the greater the brain overgrowth, the more severe a child’s autistic symptoms tended to be.

Though the new findings confirmed that brain changes associated with autism occur very early in life, the researchers did not stop there. In collaboration with computer scientists at UNC and the College of Charleston, the team built an algorithm, trained it with the brain scans, and tested whether it could use these early brain changes to predict which children would later be diagnosed with autism.

It worked well. Using just three variables—brain surface area, brain volume, and gender (boys are more likely to have autism than girls)—the algorithm identified eight out of 10 kids with autism. “That’s pretty good, and a lot better than some behavioral tools,” says Hazlett.

To train the algorithm, the team initially used half the data for training and the other half for testing—“the cleanest possible analysis,” according to team member Martin Styner, co-director of the Neuro Image Analysis and Research Lab at UNC. But at the request of reviewers, they subsequently performed a more standard 10-fold analysis, in which data is subdivided into 10 equal parts. Machine learning is then done 10 times, each time with 9 folds used for training and the 10th saved for testing. In the end, the final program gathers together the “testing only” results from all 10 rounds to use in its predictions.
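
The 10-fold procedure Styner describes is standard practice. A minimal sketch with scikit-learn, using a simple classifier and toy data as stand-ins for the team’s actual deep-learning model and brain measurements, looks like this:

```python
# Minimal sketch of 10-fold cross-validation as described above, using
# scikit-learn. The classifier, features, and data are hypothetical
# stand-ins, not the UNC team's actual model or dataset.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(148, 3))        # e.g. surface area, volume, sex (toy data)
y = rng.integers(0, 2, size=148)     # 1 = later diagnosed, 0 = not (toy labels)

kfold = KFold(n_splits=10, shuffle=True, random_state=0)
held_out_predictions = np.zeros_like(y)

for train_idx, test_idx in kfold.split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    # Each sample is predicted only by a model that never saw it in training.
    held_out_predictions[test_idx] = model.predict(X[test_idx])

accuracy = (held_out_predictions == y).mean()
```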

Happily, the two types of analyses—the initial 50/50 and the final 10-fold—showed virtually the same results, says Styner. And the team was pleased with the prediction accuracy. “We do expect roughly the same prediction accuracy when more subjects are added,” said co-author Brent Munsell, an assistant professor at the College of Charleston, in an email to IEEE. “In general, over the last several years, deep learning approaches that have been applied to image data have proved to be very accurate,” says Munsell.

But, like our other recent stories on AI out-performing medical professionals, the results need to be replicated before we’ll see a computer-detected biomarker for autism. That will take some time, because it is difficult and expensive to get brain scans of young children for replication tests, emphasizes Hazlett.

And such an expensive diagnostic test will not necessarily be appropriate for all kids, she adds. “It’s not something I can imagine being clinically useful for every baby being born.” But if a child were found to have some risk for autism through a genetic test or other marker, imaging could help identify brain changes that put them at greater risk, she notes.

Free online library for UNC students celebrates 10 millionth book read

February 3, 2017

MARCO QUIROZ-GUTIERREZ | PUBLISHED 02/03/17 1:04AM

Tar Heel Reader, an online library of free books for students with disabilities, was co-founded by computer science professor Gary Bishop and Karen Erickson, director of the Center for Literacy and Disability Studies. Since its founding in 2008, the site has seen a huge increase in users — all without a single dollar spent on advertising.

Kevin Jeffay, chairperson of the computer science department, said TarHeelReader.org has seen such widespread success partly due to buzz spread by fans of the site.

“(Bishop and Erickson) built this thing, and they put it out there, and teachers started writing books, and pretty much by word-of-mouth, this thing spread across the globe,” he said.

Bishop said the site was built with a focus on children with visual impairments, but later it was used by students with other disabilities.

“Most of our users have motor impairment or cognitive impairment,” he said. “They can’t handle a conventional book.”

The collaborative project began with no funding and a goal of eventually hosting 1,000 books on the site. This goal was surpassed within months, and now, nine years later, the site hosts over 50,000 books written in 27 different languages.

Bishop said he is still in disbelief about the rate at which the site has grown.

“When we started it, I had no idea it would get this big,” he said.

He said that by incorporating an easy-to-use online form on the site itself, the creators have made it possible for anyone to write a book of his or her own and post it to the site.

“The key thing is that it enables civilians to create the books easily,” he said.

Jeffay said projects like TarHeelReader.org are exactly what the computer science department encourages its faculty to develop.

“It all fits into the model of research for this department, which is to work with others to solve real world problems,” he said.

TarHeelReader.org has encouraged students from around the world to read, including a fourth-grader named Leo who cannot hold a book on his own and uses a speech generating device to communicate.

“I like the Tar Heel Reader because it’s free!” he said over email. “My favorite book is ‘I Love Roller Coasters’!”

Erickson said she has high hopes for the future of the website.

“We’d also like more really smart young people to make more books for us because over and over again when you look at the books that are most widely read on the site, they’re always written by other young people,” she said. “And they tend to be written by young people who don’t have any disabilities at all; they just decided to write a book to make the world a better place.”


Tar Heel Reader Celebrates 10 Million Books Read (Video)

January 31, 2017

Tar Heel Reader, an online, open source library of free, easy-to-read, accessible books, recently reached 10 million books read.

The site was launched in 2008 and has averaged more than one million books read per year. Books are available in 27 languages on a wide range of topics, all accessible using standard keyboards, touch screens, assistive switches, and other assistive devices.

Hear about Tar Heel Reader from the creators themselves, Dr. Karen Erickson of the Center for Literacy and Disability Studies in the Department of Allied Health Sciences and Dr. Gary Bishop of the Department of Computer Science.

Researchers Demonstrate 100° Dynamic Focus AR Display With Membrane Mirrors

January 27, 2017

Achieving a wide field of view in an AR headset is a challenge in itself, but so too is fixing the so-called vergence-accommodation conflict which presently plagues most VR and AR headsets, making them less comfortable and less in sync with the way our vision works in the real world. Researchers have set out to try to tackle both issues using varifocal membrane mirrors.

Researchers from UNC, MPI Informatik, NVIDIA, and MMCI have demonstrated a novel see-through near-eye display aimed at augmented reality which uses membrane mirrors to achieve varifocal optics which also manage to maintain a wide 100 degree field of view.

Vergence-Accommodation Conflict

In the real world, to focus on a near object, the lens of your eye bends to focus the light from that object onto your retina, giving you a sharp view of the object. For an object that’s farther away, the light travels into your eye at different angles, and the lens must again change shape to ensure the light is focused onto your retina. This is why, if you close one eye and focus on your finger a few inches from your face, the world behind your finger is blurry. Conversely, if you focus on the world behind your finger, your finger becomes blurry. This is called accommodation.

Then there’s vergence, which is when each of your eyes rotates inward to ‘converge’ the separate views from each eye into one overlapping image. For very distant objects, your eyes are nearly parallel, because the distance between them is so small in comparison to the distance of the object (meaning each eye sees a nearly identical portion of the object). For very near objects, your eyes must rotate sharply inward to converge the image. You can see this too with our little finger trick as above; this time, using both eyes, hold your finger a few inches from your face and look at it. Notice that you see double-images of objects far behind your finger. When you then look at those objects behind your finger, now you see a double finger image.

With precise enough instruments, you could use either vergence or accommodation to know exactly how far away an object is that a person is looking at (remember this, it’ll be important later). But the thing is, both accommodation and vergence happen together, automatically. And they don’t just happen at the same time; there’s a direct correlation between vergence and accommodation, such that for any given measurement of vergence, there’s a directly corresponding level of accommodation (and vice versa). Since you were a little baby, your brain and eyes have formed muscle memory to make these two things happen together, without thinking, any time you look at anything.

But when it comes to most of today’s AR and VR headsets, vergence and accommodation are out of sync due to inherent limitations of the optical design.

In a basic AR or VR headset, there’s a display (which is, let’s say, 3″ away from your eye) which makes up the virtual image, and a lens which focuses the light from the display onto your eye (just like the lens in your eye would normally focus the light from the world onto your retina). But since the display is a static distance from your eye, the light coming from all objects shown on that display is coming from the same distance. So even if there’s a virtual mountain five miles away and a coffee cup on a table five inches away, the light from both objects enters the eye at the same angle (which means your accommodation—the bending of the lens in your eye—never changes).

That comes in conflict with vergence in such headsets which—because we can show a different image to each eye—is variable. Being able to adjust the image independently for each eye, such that our eyes need to converge on objects at different depths, is essentially what gives today’s AR and VR headsets stereoscopy. But the most realistic (and arguably, most comfortable) display we could create would eliminate the vergence-accommodation issue and let the two work in sync, just like we’re used to in the real world.

Eliminating the Conflict

To make that happen, there needs to be a way to adjust the focal power of the lens in the headset. With traditional glass or plastic optics, the focal power is static and determined by the curvature of the lens. But if you could adjust the curvature of a lens on-demand, you could change the focal power whenever you wanted. That’s where membrane mirrors and eye-tracking come in.
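
A simplified way to see why deforming a mirror changes focus (the paper’s actual membrane optics are more involved): for an ideal spherical mirror the focal length is half the radius of curvature, so pulling the membrane into a deeper or shallower curve directly sweeps the focal power.

```python
# Simplified relationship only; the paper's membrane optics are more complex.
# For an ideal spherical mirror, focal length f = R / 2, so focal power
# P = 1 / f = 2 / R diopters when R is measured in meters.
def focal_power_diopters(radius_of_curvature_m: float) -> float:
    return 2.0 / radius_of_curvature_m

# Deepening the membrane's curve (smaller R) increases the focal power:
for radius_m in (4.0, 2.0, 1.0, 0.5):   # hypothetical curvatures
    print(f"R = {radius_m} m  ->  {focal_power_diopters(radius_m):.1f} D")
```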

In a soon-to-be-published paper titled Wide Field Of View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors, researchers demonstrated how they could use mirrors made of deformable membranes inside of vacuum chambers to create a pair of varifocal see-through lenses, forming the foundation of an AR display.

The mirrors are able to set the accommodation depth of virtual objects anywhere from 20 cm to (optical) infinity. The response time of the lenses between that minimum and maximum focal power is 300 ms, according to the paper, with smaller changes in focal power happening faster.

But how to know how far to set the accommodation depth so that it’s perfectly in sync with the convergence depth? Thanks to integrated eye-tracking technology, the apparatus is able to rapidly measure the convergence of the user’s eyes, the angle of which can easily be used to determine the depth of anything the user is looking at. With that data in hand, setting the accommodation depth to match is as easy as adjusting the focal power of the lens.
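
The geometry behind that step is straightforward. As an illustration (not the paper’s code), the fixation depth follows from the interpupillary distance and the measured convergence angle, and that depth is what the varifocal optics would then be set to match:

```python
# Illustrative geometry only (not the paper's implementation): convert a
# measured vergence angle into a fixation depth, which is what the
# varifocal optics would then be driven to match.
import math

def fixation_depth_m(vergence_angle_deg: float, ipd_m: float = 0.063) -> float:
    """Depth of the point both eyes converge on, assuming symmetric gaze.
    ipd_m is the interpupillary distance (0.063 m is a typical adult value)."""
    half_angle = math.radians(vergence_angle_deg) / 2
    return (ipd_m / 2) / math.tan(half_angle)

# Nearly parallel eyes imply a far object; strongly converged eyes, a near one.
print(fixation_depth_m(1.0))    # ~3.6 m
print(fixation_depth_m(15.0))   # ~0.24 m
```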

Those of you following along closely will probably see a potential limitation to this approach—the accommodation depth can only be set for one virtual object at a time. The researchers thought about this too, and proposed a solution to be tested at a later date:

Our display is capable of displaying only a single depth at a time, which leads to incorrect views for virtual content [spanning] different depths. A simple solution to this would be to apply a defocus kernel approximating the eye’s point spread function to the virtual image according to the depth of the virtual objects. Due to the potential of rendered blur not being equivalent to optical blur, we have not implemented this solution. Future work must evaluate the effectiveness of using rendered blur in place of optical blur.
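
As a rough illustration of what that proposed (and, per the authors, untested) rendered blur could look like, the sketch below blurs a virtual object by an amount that grows with its dioptric distance from the currently focused depth, using a Gaussian kernel as a crude stand-in for the eye’s point spread function; the scaling factor is made up.

```python
# The paper leaves this unimplemented; this sketch only illustrates the idea:
# blur each virtual object by an amount that grows with its distance (in
# diopters) from the currently focused depth, using a Gaussian kernel as a
# crude stand-in for the eye's point spread function.
import numpy as np
from scipy.ndimage import gaussian_filter

def rendered_defocus(image: np.ndarray, object_depth_m: float,
                     focus_depth_m: float, blur_per_diopter: float = 2.0):
    """Blur an object's image based on how far its depth is (in diopters)
    from the depth the display is currently focused at."""
    defocus_diopters = abs(1.0 / object_depth_m - 1.0 / focus_depth_m)
    sigma = blur_per_diopter * defocus_diopters   # hypothetical scaling
    return gaussian_filter(image, sigma=sigma)

# An object 0.3 m away, while the display is focused at 2 m, gets heavy blur;
# an object at 2 m passes through essentially unchanged.
layer = np.random.rand(64, 64)
blurred = rendered_defocus(layer, object_depth_m=0.3, focus_depth_m=2.0)
```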

Other limitations of the system (and possible solutions) are detailed in section 6 of the paper, including varifocal response time, form-factor, latency, consistency of focal profiles, and more.

Retaining a Wide Field of View & High Resolution

But this isn’t the first time someone has demonstrated a varifocal display system. The researchers identified several other varifocal display approaches, including free-form optics, light field displays, pinlight displays, pinhole displays, multi-focal plane display, and more. But, according to the paper’s authors, all of these approaches make significant tradeoffs in other important areas like field of view and resolution.

And that’s what makes this novel membrane mirror approach so interesting—it not only tackles the vergence-accommodation conflict, but does so in a way that allows a wide 100 degree field of view and retains a relatively high resolution, according to the authors. In the paper’s comparison of the varifocal approaches the researchers identified, every other large-FOV approach results in a low angular resolution (and vice versa); their solution is the exception.

– – — – –

This technology is obviously at a very preliminary stage, but its use as a solution for several key challenges facing AR and VR headset designs has been effectively demonstrated. And with that, I’ll leave the parting thoughts to the paper’s authors (D. Dunn, C. Tippets, K. Torell, P. Kellnhofer, K. Akşit, P. Didyk, K. Myszkowski, D. Luebke, and H. Fuchs.):

Despite few limitations of our system, we believe that providing correct focus cues as well as wide field of view are most crucial features of head-mounted displays that try to provide seamless integration of the virtual and the real world. Our screen not only provides basis for new, improved designs, but it can be directly used in perceptual experiments that aim at determining requirements for future systems. We, therefore, argue that our work will significantly facilitate the development of augmented reality technology and contribute to our understanding of how it influences user experience.

CS major Benjamin Kompa named Churchill Scholar

January 27, 2017

Benjamin Kompa, a fourth-year student at the University of North Carolina at Chapel Hill, has been named a recipient of the prestigious Churchill Scholarship, a research-focused award that provides funding to outstanding American students for a year of master’s study in science, mathematics and engineering at Churchill College, based at the University of Cambridge in England.

Kompa is one of only 15 selected for the award, which not only requires exemplary academic achievement but also seeks those with proven talent in research, extensive laboratory experience and personal activities outside of academic pursuits, especially in music, athletics and social service. He is Carolina’s 17th Churchill Scholar.

“Receiving a Churchill Scholarship is an incredible opportunity for a young scholar and Benjamin is so deserving of this prestigious award,” said Chancellor Carol L. Folt. “He is focused on applying his significant skills in computer science and statistics to solve challenging, global biomedical problems. We are very pleased for Benjamin and know his studies at Cambridge will help pave the way for him to make life-changing impacts in the fields of computational biology and bioinformatics.”

Kompa, 22, is a native of Columbus, Ohio, and plans to graduate from Carolina this May with a double major in mathematics and computational science and a minor in biology from the College of Arts & Sciences. He is a Colonel Robinson Scholar, a Phi Beta Kappa member and an Honors Carolina student, and he has worked in biology labs since high school. Kompa is also a two-time national champion bridge player who, upon request from the World Bridge Federation, successfully investigated cheating in bridge using computer methods.

He has worked in the same lab since his first year at UNC-Chapel Hill, where he learned biology lab techniques and conducted computational research to model chromosomes. His research has been pioneering in its exploration of new models and approaches, emphasizing lasting impacts over quickly publishing papers. Kompa also spent a summer at Harvard Medical School studying methods of artificial intelligence neural networks and applied them to analyzing MRIs.

Deeply interested in applying the techniques of computer science and statistics to biomedical problems, Kompa plans to use the Churchill Scholarship to pursue a Master of Philosophy in computational biology in the department of applied math and theoretical physics, and conduct research on disease comorbidities with Dr. Pietro Lio. He then hopes to pursue a Ph.D. and career in research in bioinformatics.

“We are thrilled to see the Churchill go to such an exceptional and worthy student,” said Inger Brodey, director of Carolina’s Office of Distinguished Scholarships.

The Churchill started in 1963 with three awards and has since grown to an average of 14 awards per year. The scholarship was set up at the request of Sir Winston Churchill in order to fulfill his vision of U.S.-U.K. scientific exchange, with the goal of advancing science and technology on both sides of the Atlantic and helping to ensure future prosperity and security. There have now been approximately 500 Churchill Scholars. This is the third year in a row that a UNC student has been awarded the Churchill.


Politics Aside, Counting Crowds Is Tricky

January 23, 2017

Heard on All Things Considered

Jon Hamilton

There has been a lot of arguing about the size of crowds in the past few days. Estimates for President Trump’s inauguration and the Women’s March a day later vary widely.

And for crowd scientists, that’s pretty normal. “I think this is expected,” says Mubarak Shah, director of the Center for Research in Computer Vision at the University of Central Florida. Shah says he encountered something similar during mass protests in Barcelona, Spain a couple of years ago.

“The government was claiming smaller number than the opposition was claiming,” he says.

Counting quarrels have popped up during previous events in the U.S. as well. During the Million Man March in 1995, the National Park Service estimated the crowd to be far smaller than the organizers claimed. The controversy led Congress to bar the Park Service from doing head counts on the National Mall.

The reason that disagreements frequently arise is that there’s no foolproof way to get an accurate head count of a large crowd.

Decades ago, crowd estimates were done by people who simply looked at photographs of an event. They would count the number of people in one small area of a photo, then extrapolate that number to estimate the entire field of view.

This method was inaccurate, though, in part because some areas might have lots of people packed together, while others would have just a few people with large spaces between them.
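
To see why, the old approach amounts to a single density sample scaled up by the total area; with made-up numbers, two equally plausible sample patches can yield estimates that differ tenfold:

```python
# Toy illustration of the old extrapolation method and why it fails when
# density varies. All numbers are made up for illustration.
def extrapolate_count(people_in_sample: int, sample_area_m2: float,
                      total_area_m2: float) -> float:
    """Assume the sampled density holds everywhere and scale it up."""
    density = people_in_sample / sample_area_m2
    return density * total_area_m2

total_area = 50_000.0   # hypothetical event area in square meters

# Sampling a tightly packed patch (2 people per square meter) versus a
# sparse one (0.2 people per square meter) changes the estimate tenfold:
print(extrapolate_count(200, 100.0, total_area))   # 100,000
print(extrapolate_count(20, 100.0, total_area))    #  10,000
```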

Computers have improved counting somewhat. They don’t suffer fatigue the way humans do, and a computer doesn’t have any political bias, Shah says.

But even computers have limits, says Dinesh Manocha of the University of North Carolina at Chapel Hill. They have no problem sorting a few people who aren’t packed together. But when you have big crowds, like those seen across the country in the past few days, it gets tricky.

“When it’s more than 100,000, we just can’t estimate right. We don’t have an answer today,” he says.

It often comes down to image resolution. Manocha says even professional cameras only capture about 40 million pixels. So if there are one million people, each person will appear as a 40-dot smudge.

A company called Digital Design & Imaging Service is actually trying to make an estimate of attendance at the Women’s March.

They used high-resolution cameras attached to a tethered balloon to take photos of the marchers. Even with his high-tech surveillance system, Curt Westergard, the company’s president, says he doesn’t expect to get a precise figure. Clouds meant the company couldn’t supplement their own photos with satellite images. And the number of people changed constantly throughout the day.

“Our main goal really on this [is] just to ascertain a rough order of magnitude,” he says. “So if somebody says a million vs. 100,000 we can easily prove one or the other.”

Westergard says the company’s head count should be out by the end of the week. The firm will also share its raw data, so that others can try to make their own estimates.

“We can and do make all of our data transparent. We put it online,” he says. “If you don’t like what we said, count it yourself, and here’s the data.”

Profiles in Computing: Tanya Amert

December 15, 2016



By Shar Steed, CRA Communications Specialist

Tanya Amert, a computer science Ph.D. student at University of North Carolina, Chapel Hill, found herself drawn to computer science because she enjoyed figuring out how things work. At 13 years old, she was a big fan of the Neopets website and online community. Amert noticed some users had customized homepages, and her interest grew even more. Despite not knowing any HTML at the time, she learned how to look at the source code and figured out how to change the color of the scroll bar within the CSS. “I discovered that specific lines of HTML made that happen. And I thought that was mind boggling and awesome.”

Computer science was not offered at her high school, so as a freshman at MIT, she enrolled in her first programming class and it “completely clicked” for her. After graduating, Amert spent three years in industry working at Microsoft. But she began to feel like the projects she was really excited about were coming out of Microsoft Research. So, she got in touch with a contact there, who told her that if she really wanted to be working on the cutting edge of research, a Ph.D. was needed. Amert then felt that a Ph.D. would open more doors than it would close, and began applying to Ph.D. programs.

Amert’s specialty is in computer graphics, specifically physically-based simulations. She first took an interest in cloth simulation after watching extra features on a Shrek DVD. “I was so fascinated that they had these tools to model characters and improve the visualization.” It inspired her to take a computer graphics class in her junior year of college, and this is the focus of her Ph.D. research.

In undergrad, Amert did not participate in many women in computer science activities because of her heavy course load. But after experiencing some isolation in the working world, she returned to school with a personal commitment to become more active in the community. At Microsoft, Amert would often be one of only two female engineers in a room of 15-20 people, and began to feel the disparity. So when she was invited to speak at CRA-Women’s Virtual Undergraduate Town Hall (VUTH) this summer, Amert gladly accepted and shared her experiences with the participants. VUTH events are webinar sessions designed to give students the opportunity to learn more about a specific discipline in computer science and also ask the host and speaker mentoring questions to help them prepare for graduate school.

During the webinar, Amert conducted a research presentation titled, “Accelerated Cloth Simulation for Virtual Try-On.” She described the hosting experience as both “intimidating” and “exciting”. It was intimidating to know that the audience was tuning in from around the globe and that she may influence the trajectory of a young person’s career. It was exciting because the participants are on the cusp of a big life step, and Amert vividly remembers her experiences applying to graduate school. “It was also really motivating to be able to share my insights with other people because I’ve already been through the experience.”

Presenting at the webinar also helped her practice explaining her research at a high level to a broader audience. She presented the same set of slides to her mother, who doesn’t have a technical background, to help her mother understand specifically what her research is about.

Despite her successes, Amert also battles feelings of imposter syndrome. To combat this, one thing she finds useful, especially when she starts to feel discouraged or like she doesn’t belong, is to focus on her positive outcomes. Amert was previously a tutor and kept her course evaluations, so she often looks back at her positive reviews when she gets discouraged.

Profiles in Computing
Part of the mission of the Computing Research Association (CRA) is to mentor and cultivate the talent development of computing researchers at all levels. Several programs led by the Committee on the Status of Women in Computing Research (CRA-W) focus on increasing gender diversity in computing. This new column, “Profiles in Computing,” showcases successful women in computing, who donate their time and energy to mentoring future generations and strengthening the community of female computing researchers through CRA-W initiatives.