Amitabh Varshney (M.S. 1991, Ph.D. 1994) Named Dean of the College of Computer, Mathematical, and Natural Sciences at UMD

January 24, 2018

‘Adversarial glasses’ can fool even state-of-the-art facial-recognition tech

January 11, 2018


You may have heard about so-called “adversarial” objects that are capable of baffling facial recognition systems, either making them fail to recognize an object completely or prompting them to classify it incorrectly — for example, thinking that a 3D-printed toy turtle is actually a rifle. Well, researchers at Carnegie Mellon University and the University of North Carolina at Chapel Hill have just found a practical, scalable, and somewhat scary application — anti-facial-recognition glasses.

Building on previous work by the same group from 2016, the researchers built five pairs of adversarial glasses, which can be successfully used by 90 percent of the population, making them a nearly “universal” solution. When worn, the glasses render wearers undetectable (or, as the researchers describe it, “facilitate misclassification”) even when viewed by the latest machine intelligence facial recognition tech. And far from looking like the kind of goofy disguises individuals might have worn to avoid being recognized in the past, these eyeglasses also appear completely normal to other people.

The eyeglasses were tested successfully against VGG and OpenFace deep neural network-based systems. Although the instructions for building them have not been made publicly available, the researchers say that the glasses could be 3D-printed by users.

Facial recognition has no problem identifying the Owen Wilson on the left. The one on the right? Not so much.

Whether the technology is good or bad depends largely on how you perceive facial recognition. On the one hand, it’s easy to see how privacy advocates would be excited at the prospect of glasses that can help bypass our surveillance society, in which we’re not only photographed 70 times per day, but can also be readily identified through facial recognition. (There are already examples of similar facial recognition disguises available on the market.)

On the other hand, facial recognition is frequently used to keep citizens safe by identifying potentially dangerous individuals in places like airports. For this reason, the researchers have passed on their findings to the Transportation Security Administration (TSA), and recommended that the TSA consider asking passengers to remove seemingly innocuous items like glasses and jewelry in the future, since these “physically realizable attack artifacts” could be used to beat even state-of-the-art recognition systems.

A paper describing the researchers’ work was recently published online, titled “Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition.”

Facial recognition fooling glasses could subvert TSA security

January 9, 2018

Researchers at the US’ Carnegie Mellon University and University of North Carolina at Chapel Hill developed a technique to fool facial recognition algorithms including those used at airports.


Using seemingly inconspicuous glasses, a user can trick the algorithm into producing an inaccurate reading of a person’s face. The finding prompted the researchers to present their work to the Transportation Security Administration and to recommend that the agency require people to remove glasses and jewellery to prevent the attack from being carried out, according to their study.

The researchers used adversarial generative nets (AGNs) to produce five pairs of glasses that could be used by 90 percent of the population to evade detection. Furthermore, the researchers claim these attacks can be scaled up.

The glasses deceive the software while keeping their printed texture as close as possible to real designs found online; the designs were then subjected to user scrutiny to determine whether the glasses would raise alarm under normal wear.

AI-Fooling Glasses Could Be Good Enough to Trick Facial Recognition at Airports

January 5, 2018

Adversarial objects, for your face.

Samantha Cole

In the not-too-distant future, we’ll have plenty of reasons to want to protect ourselves from facial detection software. Even now, companies from Facebook to the NFL and Pornhub already use this technology to identify people, sometimes without their consent. Hell, even our lifelines, our precious phones, now use our own faces as a password.

But as fast as this technology develops, machine learning researchers are working on ways to foil it. As described in a new study, researchers at Carnegie Mellon University and the University of North Carolina at Chapel Hill developed a robust, scalable, and inconspicuous way to fool facial recognition algorithms into not recognizing a person.

It’s gotten so good at tricking the system that the researchers made a serious suggestion to the TSA: Since facial recognition is already being used in high-security public places like airports, they’ve asked the TSA to consider requiring people to remove physical artifacts—hats, jewelry, and of course eyeglasses—before facial recognition scans.

It’s a similar concept to how UC Berkeley researchers fooled facial recognition technology into thinking a glasses-wearer was someone else, but in that study, they toyed with the AI algorithm to “poison” it. In this new paper, the researchers don’t fiddle with the algorithm they’re trying to fool at all. Instead, they rely on manipulation of the glasses to fool the system. It’s more like the 3D-printed adversarial objects developed by MIT researchers, who tricked AI into thinking a turtle was a rifle by subtly altering the object’s texture. Only this time, it’s tricking the algorithm into thinking one person is another, or not a person at all.
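To make the mechanism concrete, here is a minimal sketch of the white-box idea at work: because the attacker can compute gradients through the target network, they can optimize only the pixels under an eyeglasses-shaped mask until the classifier loses confidence in the wearer’s true identity. This is a simplified, hypothetical illustration, not the authors’ adversarial generative nets; the model, image, mask, and label are assumed to be supplied by the reader.

```python
# A minimal, hypothetical sketch (PyTorch) of a white-box "dodging" attack confined
# to an eyeglasses mask. It is NOT the paper's adversarial generative nets; it only
# illustrates optimizing the glasses region against a known face classifier.
import torch
import torch.nn.functional as F

def dodge_with_glasses(model, image, glasses_mask, true_label, steps=200, lr=0.01):
    """image: (3, H, W) tensor in [0, 1]; glasses_mask: (1, H, W) with 1s on the frames.
    Returns an image whose glasses region is perturbed so the classifier loses
    confidence in the wearer's true identity."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([true_label])

    for _ in range(steps):
        # Only pixels under the glasses mask are perturbed.
        adv = torch.clamp(image + delta * glasses_mask, 0.0, 1.0)
        logits = model(adv.unsqueeze(0))
        # Ascend the loss of the true class (negated for the minimizer).
        loss = -F.cross_entropy(logits, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.clamp(image + delta.detach() * glasses_mask, 0.0, 1.0)
```

The actual attack goes further by constraining the perturbation to resemble plausible, printable eyewear designs found online, which is the role the adversarial generative nets play.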

Making your own pair of these would be tricky: This group used a white box method of attack, which means they knew the ins and outs of the algorithm they were trying to fool. But if someone wanted to mass-produce these for the savvy privacy nut or malicious face-hacker, they’d have a nice little business on their hands. In the hypothetical surveillance future, of course.

Ming Lin Named Chair of UMD Department of Computer Science

December 14, 2017

When AI Supplies the Sound in Video Clips, Humans Can’t Tell the Difference

December 13, 2017

Which of these video soundtracks are real and which are generated by a machine?

Machine learning is changing the way we think about images and how they are created. Researchers have trained machines to generate faces, to draw cartoons, and even to transfer the style of paintings to pictures. It is just a short step from these techniques to creating videos in this way, and indeed this is already being done.

All that points to a way of creating virtual environments entirely by machine. That opens all kinds of possibilities for the future of human experience.

But there is a problem. Video is not just a visual experience; generating realistic sound is just as important. So an interesting question is whether machines can convincingly generate the audio component of a video.

Today we get an answer thanks to the work of Yipin Zhou and pals at the University of North Carolina at Chapel Hill and a few buddies at Adobe Research. These guys have trained a machine-learning algorithm to generate realistic soundtracks for short video clips.

Indeed, the sounds are so realistic that they fool most humans into thinking they are real. You can take a test yourself here to see if you can tell the difference.


Can you tell which of these video clips have real sound and which are computer generated?

The team take the standard approach to machine learning. Algorithms are only ever as good as the data used to train them, so the first step is to create a large, high-quality annotated data set of video examples.

The team create this data set by selecting a subset of clips from a Google collection called AudioSet, which consists of over two million 10-second clips from YouTube that all include audio events. These videos are divided into human-labeled categories focusing on things like dogs, chainsaws, helicopters, and so on.

To train a machine, the team must have clips in which the sound source is clearly visible. So any video that contains audio from off-screen events is unsuitable. The team filters these out using crowdsourced workers from Amazon’s Mechanical Turk service to find clips in which the audio source is clearly visible and dominates the soundtrack.

That produced a new data set with over 28,000 videos, each about seven seconds in length, covering 10 different categories.

Next, the team used these videos to train a machine to recognize the waveforms associated with each category and to reproduce them from scratch using a neural network called SampleRNN.
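As a rough illustration of that setup, the sketch below shows a frame-conditioned autoregressive network in the spirit of SampleRNN: per-frame visual features (assumed to come from a pretrained image network and upsampled to the audio rate) condition a recurrent model that predicts each quantized waveform sample from the preceding ones. This is a simplified, hypothetical sketch, not the paper’s actual architecture.

```python
# A simplified, hypothetical sketch (PyTorch) of visually conditioned waveform
# generation, not the authors' SampleRNN implementation. Visual features and
# training data are assumed to be supplied by the reader.
import torch
import torch.nn as nn

class VisualToSoundRNN(nn.Module):
    def __init__(self, visual_dim=512, hidden_dim=256, quantization=256):
        super().__init__()
        # Each 8-bit (mu-law quantized) audio sample is embedded before the RNN.
        self.sample_embed = nn.Embedding(quantization, hidden_dim)
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.rnn = nn.GRU(hidden_dim * 2, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, quantization)

    def forward(self, prev_samples, visual_features):
        # prev_samples: (batch, T) integer audio samples in [0, quantization)
        # visual_features: (batch, T, visual_dim) frame features upsampled to audio rate
        audio = self.sample_embed(prev_samples)
        visual = self.visual_proj(visual_features)
        hidden, _ = self.rnn(torch.cat([audio, visual], dim=-1))
        return self.output(hidden)  # logits over the next sample at every step

# Training would minimize cross-entropy between these logits and the true next samples;
# at generation time each sampled value is fed back in autoregressively.
```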

Finally, they tested the results by asking human evaluators to rate the quality of the sound accompanying a video and to determine whether it is real or artificially generated.

The results suggest that machines can become pretty good at this task. “Our experiments show that the generated sounds are fairly realistic and have good temporal synchronization with the visual inputs,” say Zhou and co.

And human evaluators seem to agree. “Evaluations show that over 70% of the generated sound from our models can fool humans into thinking that they are real,” say Zhou and co.

That’s interesting work that paves the way for automated sound editing. A common problem in videos is that extraneous noise from an off-screen source can ruin a clip. So having a way to automatically replace the sound with a realistic machine-generated alternative will be useful.

And with Adobe’s involvement in this research, it may not be long before we see this kind of capability in commercial video editing software.

Ref: arxiv.org/abs/1712.01393 : Visual to Sound: Generating Natural Sound for Videos in the Wild

IEEE TryCybSI Partners on Why Active Learning is Key for Mastering Cybersecurity (Q&A with Prof. Fabian Monrose)

December 6, 2017


In this interview, Fabian Monrose and Jan Werner discuss why cybersecurity education is changing and how they’re using Riposte in the classroom. (See a video demonstration of Riposte here.)

Question: When you look at all of the factors that affect cybersecurity, where would you rank the skills gap/talent shortage?

Monrose: Although cybersecurity is multidisciplinary, it requires a pretty solid foundation in computer science. You need a good grasp of operating systems, networking, compilers, and so on before you can start to specialize in security.

Jan and I have been co-teaching a class at UNC for the past few years. For the most part, the gap we’ve seen involves students not getting hands-on exposure to computer science basics. So, by the time they end up in a security class, we have to reteach them the fundamentals that they should have mastered in their foundational courses.

It’s not that those courses didn’t introduce them to the basics. Instead, it’s that within the confines of a semester, it’s very difficult for them to have gotten hands-on exercises involving those basics. The “active learning exercises” that we have been putting together are key for solidifying concepts that they learned from textbooks. Jan and I realized that unless we addressed this gap, it was very difficult to do the types of things we wanted to do, even in an introductory level systems security course.

Question: A challenge-based approach seems like an ideal way to learn and master cybersecurity skills. In fact, in your presentation at IEEE SecDev 2017, you quoted Manson and Pike: “Educating a cybersecurity professional is similar to training a pilot, an athlete, or a doctor. Time spent on the task for which the person is being prepared is critical for success.” So why didn’t cybersecurity take that approach earlier when so many other professions have shown its benefits?

Monrose: The field has been moving so quickly. As a result, the learning curve is not only steep, but it’s a very long road. So you really have to spend time on the task if you want to succeed in this area.

I do believe there’s been a shift in how we teach cybersecurity, with more universities like ours trying to address this by giving students more exposure to these hands-on types of exercises. I’m not sure what the aha moment was that caused that shift about five years ago, but it’s pretty consistent across most tier-one universities now.

Werner: There are many different areas in cybersecurity, and it’s difficult to cover it all. The hands-on approach gives students an excellent idea of what is there, and it invites them to explore more on their own. Most importantly, it gives them the very solid foundation to actually get into that.

Monrose: At UNC, some students consider the computer security courses to be the pinnacle of the classes they take in their final year. That’s because there’s now a much greater appreciation that to excel as a cybersecurity professional, you really have to get the foundations right.

I think that mindset change might have come about because we saw more people focusing on operational security, particularly due to the large number of security breaches in the news. Students are often interested in understanding why the state of cybersecurity is so poor, and to understand that, they really need exposure to the practical aspects of computer security.

If we give students these types of experiences, then they’ll be far better off as practitioners regardless of whether they go on to specialize in computer security. We were very happy to see that Brian Kirk and Rob Cunningham had the IEEE TryCybSI project trying to move the field in that direction. It became very natural for us to work on this problem together.

Question: Some hackers have a computer science background and have gone into hacking for reasons such as financial gain. Others are people who learned hacking on their own, such as the script kiddies. Is there anything we can glean from how they’re learning about the vulnerabilities that could be applied to the way the good guys are trained?

Werner: When I started my cybersecurity education quite some time ago, the amount of resources available was very limited: for the most part, magazines like Phrack were the only source of security information. Now there is a large body of literature on cybersecurity, but it still takes a lot of time and effort to deep-dive and figure out how the tools are working.

When people decide to get into computer security, there are many skills needed that don’t appear to be directly related. This is the script-kiddie phase, when one uses the tools found online without a good understanding of their inner workings. Hopefully, this phase is short-lived, and the learner then seeks a better understanding of the foundations and starts building tools of their own. The drive to understand the software, diving deep to figure out how and why things are working the way they are, is a good quality for someone who wants to succeed in the field.

Question: Give us an overview of Riposte. For example, how does it work, and why is its approach particularly effective for learning cybersecurity fundamentals? How has Riposte evolved?

Monrose: Initially the course was structured around a small set of exercises that students would do at their own pace under a two- or three-week deadline. We were looking at how well they improved with each assignment. We noticed that once some of these assignments had a challenge around them, the students started to become more engaged and a lot more creative.

In Riposte V2, we started to incorporate more of an adversarial setting, where every assignment has an attacker and a defender. Sometimes the defender was other students in the class. Other times the students were adversaries, and the instructors were the defenders.

Also, sometimes students were competing not only with one another, but against automated clients we built that would perform some of the same tasks. This approach forced them to work together to fight a common adversary. Those automated clients were designed to cheat, which forced students to figure out how to defeat an adversary who doesn’t play by any rules and thus starts with the upper hand.

Question: How are people who have learned about cybersecurity in a challenge-based environment different from, and better than, those who haven’t?

Monrose: We’ve definitely seen a tremendous improvement in the students’ ability to solve unstructured tasks after they had done a hands-on exercise in that particular subject. Time and time again, we heard from students that the challenge-based exercises really forced them to understand their own technical limitations and find ways to effectively solve the challenges on their own.

For example, one learner noted “[t]his class was by far the best computer science class that I’ve ever taken. I’ve never had a class in which the projects are so practical and applicable, the results so rewarding … The assignments were an exciting and frustrating puzzle, and though they took an enormous amount of time to complete and sometimes had me on an emotional roller coaster for days, they challenged me in ways that really improved my programming skills and forced me to think outside of the box.”

Another stated “[t]he hands-on labs provided a unique opportunity to explore learned material rather than to simply read about it. The number of hours I spent outside of class was largely due to my fascination with some of the assignments. I spent ridiculously more time than I had to on them (mostly having fun with them).”

He left his Durham startup for a Ph.D. Now he’s going back to work – for a new company.

November 29, 2017
BY ZACHERY EANES


Berg wins 2017 Helmholtz Prize

November 3, 2017

The International Conference on Computer Vision (ICCV) is the top international computer vision event, comprising the main ICCV conference and several co-located workshops and short courses. At ICCV 2017 in Venice, Italy, Professor Alex Berg won the Helmholtz Prize for the ICCV 2003 paper “Recognizing action at a distance.”

The Helmholtz Prize, formerly the “Test of Time Award”, is awarded biennially to recognize ICCV papers from at least ten years prior with significant impact on computer vision research. Berg’s paper was one of seven recognized for 2017. Berg’s co-authors on the 2003 paper were Alexei A. Efros, Greg Mori, and Jitendra Malik.

The awarded paper can be read online from Berg’s home page. For previous winners of the Helmholtz Prize, visit the ICCV Wikipedia page.

Tech companies look to universities for talent in artificial intelligence

November 1, 2017

Tech companies are pursuing artificial intelligence projects more than ever, and they’re looking at universities to recruit their new talent.

According to Dinesh Manocha, a computer science professor at UNC, artificial intelligence is an old field that has been around for more than 50 years. However, he said in an email that recent technology breakthroughs have made new and exciting applications of AI a possibility.

Manocha said these developments include increased voice recognition, automatic recognition of images and natural language processing. He said there is strong interest in developing personal robots that can perform daily chores at home, as well as semi-autonomous or autonomous cars.

According to Manocha, developments in AI and machine learning are what make products like Siri, Amazon Echo, Google Home and Google Voice work.

Morgan Vickery, a UNC junior computer science major, said in an email AI can go much further than just natural language processing — reaching into realms such as game development, education, finance, industry, medicine, customer service and transportation. She said every industry and company can benefit from the incorporation of AI.

“AI has the potential to improve company efficiency, lower physical risk to workers, lower costs and create employment opportunities,” Vickery said.

Manocha said the leading tech companies are short of talent in AI and related areas, and so they are heavily recruiting students with a strong background in this area. He said many professors are giving up their academic jobs to join the tech industry.

Luke Zettlemoyer, an AI professor at the University of Washington, is one professor who chose to turn down a job offer as a research scientist at Google. Instead, he will continue teaching AI and running a research group at the Allen Institute for Artificial Intelligence.

He said in an email his current setup allows him to keep teaching and doing research with graduate students, which he enjoys.

“It is true that some really great faculty will leave for industry — which is a shame for the students,” Zettlemoyer said. “But it also provides new opportunities for others to get hired into faculty positions and drive the next round of innovative research and teaching.”

Zettlemoyer said universities can really benefit from the industry’s impact, such as students going on to get great jobs.

According to Manocha, tech companies are already coming to UNC for AI talent.

“Many of our graduate students are heavily recruited and paid high six-digit packages,” he said.  “For example, six of my former PhD and postdocs are working in (the) Autonomous Car industry, including large companies such as Google/Waymo, Uber and many startups.”

Vickery said she is seeking jobs related to virtual, mixed and augmented reality within serious games and game development. According to Vickery, serious games are those that are used to train and teach, such as the military simulating combat situations through virtual reality.

“Computer science is already a lucrative field of study, and the demand for engineers has skyrocketed,” Vickery said. “There is such value in being technologically literate, and our society is so reliant on technology to simply function that it would be silly not to see the value in studying it.”

@keely_hendricks 

state@dailytarheel.com