
12 January 2015

Speaker: William T. Freeman, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology
Title: The World of Tiny Motions
Host School: Duke
Host: Carlo Tomasi


We have developed a “motion microscope” to visualize small motions by synthesizing a video with the desired motions amplified. The project began as an algorithm to amplify small color changes in videos, allowing color changes from blood flow to be visualized. Modifications to this algorithm allow small motions to be amplified in a video. I’ll describe the algorithms, and show color-magnified videos of adults and babies, and motion-magnified videos of throats, pipes, cars, smoke, and pregnant bellies. These algorithms are being used in biological, civil, and mechanical engineering applications.
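The core Eulerian idea behind the motion microscope can be sketched in a few lines: bandpass-filter each pixel's intensity over time, amplify that band, and add it back. This is a minimal illustration only, not the authors' implementation (which uses spatial pyramids, phase-based representations, and carefully designed temporal filters); the function name and parameters here are illustrative.

```python
import numpy as np

def magnify(frames, fps, f_lo, f_hi, alpha):
    """Eulerian-style magnification sketch: bandpass each pixel's
    intensity over time, amplify that band, and add it back."""
    frames = np.asarray(frames, dtype=float)   # shape (T, H, W)
    T = frames.shape[0]
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    spectrum = np.fft.rfft(frames, axis=0)     # FFT along the time axis
    band = (freqs >= f_lo) & (freqs <= f_hi)   # ideal temporal bandpass
    spectrum[~band] = 0.0
    filtered = np.fft.irfft(spectrum, n=T, axis=0)
    return frames + alpha * filtered           # amplified band added back

# Tiny synthetic clip: a 1 Hz intensity flicker of amplitude 0.01
# (e.g., a subtle color change), amplified 50x.
fps, T = 30, 60
t = np.arange(T) / fps
flicker = 0.01 * np.sin(2 * np.pi * 1.0 * t)[:, None, None]
frames = 0.5 + flicker * np.ones((T, 4, 4))
out = magnify(frames, fps, f_lo=0.5, f_hi=2.0, alpha=50.0)
```

The invisible 0.02-range flicker in `frames` becomes an obvious oscillation in `out`, which is the essence of making blood-flow color changes visible.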

Having this tool led us to explore other vision problems involving tiny motions. I’ll describe recent work in analyzing fluid flow and depth by exploiting small motions in video or stereo video sequences caused by refraction of turbulent air flow (joint work with the authors below and Tianfan Xue, Anat Levin, and Hossein Mobahi). We have also developed a “visual microphone” to record sounds by watching objects, like a bag of chips, vibrate (joint with the authors below and Abe Davis and Gautam Mysore).

Collaborators: Michael Rubinstein, Neal Wadhwa, and co-PI Fredo Durand.


William T. Freeman is Professor and Associate Department Head of Electrical Engineering and Computer Science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) there.

His current research interests include machine learning applied to computer vision, Bayesian models of visual perception, and computational photography. He received outstanding paper awards at computer vision or machine learning conferences in 1997, 2006, 2009 and 2012, and test-of-time awards for papers from 1990 and 1995. Previous research topics include steerable filters and pyramids, orientation histograms, the generic viewpoint assumption, color constancy, computer vision for computer games, and belief propagation in networks with loops.

He is active in the program or organizing committees of computer vision, graphics, and machine learning conferences. He was the program co-chair for ICCV 2005, and for CVPR 2013.

26 January 2015

This talk has been canceled due to inclement weather.

Speaker: Vijay Kumar, School of Engineering & Applied Sciences, University of Pennsylvania
Title: Aerial Robot Swarms
Host School: UNC
Host: Ming Lin


Autonomous micro aerial robots can operate in three-dimensional, indoor and outdoor environments, and have applications in search and rescue, first response, and precision farming. Dr. Kumar will describe the challenges in developing small, agile robots and the algorithmic challenges in the areas of (a) control and planning, (b) state estimation and mapping, and (c) coordinating large teams of robots.


Vijay Kumar is the UPS Foundation Professor with appointments in the Departments of Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering.

Kumar’s group works on creating autonomous ground and aerial robots, designing bio-inspired algorithms for collective behaviors, and building robot swarms. They have won many best paper awards at conferences, and group alumni are leaders in teaching, research, business and entrepreneurship. Kumar is a fellow of ASME and IEEE and a member of the National Academy of Engineering.

Vijay Kumar has held many administrative positions in the School of Engineering and Applied Science, including director of the GRASP Laboratory, chair of Mechanical Engineering and Applied Mechanics, and deputy dean. He served as the assistant director of robotics and cyber-physical systems at the White House Office of Science and Technology Policy.

2 February 2015

Speaker: David A. Bader, College of Computing, Georgia Institute of Technology
Title: Massive-Scale Streaming Analytics
Host School: UNC
Host: Ashok Krishnamurthy


Emerging real-world graph problems include: detecting community structure in large social networks; improving the resilience of the electric power grid; and detecting and preventing disease in human populations. Unlike traditional applications in computational science and engineering, solving these problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for additional research on scalable algorithms and development of frameworks for solving these problems on high performance computers, and the need for improved models that also capture the noise and bias inherent in the torrential data streams. In this talk, the speaker will discuss the opportunities and challenges in massive data-intensive computing for applications in computational science and engineering.
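A flavor of the streaming-analytics setting can be shown with a small sketch: rather than recomputing a graph property from scratch on each update, the analysis is maintained incrementally as edges arrive. The example below tracks connected components with a union-find structure over an insert-only edge stream; it is an illustration of the incremental idea only, and says nothing about the speaker's systems, which must also handle deletions, noise, and massive scale.

```python
# Streaming-graph sketch: maintain connected components incrementally
# with union-find as edges arrive, instead of recomputing per update.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, v):
        self.parent.setdefault(v, v)
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

uf = UnionFind()
stream = [("a", "b"), ("c", "d"), ("b", "c")]  # edges arriving over time
for u, v in stream:
    uf.union(u, v)   # near-constant amortized work per arriving edge
```

After the third edge arrives, "a" and "d" fall into the same component without any global recomputation; this per-update cost model is what distinguishes streaming analytics from batch graph processing.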


David A. Bader is a full professor and chair of the School of Computational Science and Engineering in the College of Computing at the Georgia Institute of Technology, and Executive Director of High Performance Computing. His research is supported through highly competitive research awards, primarily from NSF, NIH, DARPA, and DOE; his main areas of research are parallel algorithms, combinatorial optimization, massive-scale social networks, and computational biology and genomics. He received his Ph.D. from the University of Maryland, is a Fellow of the IEEE and AAAS, is a National Science Foundation CAREER Award recipient, and has received numerous industrial awards from IBM, NVIDIA, Intel, Cray, Oracle/Sun Microsystems, and Microsoft Research. He serves on the board of the Computing Research Association (CRA), the NSF Advisory Committee on Cyberinfrastructure, the Council on Competitiveness High Performance Computing Advisory Committee, the IEEE Computer Society Board of Governors, and the Steering Committees of the IPDPS and HiPC conferences, and is editor-in-chief of IEEE Transactions on Parallel and Distributed Systems (TPDS). Dr. Bader is a leading expert on multicore, manycore, and multithreaded computing for data-intensive applications such as massive-scale graph analytics, and has co-authored over 130 articles in peer-reviewed journals and conferences.

16 February 2015

Speaker: Virgil D. Gligor, Department of Electrical and Computer Engineering, Carnegie Mellon University
Title: Dancing with the Adversary: A Tale of Wimps and Giants
Host School: NCSU
Host: William Enck


A system without accurate and complete adversary definition cannot possibly be insecure. Without such definitions, (in)security cannot be measured, risks of use cannot be accurately quantified, and recovery from penetration events cannot have lasting value. Conversely, accurate and complete definitions can help deny the adversary any attack advantage over a system defender and, at least in principle, secure system operation can be achieved. In this talk, I argue that although the adversary’s attack advantage cannot be eliminated in large commodity software (i.e., for “giants”), it can be rendered ineffective for small software components with rather limited function and high-assurance layered security properties, which are isolated from giants; i.e., for “wimps.” However, isolation cannot guarantee wimps’ survival in competitive markets, since wimps trade away basic system services to achieve small attack surfaces, diminished adversary capabilities, and weakened attack strategies. To survive, secure wimps must use services of, or compose with, insecure giants. This appears paradoxical: wimps can counter all adversary attacks, but only if they use adversary-vulnerable services from which they have to defend themselves.

In this talk I will illustrate the design of a practical system that supports wimp composition with giants, and extend the wimp-giant metaphor to security protocols in networks of humans and computers where compelling (e.g., free) services, possibly under the control of an adversary, are offered to unsuspecting users.  These protocols produce value for participants who cooperate. However, they allow malicious participants to harm honest ones and corrupt their systems by employing deception and scams. Yet these protocols have safe states whereby a participant can establish (justified) beliefs in the adversary’s (perhaps temporary) honesty. However, reasoning about such states requires techniques from other fields, such as behavioral economics, rather than traditional security and cryptography.


Virgil D. Gligor received his B.Sc., M.Sc., and Ph.D. degrees from the University of California at Berkeley. He taught at the University of Maryland between 1976 and 2007, and is currently a Professor of Electrical and Computer Engineering at Carnegie Mellon University and co-Director of CyLab, the University’s laboratory for information security, privacy and dependability. Over the past forty years, his research interests ranged from access control mechanisms, penetration analysis, and denial-of-service protection to cryptographic protocols and applied cryptography. He was a consultant to Burroughs Corporation, IBM, Microsoft and SAP. Gligor was an editorial board member of several ACM and IEEE journals and the Editor in Chief of the IEEE Transactions on Dependable and Secure Computing. He received the 2006 National Information Systems Security Award jointly given by NIST and NSA, the 2011 Outstanding Innovation Award of the ACM SIG on Security Audit and Control, and the 2013 Technical Achievement Award of the IEEE Computer Society.

16 March 2015

Speaker: Neeraj Suri, Department of Computer Science (Informatik), Technische Universitat Darmstadt
Title: Kicking and Fixing Software
Host School: NCSU
Host: Mladen Vouk


The perpetual elusiveness of correct-by-design software fosters the need for techniques to “find and fix” software deficiencies, whether they stem from design flaws, operational faults, or deliberate attacks. With the intent of post-design software fixes, the talk ruminates on the fun, value and science of experimental techniques to “kick n’ fix” software.


Suri holds the TUD Chair Professorship on “Dependable Systems & Software” at TU Darmstadt, Germany and is also with the University of Texas at Austin. Following his PhD work at UMass-Amherst, he has held both industry and academic positions at Allied-Signal/Honeywell Research and at Boston University (Saab Endowed Chair Professor). He has received trans-national funding from the EC, German DFG/BMBF/DAAD/Loewe, SSF/VINNOVA, NSF/DARPA/ONR/AFOSR, NASA, Microsoft, IBM, Hitachi, Saab, Volvo, Daimler, GM and others. He is a recipient of the NSF CAREER, Microsoft and IBM Faculty Awards. Suri’s professional service spans Associate Editor-in-Chief for IEEE TDSC and editorial boards for IEEE TSE, TPDS, ACM Computing Surveys, IEEE Security & Privacy and others. He has PC-chaired multiple dependability conferences: DSN, ICDCS, SRDS, HASE, ISAS, DATE, WORDS, RAF among others. He serves on advisory boards for Microsoft and multiple other US/EU/Asia industry and university boards. Suri chaired the IEEE Technical Committee on Dependability and Fault Tolerance, and its Steering Committee.

30 March 2015

Speaker: Leslie Greengard, Courant Institute of Mathematical Sciences, New York University
Title: Fast and Accurate Physical Modeling in Complex Geometry
Host School: Duke
Host: Xiaobai Sun


During the last few decades, fast algorithms have brought a variety of large-scale modeling tasks within practical reach. This is particularly true of integral equation approaches to electromagnetics, acoustics, gravitation, elasticity, and fluid dynamics. The practical application of these methods, however, requires analytic representations that lead to well-conditioned linear systems, techniques for error estimation that permit robust mesh refinement, and implementations on high-performance computing platforms. I will give an overview of recent progress in these areas with a particular emphasis on wave scattering problems in complex geometry.


Dr. Leslie F. Greengard is an American mathematician, physician and computer scientist. He co-invented the fast multipole method (FMM) in 1987; the FMM was recognized as one of the top ten algorithms of the 20th century by the Institute of Electrical and Electronics Engineers.

Greengard holds an M.D. and Ph.D. in computer science from Yale University and has been on the faculty of the Courant Institute of Mathematical Sciences at New York University since 1989, where he is professor of mathematics and computer science. From 2006 to 2011, Greengard served as director of Courant. He is a member of both the National Academy of Sciences and the National Academy of Engineering, and is the founding director of the Simons Center for Data Analysis. This Center is committed to analyzing large-scale, rich data sets and to developing innovative mathematical methods to examine such data.

6 April 2015

Speaker: Thomas W. Reps, Computer Sciences Department, University of Wisconsin-Madison, and GrammaTech, Inc.
Title:  Automating Abstract Interpretation
Host School: UNC
Host: Jan Prins


Unfortunately, the problem of determining whether a program is correct is undecidable. Program-analysis and verification tools sidestep the tar-pit of undecidability by working on an abstraction of a program, which over-approximates the behavior of the original program. The theory underlying this approach is called abstract interpretation. Abstract interpretation provides a way to obtain information about the possible states that a program reaches during execution, but without actually running the program on specific inputs. Instead, it explores the program’s behavior for all possible inputs, thereby accounting for all possible states that the program can reach. Operationally, one can think of abstract interpretation as running the program “in the aggregate”. That is, rather than executing the program on ordinary states, the program is executed on abstract states, which are finite-sized descriptors that represent collections of states.
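The idea of executing on finite-sized descriptors can be made concrete with the classic interval domain: each variable is tracked as a range [lo, hi] covering every value it could take. The sketch below is a toy illustration of the general technique, not any specific tool discussed in the talk.

```python
# A tiny abstract interpreter over the interval domain: instead of
# running code on concrete numbers, we "run" it on intervals [lo, hi]
# that over-approximate every state the program could reach.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's bounds come from the four endpoint products.
        ps = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Interval(min(ps), max(ps))

    def join(self, other):
        # Least upper bound: one descriptor covering both branches.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

# Abstractly execute, for ANY input x in [-3, 5]:
#     if cond: y = x * x
#     else:    y = x + 1
x = Interval(-3, 5)
y = (x * x).join(x + Interval(1, 1))   # join merges the two branches
# y = [-15, 25]: every concrete run lands inside this interval.
```

Note the over-approximation at work: concretely x * x is never negative, yet the interval domain reports [-15, 25]. The analysis is sound (no reachable state is missed) but imprecise, which is exactly the trade-off that makes designing good abstract domains a "black art".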

However, there is a glitch: abstract interpretation has a well-deserved reputation of being a kind of “black art”, and consequently difficult to work with. This talk will describe a twenty-year quest to address this issue by raising the level of automation in abstract interpretation. I will present several different approaches to creating correct-by-construction analyzers. Somewhat surprisingly, this research has recently allowed us to establish connections between our problem and several other areas of computer science, including machine learning, knowledge compilation, data integration, and constraint programming.

(Joint work with Aditya Thakur and a variety of students over several years.)


Thomas Reps is the J. Barkley Rosser Professor of Computer Science and holds the Rajiv and Ritu Batra Chair at the University of Wisconsin. He has worked on many aspects of automated program analysis and is co-founder and president of GrammaTech, a leading developer of software-assurance and software security tools.