20 SEPTEMBER 2010
Speaker: David Parkes, School of Engineering and Applied Science, Harvard University
Title: Incentive Engineering in the Internet Age
Host School: Duke
Duke Host: Vincent Conitzer (conitzer at cs.duke.edu)
NCSU Host:
UNC Host: Ketan Mayer-Patel (kmp at cs.unc.edu)

YouTube video of talk

Abstract:
Mechanism design provides a formalism within which to understand the possible and the impossible when designing multi-agent systems with private information and rational agents. In introducing computational considerations, we have gained some understanding of how to reconcile new tensions that arise. Today, we see a thirst for practical, engineered incentive mechanisms to deploy across the myriad of multi-user systems enabled by the Internet. I will highlight some of the new challenges that this presents, in moving from isolated events to continual processes, from simple models to complex, multi-faceted agent models, and in enabling new kinds of computational and coordination processes.
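
To make the flavor of an engineered incentive mechanism concrete, here is a minimal sketch of a sealed-bid second-price (Vickrey) auction, the textbook example of a truthful mechanism: bidding one's true value is a dominant strategy. The function name and bid format are illustrative assumptions, not material from the talk.

# Minimal sketch of a sealed-bid second-price (Vickrey) auction, the
# canonical truthful mechanism. Names and bid format are illustrative.
def vickrey_auction(bids):
    """bids: dict mapping bidder name -> bid amount."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

winner, price = vickrey_auction({"alice": 10, "bob": 7, "carol": 4})
print(winner, price)  # alice 7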

Biography:
David C. Parkes is Gordon McKay Professor of Computer Science in the School of Engineering and Applied Sciences at Harvard University. He has received the NSF CAREER Award, an Alfred P. Sloan Fellowship, the Thouron Scholarship, and the Harvard University Roslyn Abramson Award for Teaching, and was named one of the Harvard Class of 2010’s Favorite Professors. Parkes received his Ph.D. in Computer and Information Science from the University of Pennsylvania in 2001 and an M.Eng. (First Class) in Engineering and Computing Science from Oxford University in 1995. At Harvard, Parkes founded the Economics and Computer Science research group and teaches classes in artificial intelligence, machine learning, optimization, multi-agent systems, and topics at the intersection of computer science and economics. Parkes has served as Program Chair of ACM EC’07 and AAMAS’08 and as General Chair of ACM EC’10, has served on the editorial board of the Journal of Artificial Intelligence Research, and currently serves as an Editor of Games and Economic Behavior and on the boards of the Journal of Autonomous Agents and Multi-Agent Systems and the INFORMS Journal on Computing. His research interests include computational mechanism design, electronic commerce, stochastic optimization, preference elicitation, market design, bounded rationality, computational social choice, networks and incentives, multi-agent systems, crowdsourcing, and social computing.

http://www.eecs.harvard.edu/~parkes/
18 OCTOBER 2010
Speaker: Elisa Bertino, Professor, Department of Computer Sciences, Purdue University and Research Director of CERIAS
Title: The Challenge of Assuring Data Trustworthiness
Host School: NCSU
NCSU Host: Munindar P. Singh (singh at cs.ncsu.edu)
UNC Host: Mike Reiter (reiter at cs.unc.edu)
Duke Host: Landon Cox (lpcox at cs.duke.edu)

YouTube video of talk

Abstract:
Today, more than ever, there is a critical need for organizations to share data within and across organizations so that analysts and decision makers can analyze and mine the data and make effective decisions. However, for analysts and decision makers to produce accurate analyses, make effective decisions, and take action, the data must be trustworthy. Therefore, it is critical that data trustworthiness issues, which also include data quality, provenance, and lineage, be investigated for organizational data sharing, situation assessment, multi-sensor data integration, and the numerous other functions that support decision makers and analysts. Providing trustworthy data to users is an inherently difficult problem that requires articulated solutions combining different methods and techniques. In the talk we will first elaborate on the data trustworthiness challenge and discuss a trust fabric framework to address it. The framework is centered on the trustworthiness and risk-management needs of decision makers and analysts and includes four key components: identity management, usage management, provenance management, and attack management. We will then present an initial approach for assessing the trustworthiness of streaming data and discuss open research directions.
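
As a hedged illustration of what assessing the trustworthiness of streaming data might involve, the sketch below maintains a per-source trust score that rises when a source agrees with the cross-source consensus and decays when it deviates. The scoring rule, parameters, and sensor names are assumptions made for illustration, not the framework from the talk.

from statistics import median

# Illustrative only: per-source trust scores for a data stream,
# updated by agreement with the cross-source median reading.
ALPHA = 0.1          # learning rate (assumed)
TOLERANCE = 2.0      # max deviation counted as "agreement" (assumed)

def update_trust(trust, readings):
    """trust: dict source -> score in [0, 1]; readings: dict source -> value."""
    consensus = median(readings.values())
    for source, value in readings.items():
        agree = 1.0 if abs(value - consensus) <= TOLERANCE else 0.0
        trust[source] = (1 - ALPHA) * trust[source] + ALPHA * agree
    return trust

trust = {"sensor_a": 0.5, "sensor_b": 0.5, "sensor_c": 0.5}
trust = update_trust(trust, {"sensor_a": 20.1, "sensor_b": 19.8, "sensor_c": 35.0})
print(trust)  # sensor_c's score decays toward 0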

Biography:
Elisa Bertino is professor of Computer Science at Purdue University and serves as Research Director of the Center for Education and Research in Information Assurance and Security (CERIAS) and as interim director of the CyberCenter. Previously she was a faculty member in the Department of Computer Science and Communication of the University of Milan, where she directed the DB&SEC laboratory. She has been a visiting researcher at the IBM Research Laboratory (now Almaden) in San Jose, at the Microelectronics and Computer Technology Corporation, at Rutgers University, and at Telcordia Technologies. Her main research interests include security, privacy, digital identity management systems, database systems, and distributed systems. She serves (or has served) on the editorial boards of several scientific journals, including IEEE Internet Computing, IEEE Security & Privacy, ACM Transactions on Information and System Security, and ACM Transactions on the Web.

Elisa Bertino is a Fellow of the IEEE and a Fellow of the ACM. She received the 2002 IEEE Computer Society Technical Achievement Award “for outstanding contributions to database systems and database security and advanced data management systems” and the 2005 IEEE Computer Society Tsutomu Kanai Award “for pioneering and innovative research contributions to secure distributed systems”.

http://homes.cerias.purdue.edu/~bertino/

22 NOVEMBER 2010
Speaker: S. Keshav, Professor and Canada Research Chair in Tetherless Computing in the Cheriton School of Computer Science at the University of Waterloo
Title: How the Internet can Green the Grid
Host School: UNC
UNC Host: Kevin Jeffay (jeffay at cs.unc.edu)
Duke Host: Bruce Maggs (bmm at cs.duke.edu)
NCSU Host:

YouTube video of talk

Abstract:
Several powerful forces are gathering to make fundamental and irrevocable changes to the century-old grid. The next-generation grid, often called the ‘smart grid,’ will feature distributed energy production, vastly more storage, tens of millions of stochastic renewable-energy sources, and the use of communication technologies both to allow precise matching of supply to demand and to incentivize appropriate consumer behavior. These changes will have the effect of reducing energy waste and reducing the carbon footprint of the grid, making it ‘smarter’ and ‘greener.’ In this talk, I will demonstrate that the concepts and techniques pioneered by the Internet, the fruit of four decades of research in this area, are directly applicable to the design of a smart, green grid. This is because both the Internet and the electrical grid are designed to meet fundamental needs, for information and for energy, respectively, by connecting geographically dispersed suppliers with geographically dispersed consumers.

Keeping this and other similarities (and fundamental differences, as well) in mind, I propose several specific areas where Internet concepts and technologies can contribute to the development of a smart, green grid.
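
As one illustration of the Internet-to-grid analogy, the sketch below transplants TCP's additive-increase/multiplicative-decrease (AIMD) feedback rule to a deferrable electrical load that backs off when the grid signals congestion. The rule, parameters, and scenario are assumptions made for illustration, not an algorithm from the talk.

# Illustrative analogy: a deferrable load (e.g., an EV charger) adjusts
# its draw with additive-increase/multiplicative-decrease (AIMD), the
# feedback rule behind TCP congestion control. Parameters are assumed.
INCREASE = 0.1   # kW added per interval when the grid has headroom
DECREASE = 0.5   # multiplicative back-off factor on congestion
MAX_RATE = 7.0   # kW charger limit

def next_rate(rate, grid_congested):
    if grid_congested:
        return rate * DECREASE             # back off sharply
    return min(rate + INCREASE, MAX_RATE)  # probe gently for headroom

rate = 1.0
for congested in [False, False, False, True, False]:
    rate = next_rate(rate, congested)
    print(round(rate, 2))  # 1.1, 1.2, 1.3, 0.65, 0.75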

(joint work with Catherine Rosenberg, University of Waterloo)

Biography:
S. Keshav is a Professor and Canada Research Chair in Tetherless Computing at the School of Computer Science, University of Waterloo, Canada, and the Editor of ACM SIGCOMM Computer Communication Review. Earlier in his career he was a researcher at Bell Labs and an Associate Professor at Cornell. He is the author of a widely used graduate textbook on computer networking.

He has been awarded the Director’s Gold Medal at IIT Delhi, the Sakrison Prize at UC Berkeley, the Alfred P. Sloan Fellowship, a Best Student Paper award at ACM SIGCOMM, a Best Paper award at ACM MOBICOM, and two Test-of-Time awards from ACM SIGCOMM. He is a co-founder of three startups: Ensim Corporation, GreenBorder Technologies, and Astilbe Networks. His current interests are in the use of tetherless computing for rural development, and for gaining efficiency in energy generation, transmission, and consumption.

Keshav received a B.Tech. from the Indian Institute of Technology (IIT) Delhi in 1986 and a Ph.D. from the University of California, Berkeley, in 1991, both in Computer Science.

http://blizzard.cs.uwaterloo.ca/keshav/wiki/index.php/Main_Page

24 JANUARY 2011
Speaker: Takeo Kanade, Computer Science, CMU
Title: Complete Tracking of a Large Number of Migrating and Proliferating Cells in Time-Lapse Microscopy Imagery
Host School: UNC
UNC Host: Jan-Michael Frahm (jmf at cs.unc.edu)
Duke Host: Carlo Tomasi (tomasi at cs.duke.edu)
NCSU Host: Emerson Murphy-Hill (emerson at csc.ncsu.edu)

YouTube video of talk

Abstract:
As a computer vision researcher, I believe that advanced image motion analysis technologies offer great opportunities to help rapidly advance biological discovery and its translation into new clinical therapies. In collaboration with biomedical engineers, my group has been developing a system for analyzing time-lapse microscopy image sequences, typically from a phase-contrast or differential interference contrast (DIC) microscope, that can precisely and individually track a large number of cells while they undergo migration (translocation), mitosis (division), and apoptosis (death), and can construct complete cell lineages (mother-daughter relations) of the whole cell population. Such high-throughput spatiotemporal analysis of cell behaviors allows for “engineering individual cells” – directing the migration and proliferation of tissue cells in real time, for example in tissue engineering, where cells are seeded and cultured with hormones to induce tissue growth.

The low signal-to-noise ratio of microscopy images, high and varying densities of cell cultures, topological complexities of cell shapes, and occurrences of cell division, touching, and overlapping pose significant challenges to existing image-based tracking techniques. I will present the challenges, results, and excitement of this new application area of motion image analysis.
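
These challenges are what defeat naive approaches, but a toy baseline helps fix ideas. Below is a deliberately simplified sketch of frame-to-frame data association: each detected cell centroid in a new frame is linked to the nearest unmatched centroid in the previous frame. The gating distance and data format are assumptions for illustration; a real tracker like the one described must additionally model division, death, touching, and overlap.

import math

# Toy data association: link each detected cell centroid in the new
# frame to the closest unmatched centroid in the previous frame.
MAX_DIST = 15.0  # assumed gating distance in pixels

def associate(prev_cells, new_cells):
    """prev_cells, new_cells: dicts id -> (x, y). Returns new_id -> prev_id."""
    links, used = {}, set()
    for nid, (nx, ny) in new_cells.items():
        best, best_d = None, MAX_DIST
        for pid, (px, py) in prev_cells.items():
            d = math.hypot(nx - px, ny - py)
            if pid not in used and d < best_d:
                best, best_d = pid, d
        if best is not None:
            links[nid] = best
            used.add(best)
    return links

print(associate({1: (10, 10), 2: (50, 50)}, {"a": (12, 11), "b": (48, 53)}))
# {'a': 1, 'b': 2}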

Biography:
Takeo Kanade is the U. A. and Helen Whitaker University Professor of Computer Science and Robotics and the director of Quality of Life Technology Engineering Research Center at Carnegie Mellon University. He is also the director of the Digital Human Research Center in Tokyo, which he founded in 2001. He received his Doctoral degree in Electrical Engineering from Kyoto University, Japan, in 1974. After holding a faculty position in the Department of Information Science, Kyoto University, he joined Carnegie Mellon University in 1980, where he was the Director of the Robotics Institute from 1992 to 2001.

Dr. Kanade works in multiple areas of robotics: computer vision, multi-media, manipulators, autonomous mobile robots, medical robotics and sensors. He has written more than 350 technical papers and reports in these areas, and holds more than 30 patents. He has been the principal investigator of more than a dozen major vision and robotics projects at Carnegie Mellon.

Dr. Kanade has been elected to the National Academy of Engineering and the American Academy of Arts and Sciences, and is a Fellow of the IEEE, a Fellow of the ACM, and a Founding Fellow of the American Association for Artificial Intelligence (AAAI). The awards he has received include the Franklin Institute Bower Prize, the Okawa Award, the C&C Award, the Joseph Engelberger Award, the IEEE Robotics and Automation Society Pioneer Award, and the IEEE PAMI Azriel Rosenfeld Lifetime Accomplishment Award.

http://www.ri.cmu.edu/person.html?person_id=136

14 FEBRUARY 2011
Speaker: Arie E. Kaufman, Computer Science, Stony Brook University (SUNY)
Title: Virtual Colonoscopy and Computer-Aided Detection of Colon Cancer
Host School: UNC
UNC Host: Ming Lin (lin at cs.unc.edu)
NCSU Host: Christopher Healey (healey at csc.ncsu.edu)
Duke Host: Rachael Brady (rbrady at duke.edu)

YouTube video of talk

Abstract:
Virtual colonoscopy (VC) is rapidly gaining popularity and is poised to become the procedure of choice in lieu of the conventional optical colonoscopy (OC) for mass screening for colon polyps – the precursor of colorectal cancer. VC combines computed tomography (CT) scanning and volume visualization technology. The patient’s abdomen is imaged in a few seconds by a CT scanner. A 3D model of the colon is then reconstructed from the CT scan by automatically segmenting the colon and employing “electronic cleansing” for computer-based removal of the residual material. The physician interactively navigates through the virtual colon using volume rendering, and employs customized tools for 3D measurements, “virtual biopsy” to interrogate suspicious regions, and “painting” to support 100% inspection of the colon surface. Unlike OC, VC is a patient-friendly, fast, non-invasive, more accurate, and cost-effective procedure for mass screening for colorectal cancer. Our VC further incorporates a novel pipeline for computer-aided detection (CAD) of polyps. It automatically detects polyps by integrating volume rendering, conformal colon flattening, clustering, and “virtual biopsy” analysis. Along with the reviewing physician, CAD provides a second pair of “eyes” for locating polyps. We also explore immersive VC in a virtual reality environment.
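
As a purely structural sketch of how the CAD stages named above might compose, the placeholder pipeline below wires them together in order; every function is a hypothetical stub named after a stage in the abstract, not a real implementation.

# Hypothetical skeleton of the polyp-CAD stages named in the abstract.
# Each function is a labeled placeholder showing data flow only.
def segment_colon(ct_volume):      return f"colon({ct_volume})"
def electronic_cleansing(colon):   return f"cleansed({colon})"
def conformal_flattening(colon):   return f"flattened({colon})"
def cluster_candidates(surface):   return [f"candidate<{surface}>"]
def virtual_biopsy(candidates):    return candidates  # would filter false positives

def cad_pipeline(ct_volume):
    colon = electronic_cleansing(segment_colon(ct_volume))
    return virtual_biopsy(cluster_candidates(conformal_flattening(colon)))

print(cad_pipeline("ct_scan"))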

Biography:
Arie Kaufman is the Chair of the Computer Science Department, Director of the Center for Visual Computing (CVC), Chief Scientist of the Center of Excellence in Wireless and Information Technology (CEWIT), and Distinguished Professor of Computer Science and Radiology at Stony Brook University. He has conducted research for over 35 years in visualization, graphics, user interfaces, and VR and their applications, primarily in biomedicine. He is an IEEE Fellow, an ACM Fellow, and a recipient of the IEEE Visualization Career Award (‘05). He was the founding Editor-in-Chief of IEEE Transactions on Visualization & Computer Graphics (TVCG), 1995-98. He has been a co-founder, papers/program co-chair, and steering committee member of the IEEE Visualization Conferences, and chair and director of the IEEE CS Technical Committee on Visualization & Graphics (VGTC). He received a BS (‘69) in Math/Physics from Hebrew University, an MS (‘73) in Computer Science from the Weizmann Institute, and a PhD (‘77) in Computer Science from Ben-Gurion University, Israel.

http://www.cs.sunysb.edu/~ari/

21 MARCH 2011
Speaker: Ian Akyildiz, Elec. and Comp. Engineering, Georgia Tech
Title: Nanonetworks: A New Frontier in Communications
Host School: NCSU
NCSU Host: George Rouskas (rouskas at ncsu.edu)
Duke Host: Romit Roy Choudhury (romit.rc at duke.edu)
UNC Host: Ketan Mayer-Patel (kmp at cs.unc.edu)

YouTube video of talk

Abstract:
Nanotechnology is enabling the development of devices at a scale ranging from one to a few hundred nanometers. Nanonetworks, i.e., the interconnection of nano-scale devices, are expected to expand the capabilities of single nano-machines by allowing them to cooperate and share information. Traditional communication technologies are not directly suitable for nanonetworks, mainly due to the size and power consumption of existing transmitters, receivers, and additional processing components. All of this defines a new communication paradigm that demands novel solutions, such as nano-transceivers, channel models for the nano-scale, and protocols and architectures for nanonetworks. In this talk, the state of the art in nano-machines is first presented, including architectural aspects, expected features of future nano-machines, and current developments, for a better understanding of nanonetwork scenarios. Nanonetwork features and components are then explained and compared with traditional communication networks. Novel nano-antennas based on nano-materials, as well as the terahertz band, are investigated for electromagnetic communication in nanonetworks. Furthermore, molecular communication mechanisms are presented for short-range networking based on ion signaling and molecular motors, for medium-range networking based on flagellated bacteria and nanorods, and for long-range networking based on pheromones and capillaries. Finally, open research challenges are presented, such as the development of network components, molecular communication theory, and new architectures and protocols, which need to be solved in order to pave the way for the development and deployment of nanonetworks within the next couple of decades.

Biography:
http://users.ece.gatech.edu/~ian/
28 MARCH 2011
Speaker: Rastislav Bodik, Computer Science, UC Berkeley
Title: Automatic Programming Revisited
Host School: NCSU
NCSU Host: Frank Mueller (mueller at ncsu.edu)
Duke Host: Dan Sorin (sorin at ee.duke.edu)
UNC Host: Ketan Mayer-Patel (kmp at cs.unc.edu)

YouTube video of talk

Abstract:
Why is it that Moore’s Law hasn’t yet revolutionized the job of the programmer? Compute cycles have been harnessed in testing, model checking, and autotuning, but programmers still code with bare hands. Can their cognitive load be shared with a computer assistant? The automatic programming of the ’80s failed by relying on too much AI. Later, synthesizers succeeded in deriving programs that were superbly efficient, even surprising, but these synthesizers first had to be formally taught considerable human insight about the domain. Using examples from algorithms, frameworks, and GPU programming, I will describe how the emerging synthesis community has rethought automatic programming. The first innovation is to abandon full automation, focusing instead on the intriguing new problem of how humans should communicate their incomplete ideas to a computerized algorithmic assistant, and how the assistant should talk back. As an example, I will describe programming with angelic oracles. The second line of innovation changes the algorithmics: here, we have replaced deductive logic with constraint solving. Indeed, the new synthesis is to its classical counterpart what model checking is to verification, and it enjoys similar benefits: because algorithmic synthesis relies more on compute cycles and less on a formal expert, it is easier to adapt the synthesizer to a new domain. I will conclude by explaining how all this will enable end users to program their browsers and spreadsheets without ever seeing a line of code.
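
As a hedged illustration of search-based synthesis, a toy cousin of the constraint-solving approach described above, the sketch below enumerates candidate expressions from a tiny grammar and returns the first one consistent with user-supplied input-output examples. The grammar, constants, and examples are invented for illustration.

# Toy enumerative synthesis: search expressions over x built from a
# tiny grammar of templates until one matches all I/O examples.
OPS = ["x + {}", "x * {}", "x * x + {}"]  # assumed grammar templates

def synthesize(examples, max_const=10):
    """examples: list of (input, output) pairs. Returns a matching expression."""
    for template in OPS:
        for c in range(max_const + 1):
            expr = template.format(c)
            if all(eval(expr, {"x": i}) == o for i, o in examples):
                return expr
    return None  # nothing in the grammar fits

print(synthesize([(1, 2), (2, 5), (3, 10)]))  # x * x + 1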

Biography:
http://www.cs.berkeley.edu/~bodik/

11 APRIL 2011
Speaker: Sarita V. Adve, Computer Science, University of Illinois
Title: Rethinking Parallel Languages and Hardware
Host School: Duke
Duke Host: Alvin R. Lebeck (alvy at cs.duke.edu)
UNC Host: Ketan Mayer-Patel (kmp at cs.unc.edu)
NCSU Host: Xiaohui Helen Gu (gu at csc.ncsu.edu)

YouTube video of talk

Abstract:
The era of parallel computing for the masses is here, but writing correct parallel programs remains difficult. Aside from a few domains, most parallel programs are written using shared memory. The memory model, which specifies the meaning of shared variables, is at the heart of this programming model. Unfortunately, it has involved a tradeoff between programmability and performance, and has arguably been one of the most challenging and contentious areas in both hardware architecture and programming language specification. Recent broad community-scale efforts have finally led to a convergence in this debate, with popular languages such as Java and C++ and most hardware vendors publishing compatible memory model specifications. Although this convergence is a dramatic improvement, it has exposed fundamental shortcomings in current popular languages and systems that thwart safe and efficient parallel computing.

I will discuss the path to the above convergence, the hard lessons learned, and their implications. A cornerstone of this convergence has been the view that the memory model should be a contract between the programmer and the system – if the programmer writes disciplined (data-race-free) programs, the system will provide high programmability (sequential consistency) and performance. I will discuss why this view is the best we can do with current popular languages, and why it is inadequate moving forward, requiring rethinking popular parallel languages and hardware. In particular, I will argue that (1) parallel languages should not only promote high-level disciplined models, but they should also enforce the discipline, and (2) for scalable and efficient performance, hardware should be co-designed to take advantage of and support such disciplined models. I will describe the Deterministic Parallel Java language and DeNovo hardware projects at Illinois as examples of such an approach.
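
To make the data-race-free contract concrete, here is a small sketch: two threads increment a shared counter, first without synchronization (a data race that can lose updates) and then under a lock (the disciplined version the contract rewards with simple semantics). Python's interpreter masks hardware-level reordering, so this illustrates the software discipline only, not the hardware memory models discussed in the talk.

import threading

# A racy read-modify-write: "counter += 1" is not atomic, so two
# threads can interleave and lose updates. Holding a lock restores
# the data-race-free discipline.
counter = 0
lock = threading.Lock()

def add(n, use_lock):
    global counter
    for _ in range(n):
        if use_lock:
            with lock:
                counter += 1
        else:
            counter += 1  # racy: load, add, store can interleave

for use_lock in (False, True):
    counter = 0
    threads = [threading.Thread(target=add, args=(100_000, use_lock)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("locked" if use_lock else "unlocked", counter)  # unlocked may be < 200000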

This talk draws on collaborations with many colleagues over the last two decades on memory models (in particular, a CACM’10 paper with Hans-J. Boehm) and with faculty, researchers, and students from the DPJ and DeNovo projects.

Biography:
http://rsim.cs.illinois.edu/~sadve/
25 APRIL 2011
Speaker: Umesh Vazirani, Computer Science, UC Berkeley
Title: A Computational Perspective on Quantum Physics 
Host School: Duke
Duke Host: Kamesh Munagala (kamesh at cs.duke.edu)
UNC Host: Sanjoy Baruah (baruah at cs.unc.edu)
NCSU Host: Matt Stallmann (matt_stallmann at ncsu.edu)

YouTube video of talk

Abstract:
Obviously quantum computation cannot have any impact in the real world until a working quantum computer is built. Or can it? In this talk I will speak about several areas of potential impact. The first is post-quantum cryptography. We may end up switching from RSA to a quantum-resistant cryptosystem long before a quantum computer is built. The second is a new class of efficient classical algorithms for simulating (certain types of) quantum systems. The last relates to the foundations of quantum physics. Specifically, do the aspects of multi-particle quantum physics that lead to exponential speedups in quantum algorithms also limit our ability to fully experimentally verify the validity of multi-particle quantum physics?
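
To ground the simulation point with a worked example, here is a brute-force state-vector simulator: an n-qubit state needs 2^n complex amplitudes, which is exactly the exponential blow-up that makes general quantum systems hard to simulate classically and that special-purpose classical algorithms must sidestep. This is a standard textbook construction, not material from the talk.

import math

# Brute-force state-vector simulation: n qubits need 2**n amplitudes,
# the exponential cost that efficient special-case algorithms avoid.
def apply_hadamard(state, target, n):
    """Apply the Hadamard gate H to qubit `target` of an n-qubit state."""
    s = 1 / math.sqrt(2)
    new = [0j] * (1 << n)
    for i, amp in enumerate(state):
        if amp == 0:
            continue
        bit = (i >> target) & 1
        flipped = i ^ (1 << target)
        # H maps |0> -> (|0>+|1>)/sqrt2 and |1> -> (|0>-|1>)/sqrt2
        new[i] += s * amp * (1 if bit == 0 else -1)
        new[flipped] += s * amp
    return new

n = 3
state = [0j] * (1 << n)
state[0] = 1 + 0j                 # start in |000>
for q in range(n):
    state = apply_hadamard(state, q, n)
print([round(abs(a) ** 2, 3) for a in state])  # uniform: eight 0.125 entries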

The talk is aimed at a general audience, and will not assume a background in quantum physics.

Biography:
http://www.cs.berkeley.edu/~vazirani/