
8 SEPTEMBER 2008
Speaker: Jayadev Misra, Professor and Schlumberger Centennial Chair in Computer Sciences, University of Texas at Austin
Title: Structured Wide-Area Programming
Host School: NCSU
Duke Host: Landon Cox (lpcox at cs.duke.edu)
UNC Host: Jasleen Kaur (jasleen at cs.unc.edu)
NCSU Host: Munindar Singh (singh at cs.ncsu.edu)

YouTube video of talk

Abstract
The Internet today provides a wide range of services associated with web sites; examples include getting a stock quote, making an airline reservation, compressing a file, or inverting a matrix. Each service may be likened to a basic operation in a computer, the Internet computer. An application is a program written over the basic services, i.e., an orchestration of the services. This research is directed toward designing, implementing and studying an appropriate model of orchestration that would allow us to develop wide-area applications succinctly.

Just as structured programming gave programmers effective tools to organize the control flow of sequential programs, our research introduces mechanisms to organize the communication, synchronization and coordination in programs that run on wide-area networks. We have developed a programming model, called Orc, for structured wide-area programming. Orc includes constructs to orchestrate the concurrent invocation of services to achieve a goal — while managing time-outs, priorities, and failure of sites or communication. The talk will give an introduction to Orc, and some of the ongoing research on enhancing the model.
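
Orc has its own notation, which the talk introduces; purely as a rough, hypothetical illustration of the style of orchestration described above (call two services in parallel, take whichever answers first, and give up after a time-out), here is a small Python asyncio sketch. The service names, timings and values are invented.

```python
# Hypothetical sketch (not Orc): orchestrate two stock-quote services,
# take whichever answers first, and give up after a timeout.
import asyncio

async def quote_a(symbol):
    await asyncio.sleep(0.2)   # stand-in for a remote service call
    return 101.5

async def quote_b(symbol):
    await asyncio.sleep(0.5)   # a slower service
    return 101.7

async def first_quote(symbol, timeout=1.0):
    tasks = [asyncio.create_task(quote_a(symbol)),
             asyncio.create_task(quote_b(symbol))]
    done, pending = await asyncio.wait(tasks, timeout=timeout,
                                       return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()                         # discard the slower call
    return done.pop().result() if done else None   # None models a time-out

print(asyncio.run(first_quote("ACME")))
```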

The Orc web page is at http://orc.csres.utexas.edu

Biography
Jayadev Misra is a professor and holder of the Schlumberger Centennial Chair in Computer Sciences at the University of Texas at Austin. He has served as an editor of several journals, including Computing Surveys, Journal of the ACM, Information Processing Letters and Formal Aspects of Computing. He is the author of two books, Parallel Program Design: A Foundation (Addison-Wesley, 1988), co-authored with Mani Chandy, and A Discipline of Multiprogramming (Springer-Verlag, 2001). Misra is a fellow of the ACM and the IEEE; he held a Guggenheim fellowship during 1988-1989. He was the Strachey lecturer at Oxford University in 1996, and he held the Belgian FNRS International Chair of Computer Science in 1990.

Misra’s research interests are in the area of concurrent programming, with emphasis on rigorous methods to improve the programming process. He is currently spearheading an effort, jointly with Tony Hoare, to establish a grand challenge project to automate large-scale program verification.

 

15 SEPTEMBER 2008
Speaker: Hector Garcia-Molina, Leonard Bosack and Sandra Lerner Professor in the Departments of Computer Science and Electrical Engineering at Stanford University
Title: Flexible Recommendations in CourseRank
Host School: Duke
Duke Host: Shivnath Babu (shivnath at cs.duke.edu)
UNC Host: Ketan Mayer-Patel (kmp at cs.unc.edu)
NCSU Host: Kemafor Anyanwu (kogan at ncsu.edu)

Abstract
CourseRank is a course planning tool we have developed for Stanford students, and it is already in use by most undergraduates. For CourseRank, we have developed a "flexible recommendations" engine for defining recommendation strategies as high-level workflows. By selecting a workflow and providing parameters (e.g., a filter condition for biology classes), students can receive personalized recommendations that better suit their needs. In this talk I will give an overview of CourseRank and its recommendation engine.
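
As a rough, hypothetical sketch of the "workflow plus parameters" idea (the names and data below are invented and are not CourseRank's actual API), a recommendation strategy can be pictured as a short pipeline of parameterized steps:

```python
# Invented toy example: a recommendation workflow as composable, parameterized steps.
courses = [
    {"id": "BIO101", "dept": "Biology", "avg_rating": 4.5},
    {"id": "CS106",  "dept": "CS",      "avg_rating": 4.8},
    {"id": "BIO230", "dept": "Biology", "avg_rating": 4.1},
]

def filter_step(dept):
    return lambda cs: [c for c in cs if c["dept"] == dept]

def rank_step(k):
    return lambda cs: sorted(cs, key=lambda c: c["avg_rating"], reverse=True)[:k]

def run_workflow(steps, cs):
    for step in steps:
        cs = step(cs)
    return cs

# A student selects a workflow and supplies parameters (e.g. a Biology filter).
workflow = [filter_step("Biology"), rank_step(2)]
print([c["id"] for c in run_workflow(workflow, courses)])   # ['BIO101', 'BIO230']
```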

Biography
Hector Garcia-Molina is the Leonard Bosack and Sandra Lerner Professor in the Departments of Computer Science and Electrical Engineering at Stanford University, Stanford, California. He was the chairman of the Computer Science Department from January 2001 to December 2004. From 1997 to 2001 he was a member of the President's Information Technology Advisory Committee (PITAC). From August 1994 to December 1997 he was the Director of the Computer Systems Laboratory at Stanford. From 1979 to 1991 he was on the faculty of the Computer Science Department at Princeton University, Princeton, New Jersey. His research interests include distributed computing systems, digital libraries and database systems. He received a BS in electrical engineering from the Instituto Tecnologico de Monterrey, Mexico, in 1974. From Stanford University he received an MS in electrical engineering in 1975 and a PhD in computer science in 1979. He holds an honorary PhD from ETH Zurich (2007). Garcia-Molina is a Fellow of the Association for Computing Machinery and of the American Academy of Arts and Sciences; is a member of the National Academy of Engineering; received the 1999 ACM SIGMOD Innovations Award; is on the Technical Advisory Boards of DoCoMo Labs USA and Yahoo Search & Marketplace; is a Venture Advisor for Diamondhead Ventures; and is a member of the Board of Directors of Oracle and Kintera.

 

22 SEPTEMBER 2008
Speaker: William Swartout, Director of Technology for the Institute for Creative Technologies and Research Professor of Computer Science at the University of Southern California
Title: Toward the Holodeck: Integrating Graphics, Artificial Intelligence, Entertainment and Learning
Host School: NCSU
Duke Host: Vincent Conitzer (conitzer at cs.duke.edu)
UNC Host: Fred Brooks (brooks at cs.unc.edu)
NCSU Host: Michael Young (young at cs.ncsu.edu)

YouTube video of talk

Abstract
Using the Holodeck from Star Trek as our inspiration, researchers at the USC Institute for Creative Technologies have been pushing back the boundaries of the possible with the goal of creating immersive experiences so compelling that people will react to them as if they were real. In this talk I will describe our research in photo-real computer graphics, interactive virtual humans, immersive virtual reality and computer-based tutoring that moves us closer to realizing the vision of the Holodeck. I will also discuss how entertainment content in the form of engaging stories and characters can heighten these experiences, and how such experiences can be used for learning.

Biography
William Swartout is Director of Technology for USC’s Institute for Creative Technologies (ICT) and a research professor of computer science at USC. He received his Ph.D. and M.S. in computer science from MIT and his bachelor’s degree from Stanford University.

Dr. Swartout has been involved in the research and development of AI systems for over 30 years. His particular research interests include virtual humans, explanation and text generation, knowledge acquisition, knowledge representation, knowledge sharing, education, intelligent agents and the development of new AI architectures.

As Director of Technology at the ICT, Dr. Swartout provides overall direction for the ICT's research programs. He led the Mission Rehearsal Exercise project, which created an immersive virtual reality environment in which trainees interact with computer-generated virtual humans. The project received awards for outstanding innovation in modeling and simulation from the NTSA, as well as first place for innovative application of agent technology at the 2001 International Conference on Autonomous Agents.

Dr. Swartout is a Fellow of the American Association for Artificial Intelligence (AAAI), has served on the Board of Councilors of the AAAI, and is past chair of the Special Interest Group on Artificial Intelligence (SIGART) of the Association for Computing Machinery (ACM). He is a member of the US Joint Forces Command Transformation Advisory Group and the Board on Army Science and Technology of the National Academies, and a past member of the Air Force Scientific Advisory Board.

 

3 NOVEMBER 2008
Speaker: Madhu Sudan, MIT CSAIL
Title: Communicating Computers and Computing Communicators: A need for a new unifying theory
Host School: UNC
Duke Host: John Reif
UNC Host: Sanjoy Baruah (baruah at cs.unc.edu)
NCSU Host: Munindar P. Singh

YouTube video of talk

Abstract
The theories of computing (Turing, ~1930s) and communication (Shannon, Hamming, ~1940s) have had a profound impact on the development of the two fields, and the resulting technologies have drastically altered our lives today. Part of the success of the two theories can be attributed to a clean separation of the computing elements from the communicating elements. Today, however, communication and computing are coming ever closer together, often leaving the human out of the loop. This merger is posing new challenges, definitional and algorithmic, to the theory of communication. In this talk I will describe some of the concrete challenges that we have looked at. I will also describe our attempts at modelling these problems and, in some cases, some preliminary solutions.

Biography
Madhu Sudan received his Bachelor's degree from the Indian Institute of Technology at New Delhi in 1987 and his Ph.D. from the University of California at Berkeley in 1992. From 1992 to 1997 he was a Research Staff Member at IBM's Thomas J. Watson Research Center. In 1997 he moved to MIT, where he is now the Fujitsu Professor of Electrical Engineering and Computer Science and Associate Director of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). He was a Fellow at the Radcliffe Institute for Advanced Study from 2003-2004, and a Guggenheim Fellow from 2005-2006.

Madhu Sudan's research interests include computational complexity theory, algorithms and coding theory. He is best known for his work on probabilistic checking of proofs and on the design of list-decoding algorithms for error-correcting codes. He has served on numerous program committees for conferences in theoretical computer science, and was the program committee chair of the IEEE Conference on Computational Complexity '01 and the IEEE Symposium on Foundations of Computer Science '03. He is the chief editor of Foundations and Trends in Theoretical Computer Science, a new journal publishing surveys in the field. He is currently a member of the editorial boards of the Journal of the ACM and the SIAM Journal on Computing. Previously he served on the boards of the SIAM Journal on Discrete Mathematics, Information and Computation, and the IEEE Transactions on Information Theory.

In 2002, Madhu Sudan was awarded the Nevanlinna Prize for outstanding contributions to the mathematics of computer science at the International Congress of Mathematicians in Beijing. His other awards include the ACM Doctoral Dissertation Award (1992), the IEEE Information Theory Society Paper Award (2000), the Gödel Prize (2001), the Distinguished Alumnus Award of the University of California at Berkeley (2003), and the Distinguished Alumnus Award of the Indian Institute of Technology at Delhi (2004).

 

17 NOVEMBER 2008
Speaker: Andrew McCallum, Associate Professor of Computer Science, University of Massachusetts at Amherst
Title: Information Extraction, Data Mining and Joint Inference
Host School: Duke
Duke Host: Alexander Hartemink (amink at cs.duke.edu)
UNC Host: Wei Wang (weiwang at cs.unc.edu)
NCSU Hosts:

YouTube video of talk

Abstract
Although information extraction and data mining appear together in many applications, their interface in most current systems would better be described as serial juxtaposition than as tight integration. Information extraction populates slots in a database by identifying relevant subsequences of text, but is usually not aware of the emerging patterns and regularities in the database.  Data mining methods begin from a populated database, and are often unaware of where the data came from, or its inherent uncertainties.  The result is that the accuracy of both suffers, and accurate mining of complex text sources has been beyond reach.

In this talk I will describe work in probabilistic models that perform joint inference across multiple components of an information processing pipeline in order to avoid the brittle accumulation of errors. The need for joint inference appears not only in extraction and data mining, but also in natural language processing, computer vision, robotics and elsewhere. I will argue that joint inference is the fundamental issue in artificial intelligence.
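
As a toy, invented illustration of the difference between pipelined and joint decisions (not the speaker's models): each candidate field value carries an extraction score, and a compatibility term rewards combinations that are consistent across database records.

```python
# Invented example: pipelined vs. joint choice of extracted venue names.
from itertools import product

candidates = {
    "paper1_venue": [("ICML", 0.6), ("ICML 2008", 0.5)],
    "paper2_venue": [("ICML", 0.4), ("1CML", 0.55)],   # the OCR error scores well alone
}

def compatibility(v1, v2):
    return 0.3 if v1 == v2 else 0.0   # reward agreement across records

# Pipeline: take each slot's best candidate independently.
pipeline = {slot: max(cands, key=lambda c: c[1])[0]
            for slot, cands in candidates.items()}

# Joint: maximize extraction scores plus the cross-record compatibility term.
best = max(product(*candidates.values()),
           key=lambda pair: pair[0][1] + pair[1][1]
                            + compatibility(pair[0][0], pair[1][0]))

print("pipeline:", pipeline)              # keeps the OCR error "1CML"
print("joint:   ", [v for v, _ in best])  # prefers the consistent "ICML"
```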

After briefly introducing conditional random fields, I will describe recent work in information extraction, entity resolution and alignment that uses joint inference, stochastic approximations, weighted first-order logic and other methods of probabilistic programming. I'll close with a demonstration of Rexa.info, a new research paper search engine that leverages these techniques.

Joint work with colleagues at UMass: Charles Sutton, Aron Culotta, Khashayar Rohanemanesh, Greg Druck, Ben Wellner, Michael Hay, Xuerui Wang, David Mimno, Pallika Kanani, Kedare Bellare, Michael Wick, Rob Hall and Gideon Mann.

Biography
Andrew McCallum is an Associate Professor and Director of the Information Extraction and Synthesis Laboratory in the Computer Science Department at the University of Massachusetts Amherst. He has published over 100 papers in many areas of AI, including natural language processing, machine learning, data mining and reinforcement learning, and his work has received over 10,000 citations. He received his PhD from the University of Rochester in 1995 with Dana Ballard and held a postdoctoral fellowship at CMU with Tom Mitchell and Sebastian Thrun. Afterward he worked in an industrial research lab, where he spearheaded the creation of CORA, an early research paper search engine that used machine learning for spidering, extraction, classification and citation analysis. In the early 2000s he was Vice President of Research and Development at WhizBang Labs, a 170-person start-up company that used machine learning for information extraction from the Web. He is the recipient of two NSF ITR awards, the UMass NSM Distinguished Research Award, the UMass Lilly Teaching Fellowship, and the IBM Faculty Partnership Award. He was the Program Co-chair for the International Conference on Machine Learning (ICML) 2008, and is a member of the boards of the International Machine Learning Society and the CRA Community Computing Consortium, and of the editorial board of the Journal of Machine Learning Research. For the past ten years, McCallum has been active in research on statistical machine learning applied to text, especially information extraction, co-reference, document classification, clustering, finite state models, semi-supervised learning, and social network analysis. New work on search and bibliometric analysis of open-access research literature can be found at http://rexa.info. McCallum's web page: http://www.cs.umass.edu/~mccallum.

 

24 NOVEMBER 2008
Speaker: Ken Birman, Professor of Computer Science, Cornell University
Title: Live Distributed Objects
Host School: UNC
Duke Host: Xiaowei Yang (xwy at cs.duke.edu)
UNC Host: Mike Reiter (reiter at cs.unc.edu)
NCSU Host:

YouTube video of talk

Abstract
Although we’ve been building distributed systems for decades, it remains remarkably difficult to get them right. Distributed software is hard to design and the tools available to developers have lagged far behind the options for building and debugging non-distributed programs targeting desktop environments. At Cornell, we’re trying to change this dynamic. The first part of this talk will describe “Live Distributed Objects,” a new and remarkably easy way to create distributed applications, with little or no programming required. Supporting these kinds of objects forced us to confront a number of scalability, security and performance questions not addressed by prior research on distributed computing platforms. The second part of the talk will look at Cornell’s Quicksilver system and the approach it uses to solve these problems.

Biography
Ken Birman is Professor of Computer Science at Cornell University. He currently heads the QuickSilver project, which is developing a scalable and robust distributed computing platform. Previously he worked on fault-tolerance, security, and reliable multicast. In 1987 he founded a company, Isis Distributed Systems, which developed robust software solutions for stock exchanges, air traffic control, and factory automation. The author of several books and more than 200 journal and conference papers, Dr. Birman was Editor in Chief of ACM Transactions on Computer Systems from 1993-1998 and is a Fellow of the ACM.

 

26 JANUARY 2009
Speaker: Mario Gerla, Professor, UCLA, Network Research Lab
Title: Vehicular Urban Sensing: Dissemination and Retrieval
Host School: NCSU
NCSU Host: Injong Rhee (rhee at eos.ncsu.edu)
Duke Host: Jeff Chase (chase at cs.duke.edu)
UNC Host: Jasleen Kaur (jasleen at cs.unc.edu)

YouTube video of talk

Abstract
There has been growing interest in vehicle-to-vehicle networking for a broad range of applications, ranging from safe driving to advertising, commerce and games. One emerging application is urban surveillance. Vehicles monitor the environment, classify the events, e.g., license plate readings, and exchange metadata with neighbors in a peer-to-peer fashion, creating a totally distributed index of all the events to be accessed by mobile users. For instance, the Department of Transportation extracts traffic congestion statistics; the Department of Health monitors pollutants; and law enforcement agents carry out forensic investigations. Mobile, vehicular sensing differs significantly from conventional wireless sensing. The vehicles have no strict limits on battery life, processing power and storage capabilities. Moreover, they can generate an enormous volume of data, making conventional sensor harvesting inadequate. In this talk we first review popular V2V applications and then introduce MobEyes, a middleware solution that diffuses data summaries to create a distributed index of the sensed data. We discuss the challenges of designing and maintaining such a system, from information dissemination to harvesting, routing and privacy.
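
As a rough, invented toy model of the dissemination-and-harvesting idea (not the MobEyes protocol itself): each vehicle keeps compact summaries of its own sensed events, opportunistic encounters spread the summaries epidemically, and a harvesting agent that meets only a few vehicles can still reconstruct most of the distributed index.

```python
# Invented toy: epidemic diffusion of event summaries among vehicles,
# followed by harvesting from a small sample of them.
import random

class Vehicle:
    def __init__(self, vid):
        self.vid = vid
        self.summaries = {f"event-{vid}-{i}" for i in range(3)}  # own sensed events

    def encounter(self, other):
        merged = self.summaries | other.summaries   # exchange missing summaries
        self.summaries = set(merged)
        other.summaries = set(merged)

vehicles = [Vehicle(v) for v in range(10)]
for _ in range(40):                                 # random pairwise encounters
    a, b = random.sample(vehicles, 2)
    a.encounter(b)

harvested = set()
for v in random.sample(vehicles, 3):                # harvester meets only 3 vehicles
    harvested |= v.summaries
print(f"harvested {len(harvested)} of {10 * 3} event summaries")
```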

Biography
Dr. Mario Gerla is Professor in the Computer Science Department at UCLA. Dr. Gerla received his Engineering degree from the Politecnico di Milano, Italy, in 1966 and the M.S. and Ph.D. degrees from UCLA in 1970 and 1973. He became an IEEE Fellow in 2002. At UCLA, he was part of a small team that developed the early ARPANET protocols under the guidance of Prof. Leonard Kleinrock. He worked at Network Analysis Corporation, New York, from 1973 to 1976, transferring the ARPANET technology to several government and commercial networks. He joined the faculty of the Computer Science Department at UCLA in 1976, where he is now Professor. At UCLA he has designed and implemented some of the most popular and cited network protocols for ad hoc wireless networks, including distributed clustering, multicast (ODMRP and CODECast) and transport (TCP Westwood), under DARPA and NSF grants. He led the $12M, six-year ONR MINUTEMAN project, designing the next-generation scalable airborne Internet for tactical and homeland defense scenarios. He is now leading two advanced wireless network projects under ARMY and IBM funding. In the commercial network scenario, with NSF and industry sponsorship, he has led the development of vehicular communications for safe navigation, urban sensing and location awareness. A parallel research activity covers personal P2P communications, including cooperative, networked medical monitoring (see www.cs.ucla.edu/NRL for recent publications).

 

9 FEBRUARY 2009
Speaker: Sanjeev Arora, Professor of Computer Science, Princeton University
Title: Semidefinite programming and approximation algorithms: A survey of recent results
Host School: Duke
Duke Host: Kamesh Munagala (kamesh at cs.duke.edu)
UNC Host: Sanjoy Baruah (baruah at cs.unc.edu)
NCSU Hosts: Rudra Dutta (dutta at csc.ncsu.edu)

YouTube video of talk

Abstract
Computing approximately optimal solutions is an attractive way to cope with NP-hard optimization problems. In the past decade or so, semidefinite programming, or SDP (a form of convex optimization that generalizes linear programming), has emerged as a powerful tool for designing such algorithms, and the last few years have seen a profusion of results (worst-case algorithms, average-case algorithms, impossibility results, etc.).
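
A standard illustration of the technique, though not necessarily one of the results covered in the talk, is the Goemans-Williamson SDP relaxation of MAX-CUT:

```latex
% Goemans-Williamson relaxation of MAX-CUT on a graph G = (V, E).
% The integer program
%   max  \sum_{(i,j) \in E} (1 - x_i x_j)/2,   x_i \in \{-1, +1\},
% is relaxed by replacing each x_i with a unit vector v_i:
\[
  \max \sum_{(i,j)\in E} \frac{1 - v_i \cdot v_j}{2}
  \quad\text{subject to}\quad v_i \in \mathbb{R}^{|V|},\ \|v_i\| = 1 .
\]
% Equivalently, this is an SDP over the Gram matrix X (X_{ij} = v_i \cdot v_j,
% X \succeq 0, X_{ii} = 1), solvable to any desired accuracy in polynomial time.
% Rounding the vectors with a random hyperplane yields a cut whose expected value
% is at least about 0.878 times the SDP optimum, hence 0.878 times the true MAX-CUT.
```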

This talk will be a survey of this area and these recent results. We will see that analysing semidefinite programs draws upon ideas from a variety of other areas, and has also led to new results in mathematics. At the end we will touch upon work that greatly improves the running time of SDP-based algorithms, making them potentially quite practical.

The survey will be essentially self-contained.

Biography
Sanjeev Arora is Professor of Computer Science at Princeton University and works in computational complexity theory, approximation algorithms for NP-hard problems, geometric algorithms, and probabilistic algorithms. He has received the ACM Doctoral Dissertation Award, the SIGACT-EATCS Gödel Prize, and the Packard Fellowship.

 

23 MARCH 2009
Speaker: Marc Levoy, Professor, Computer Science and Electrical Engineering, Stanford University.
Title: Light field photography and microscopy
Host School: UNC
Duke Host:
UNC Host: Henry Fuchs (fuchs at cs.unc.edu)
NCSU Host:

YouTube video of talk

Abstract
Light field photography is a technique for recording light intensity as a function of position and direction in a 3D scene. Unlike conventional photographs, light fields permit manipulation of viewpoint and focus after the imagery has been recorded. At Stanford we have built a number of devices for capturing light fields, including (1) an array of 128 synchronized video cameras, (2) a handheld camera in which a microlens array has been inserted between the main lens and sensor plane, and (3) a microscope in which a similar microlens array has been inserted at the intermediate image plane.
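
As a rough sketch of what post-capture refocusing involves (an invented toy example, not the Stanford devices' actual processing): a 4D light field L[u, v, s, t] can be synthetically refocused by shifting each sub-aperture view in proportion to its (u, v) coordinate and averaging, where the shift scale plays the role of the chosen focal depth.

```python
# Invented toy: shift-and-add refocusing of a 4D light field with numpy.
import numpy as np

def refocus(lf, shift_per_view):
    """lf has shape (U, V, S, T); returns a refocused (S, T) image."""
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(shift_per_view * (u - U // 2)))
            dv = int(round(shift_per_view * (v - V // 2)))
            # np.roll is a crude stand-in for proper sub-pixel resampling
            out += np.roll(lf[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

lf = np.random.rand(5, 5, 64, 64)          # synthetic light field
near = refocus(lf, shift_per_view=1.0)     # focus at one depth
far = refocus(lf, shift_per_view=-1.0)     # focus at another
print(near.shape, far.shape)
```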

The third device permits us to capture light fields of microscopic biological (or industrial) objects in a single snapshot. Although diffraction limits the product of spatial and angular resolution in these light fields, we can nevertheless produce useful perspective flyarounds and 3D focal stacks from them. Since microscopes are inherently orthographic devices, perspective flyarounds represent a new way to look at microscopic specimens. Focal stacks are not new, but manual techniques for capturing them are time-consuming and hence not applicable to moving or light-sensitive specimens. Applying 3D deconvolution to these focal stacks, we can produce a set of cross sections, which can be visualized using volume rendering. Ours is the first technology (of which we are aware) that can produce volumetric models from a single photograph.

In this talk, I will describe a prototype light field microscope and show perspective views, focal stacks, and reconstructed volumes for a variety of biological specimens. I will also survey some promising directions for this technology. For example, by introducing a second microlens array and a video projector, we can control the light field arriving at a specimen as well as the light field leaving it. Potential applications of this idea include microscope scatterometry (measuring reflectance as a function of incident and reflected angle) and “designer illumination” (illuminating one part of a microscopic object while avoiding illuminating another).

Biography
Marc Levoy is a Professor of Computer Science and (jointly) Electrical Engineering at Stanford University. He received a Bachelor's and Master's in Architecture from Cornell University in 1976 and 1978, and a PhD in Computer Science from the University of North Carolina at Chapel Hill in 1989. In the 1970s Levoy worked on computer animation, developing an early computer-assisted cartoon animation system. This system was used by Hanna-Barbera Productions from 1983 until 1996 to produce The Flintstones, Scooby Doo, and other shows. In the 1980s Levoy worked on volume rendering, a family of techniques for displaying sampled three-dimensional functions, for example computed tomography (CT) or magnetic resonance (MR) data. In the 1990s he worked on technology and algorithms for digitizing three-dimensional objects. This led to the Digital Michelangelo Project, in which he and a team of researchers spent a year in Italy digitizing the statues of Michelangelo using laser scanners. His current interests include light field sensing and display, computational photography, and applications of computer graphics in microscopy and biology. Awards: Charles Goodwin Sands Medal for best undergraduate thesis (1976), National Science Foundation Presidential Young Investigator (1991), ACM SIGGRAPH Computer Graphics Achievement Award (1996), ACM Fellow (2007). Recent professional service: Papers Chair of SIGGRAPH 2007.

 

6 APRIL 2009

Speaker: David Patterson, Director, U.C. Berkeley Parallel Computing Laboratory (Par Lab) and Director, U.C. Berkeley Reliable Adaptive Distributed Systems Laboratory (RAD Lab)
Title: The Parallel Revolution Has Started: Are You Part of the Solution or Part of the Problem?
Host School: UNC
Duke Host: Alvy Lebeck (alvy at cs.duke.edu)
UNC Host: Dinesh Manocha (dm at cs.unc.edu)
NCSU Host: Vincent Freeh (vin at cs.ncsu.edu)

YouTube video of talk

Abstract 
This talk will explain
* Why the La-Z-Boy era of programming is over
* The implications to the IT industry if the parallel revolution should fail
* The opportunities and pitfalls of this revolution
* What Berkeley is doing to try to be near the forefront of this revolution
Power to the (manycore) processors!

(If time is available, the talk will explain a new intuitive visual performance model called “Roofline,” which will appear in the April 2009 CACM.)
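
For reference, the Roofline model bounds attainable performance with a single formula; the sample machine numbers in the comments below are invented for illustration.

```latex
% The Roofline bound (standard formulation from the CACM article):
\[
  \text{Attainable GFLOP/s} \;=\;
  \min\Bigl(\text{peak GFLOP/s},\;
            \text{peak memory bandwidth (GB/s)} \times \text{operational intensity (FLOPs/byte)}\Bigr)
\]
% Example with made-up numbers: on a machine with 100 GFLOP/s peak compute and
% 25 GB/s peak memory bandwidth, any kernel with operational intensity below
% 4 FLOPs/byte is memory-bound; above that "ridge point" it is compute-bound.
```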

Biography 
David Patterson was the first in his family to graduate from college, and he enjoyed it so much that he didn't stop until he received a PhD from UCLA in 1976. He then headed north to U.C. Berkeley. He spent 1979 at DEC working on the VAX minicomputer, an experience that inspired him and his colleagues to later develop the Reduced Instruction Set Computer (RISC) and led Sun Microsystems to recruit him in 1984 to start the SPARC architecture. In 1987, Patterson and colleagues tried building dependable storage systems from the new PC disks, which led to the Redundant Array of Inexpensive Disks (RAID). He spent 1989 working on the CM-5 supercomputer. Patterson and colleagues later tried building a supercomputer using standard desktop computers and switches; the resulting Network of Workstations (NOW) project led to cluster technology used by many Internet services. In the past, he served as Chair of Berkeley's CS Division, Chair of the Computer Research Association, and President of the Association for Computing Machinery.

All this resulted in 200 papers, 5 books, and about 30 of honors, some shared with friends, including election to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He was named Fellow of the Computer History Museum and both AAAS organizations. From the ACM, where as a fellow, he received the SIGARCH Eckert-Mauchly Award, the SIGMOD Test of Time Award, the Distinguished Service Award, and the Karlstrom Outstanding Educator Award. He is also a fellow at the IEEE, where he received the Johnson Information Storage Award, the Undergraduate Teaching Award, the Mulligan Education Medal. Finally, he shared the IEEE the von Neumann Medal and the NEC C&C Prize with his textbook co-author, John Hennessy.