
19 NOVEMBER 2007
Speaker: Jonathan Shewchuk, Associate Professor in Computer Science, University of California at Berkeley
Title: Tetrahedral Meshes with Good Dihedral Angles
Host School: UNC
Duke Host: Carlo Tomasi (tomasi at cs.duke.edu)
UNC Host: Jack Snoeyink (snoeyink at cs.unc.edu)
NCSU Host: Christopher Healey (healey at csc.ncsu.edu)

YouTube video of talk

Abstract
A central tool in scientific computing and computer animation is the finite element method, whose success depends on the quality of the meshes used to model the complicated underlying geometries.  We develop two new methods for creating high-quality tetrahedral meshes:  one with guaranteed good dihedral angles, and one that in practice produces far better dihedral angles than any prior method.  The isosurface stuffing algorithm fills an isosurface with a uniformly sized tetrahedral mesh whose dihedral angles are bounded between 10.7 degrees and 165 degrees.  The algorithm is whip fast, numerically robust, and easy to implement because, like Marching Cubes, it generates tetrahedra from a small set of precomputed stencils.  Our angle bounds are guaranteed by a computer-assisted proof.  Our second contribution is a mesh improvement method that uses optimization-based smoothing, topological transformations, and vertex insertions and deletions to achieve extremely high quality tetrahedra.
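For concreteness, the dihedral angles in question are the six interior angles between pairs of faces of a tetrahedron. The small Python sketch below (not from the talk; it assumes NumPy is available) computes them for an arbitrary tetrahedron, so the quoted 10.7-to-165-degree bounds can be checked on any mesh element.

import itertools
import math
import numpy as np

def dihedral_angles(tet):
    """Return the six interior dihedral angles (in degrees) of a tetrahedron,
    given as four corner points (a 4x3 array-like)."""
    tet = np.asarray(tet, dtype=float)
    angles = []
    for i, j in itertools.combinations(range(4), 2):        # the six edges
        k, l = (v for v in range(4) if v not in (i, j))     # the two opposite corners
        edge = tet[j] - tet[i]
        edge /= np.linalg.norm(edge)
        # Project each opposite corner into the plane perpendicular to the edge;
        # the angle between the projections is the dihedral angle along that edge.
        vk = (tet[k] - tet[i]) - np.dot(tet[k] - tet[i], edge) * edge
        vl = (tet[l] - tet[i]) - np.dot(tet[l] - tet[i], edge) * edge
        angles.append(math.degrees(math.atan2(np.linalg.norm(np.cross(vk, vl)),
                                              np.dot(vk, vl))))
    return angles

# Sanity check: a regular tetrahedron has all dihedral angles near 70.53 degrees,
# comfortably inside the 10.7-165 degree bounds quoted above.
regular = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
print([round(a, 2) for a in dihedral_angles(regular)])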

Biography
Jonathan Shewchuk is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley.  He is best known for his Triangle software for high-quality triangular mesh generation, which won the 2003 James Hardy Wilkinson Prize in Numerical Software, and his “Introduction to the Conjugate Gradient Method Without the Agonizing Pain.”

 

14 JANUARY 2008
Speaker: Éva Tardos, Professor and Chair, Department of Computer Science, Cornell University
Title: Games in Networks
Host School: Duke
Duke Host: Kamesh Munagala
UNC Host: Jack Snoeyink (snoeyink at cs.unc.edu)
NCSU Hosts:

YouTube video of talk

Abstract
Many large networks operate and evolve through the interactions of large numbers of diverse, self-interested participants. Such networks play a fundamental role in many domains, ranging from communication networks to social networks. In light of these competing forces, it is surprising how efficient these networks are. It is an exciting challenge to understand the success of these networks in game-theoretic terms: what principles of interaction lead selfish participants to form such efficient networks? In this talk we present a number of network formation and routing games. We focus on simple games that have been analyzed in terms of the efficiency loss that results from selfishness. In each setting our goal is to quantify the degradation in solution quality caused by the selfish behavior of users, comparing the selfish outcome to a centrally designed optimum, or comparing outcomes with different levels of cooperation.
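As a concrete, textbook-style illustration of this kind of efficiency loss (not necessarily an example used in the talk), the short Python sketch below works through Pigou's two-link routing network, where the selfish outcome costs 4/3 of the social optimum.

# Pigou's example: one unit of traffic from s to t over two parallel links.
# Link A has latency x when a fraction x of the traffic uses it; link B always has latency 1.
def total_cost(x):
    return x * x + (1.0 - x) * 1.0   # A-users pay x each, B-users pay 1 each

nash = total_cost(1.0)               # selfish equilibrium: link A never looks worse than B,
                                     # so all traffic piles onto A and total cost is 1.0
opt = min(total_cost(i / 1000.0) for i in range(1001))   # social optimum ~ 0.75 at x = 0.5
print(nash / opt)                    # ratio ("price of anarchy") = 4/3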

Biography
Éva Tardos is the Jacob Gould Schurman Professor of Computer Science at Cornell University. She was born and educated in Hungary, and received her Ph.D. from Eötvös University in Budapest in 1984. After teaching at Eötvös and MIT, she joined Cornell in 1989, and she currently chairs the Computer Science department there. She is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, a Fellow of the ACM and of INFORMS, and has been a Guggenheim Fellow, a Packard Fellow, a Sloan Fellow, and an NSF Presidential Young Investigator; she received the Fulkerson Prize in 1988 and the Dantzig Prize in 2006. She is Editor-in-Chief of the SIAM Journal on Computing and an editor of several other journals, including the Journal of the ACM and Combinatorica.

Tardos’s research interests are in algorithms and algorithmic game theory. Her work focuses on the design and analysis of efficient methods for combinatorial-optimization problems on graphs and networks, with an emphasis on fast combinatorial algorithms that provide provably optimal or close-to-optimal results. She is best known for her work on network-flow algorithms and on approximation algorithms for network flow, cut, and clustering problems. Her recent work focuses on algorithmic game theory, an emerging area concerned with designing systems and algorithms for selfish users.

 

11 FEBRUARY 2008
Speaker: Kathy Yelick, Professor, Electrical Engineering and Computer Sciences Department, UC Berkeley and National Energy Research Scientific Computing Center, Lawrence Berkeley National Lab
Title: Programming Models for Petascale
Host School: NCSU
Duke Host: Xiaobai Sun
UNC Host: Ketan Mayer-Patel (kmp at cs.unc.edu)
NCSU Host: Xiaosong Ma (ma at cs.ncsu.edu)

YouTube video of talk

Abstract
Petascale systems will soon be available to the computational science community at multiple sites. These systems will represent a variety of architectural models, but they share one common component: an increasing reliance on multicore technology as the building block for these machines. This sea change towards on-chip parallelism brings into question the message-passing programming model that has dominated high-end programming for the past decade. In this talk I will describe an alternative to message passing called Partitioned Global Address Space (PGAS) languages, describe some of their performance and scaling advantages, and advocate a new notion of high-end programming in which software is designed up front to be adaptable to current and future systems.

Biography
Katherine Yelick is the Director of the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and a Professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley. She is the author or co-author of two books and more than 85 refereed technical papers on parallel languages, compilers, algorithms, libraries, architecture, and storage. She co-invented the UPC and Titanium languages and demonstrated their applicability across architectures through the use of novel runtime and compilation methods. She also developed techniques for self-tuning numerical libraries, such as the OSKI sparse matrix library, which automatically adapt the code to machine properties. Her work includes performance analysis and modeling as well as optimization techniques for memory hierarchies, multicore processors, communication libraries, and processor accelerators. She earned her Ph.D. in Electrical Engineering and Computer Science from MIT and has been a professor of Electrical Engineering and Computer Sciences at UC Berkeley since 1991 with a joint research appointment at Berkeley Lab since 1996. She has received multiple research awards, as well as teaching awards from both UC Berkeley and MIT.

 

18 FEBRUARY 2008
Speaker: Frances Allen, IBM Fellow Emerita
Title: Compilers and Multi-Core Computing Systems
Host School: UNC
Duke Host: Alvy Lebeck
UNC Host: Diane Pozefsky (pozefsky at cs.unc.edu)
NCSU Hosts:

YouTube video of talk

Abstract
Multi-core computers are ushering in a new era of parallelism everywhere. As more cores (and parallelism) are added, the potential performance of the hardware will continue to increase. But how will users and applications take advantage of all that parallelism? To address this question, the speaker will first give her personal, historical perspective on languages and compilers for high-performance systems, and then discuss the challenges and opportunities of universal parallelism.

Biography
Fran Allen is an IBM Fellow Emerita at the T. J. Watson Research Laboratory with a specialty in compilers and program optimization for high performance computers. This work led to Allen being named the recipient of ACM’s 2006 Turing Award “For pioneering contributions to the theory and practice of optimizing compiler techniques that laid the foundation for modern optimizing compilers and automatic parallel execution.”

She is a member of the American Philosophical Society and the National Academy of Engineering, and is a Fellow of the American Academy of Arts and Sciences, ACM, IEEE, and the Computer History Museum. She has served on numerous national technology boards, including CISE at the National Science Foundation and the CSTB of the National Research Council. Her many awards and honors include honorary doctorates from the University of Alberta (1991), Pace University (1999), and the University of Illinois at Urbana (2004).

Fran is an active mentor, advocate for technical women in computing, environmentalist, and explorer.

25 FEBRUARY 2008
Speaker: Joseph M. Hellerstein, Professor, EECS Computer Science Division, UC Berkeley
Title: Declarative Networking: What is Next
Host School: Duke
Duke Host: Shivnath Babu
UNC Host: Wei Wang (weiwang at cs.unc.edu)
NCSU Host:

YouTube video of talk

Abstract
Declarative languages allow programmers to say “what” they want, without worrying about the details of “how” to achieve it. These kinds of languages revolutionized data management decades ago (SQL, spreadsheets), but have had limited success in other aspects of computing. The story seems to be changing in recent years, however. One new chapter is work that my colleagues and I have been pursuing on the design and implementation of declarative languages and runtime systems for network protocol specification. Distributed systems and networking appear to be surprisingly natural domains for declarative specifications, and — given recent interest in revisiting Internet architecture from scratch — these domains are ripe for a new programming methodology. The results of our first phase of research have been exciting: we have implemented complex networking infrastructure in roughly 100x less code than traditional implementations, and our programs often match very closely (sometimes line for line) with pseudocode published by protocol inventors. As the work on core declarative networking has matured, a number of groups have begun pursuing related applications for declarative languages, including our own emerging work on hybrid protocol synthesis, distributed machine learning, and language metacompilation, as well as initial work by others on replication systems, security, distributed debugging, consensus protocols, and modular robotics. This talk will introduce the concepts of Declarative Networking, the state of the research agenda today, and new directions being pursued.
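To give a flavor of the style, the canonical introductory example in this line of work is a two-rule, Datalog-like "path" program over link facts. The toy Python fixpoint below only illustrates that evaluation model on made-up links; it is not the declarative-networking runtime described in the talk.

def paths(links):
    """Naive bottom-up evaluation of the two-rule, Datalog-style program
         path(S, D) :- link(S, D).
         path(S, D) :- link(S, Z), path(Z, D).
       The rules are applied repeatedly until no new facts appear (a fixpoint)."""
    path = set(links)                          # rule 1: every link is a path
    changed = True
    while changed:
        changed = False
        for (s, z) in links:                   # rule 2: prepend a link to a known path
            for (z2, d) in list(path):
                if z == z2 and (s, d) not in path:
                    path.add((s, d))
                    changed = True
    return path

links = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(paths(links)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]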

Biography
Joseph M. Hellerstein is a Professor of Computer Science at the University of California, Berkeley, whose research focuses on data management and networking. His work has been recognized via awards including an Alfred P. Sloan Research Fellowship, MIT Technology Review’s inaugural TR100 list, and two ACM-SIGMOD “Test of Time” awards. Key ideas from his research have been incorporated into commercial and open-source database software released by IBM, Oracle, and PostgreSQL. He has also held industrial posts including Director of Intel Research Berkeley, and Chief Scientist of Cohera Corporation.

 

CANCELLED — 17 MARCH 2008
Speaker: Fred Schneider, Professor, Computer Science, Cornell University
Title: Revisiting Independence for Trustworthiness
Host School: NCSU
Duke Host: Landon Cox (lpcox at cs.duke.edu)
UNC Host: Rob Fowler
NCSU Hosts: Douglas Reeves (reeves at eos.ncsu.edu), Peng Ning (pning at ncsu.edu), Ting Yu (tyu at ncsu.edu), Purushothaman Iyer (purush at ncsu.edu), Annie Anton (aianton at mindspring.com)

Abstract
A trustworthy service must tolerate attacks and failures. Replication is the standard approach for implementing integrity and availability, but independence assumptions for replicas are easily invalidated by attacks. Research directed at bridging the gap will be discussed. In particular, engineering issues will be explored in connection with trustworthy distributed services that have been prototyped. And foundational issues, including a characterization of what abstract classes of attacks can be tolerated, will also be discussed.

Biography
Fred B. Schneider is a professor in Cornell’s Computer Science Department and chief scientist of the TRUST (Team for Research in Ubiquitous Secure Technology) NSF Science and Technology Center, a collaboration of UC Berkeley, Carnegie Mellon, Cornell, Stanford, and Vanderbilt.

Schneider has a B.S. from Cornell (’75), an M.S. and Ph.D. (’78) from Stony Brook University, and a D.Sc. (honoris causa) from the University of Newcastle upon Tyne (’03). He is a Fellow of AAAS and ACM, a senior member of IEEE, and was named Professor-at-Large at the University of Tromsø (Norway) in 1996.

Schneider is author of the graduate text ‘On Concurrent Programming’ and is co-author (with David Gries) of the undergraduate text ‘A Logical Approach to Discrete Math’, in addition to having chaired the National Research Council’s study committee on information systems trustworthiness and edited its final report ‘Trust in Cyberspace’.

Co-managing editor of Springer-Verlag’s Texts and Monographs in Computer Science, Schneider is also associate editor-in-chief of ‘IEEE Security and Privacy’, and serves on several other journal editorial boards. He is a member of industrial technical advisory boards for FAST Search and Transfer and for Fortify Software, and he chairs Microsoft’s Trustworthy Computing Academic Advisory Board. Schneider serves on the National Research Council’s CSTB, the NIST Information Security and Privacy Board, the CRA board of directors, and the CCC Council.

Schneider’s research concerns problems associated with making distributed and concurrent systems trustworthy. His early work was in formal methods and methodologies for concurrent programming and in protocols for fault-tolerance. More recently, his attention has turned to topics in computer security.

 

7 APRIL 2008
Speaker: Russell H. Taylor, Professor and Director, CISST ERC, Johns Hopkins University
Title: Medical Robotics and Computer-Integrated Surgery
Host School: UNC
Duke Host: Bruce Donald 
UNC Host: Ming Lin (lin at cs.unc.edu)
NCSU Host: Robert Fornaro (fornaro at ncsu.edu)

YouTube video of talk

Abstract
For a PDF of the abstract, please visit here

Biography
Russell H. Taylor received a B.E.S. degree from The Johns Hopkins University in 1970 and a Ph.D. in Computer Science from Stanford in 1976. He joined IBM Research in 1976, where he developed the AML robot language. Following a two-year assignment in Boca Raton, he managed robotics and automation technology research activities at IBM Research from 1982 until returning to full-time technical work in late 1988. From March 1990 to September 1995, he was manager of Computer Assisted Surgery. In September 1995, Dr. Taylor moved to Johns Hopkins University as a Professor of Computer Science. He is currently a Professor of Computer Science, with joint appointments in Radiology and Mechanical Engineering, and is Director of the NSF Engineering Research Center for Computer-Integrated Surgical Systems and Technology at Johns Hopkins. His research interests include robot systems, programming languages, model-based planning, and (most recently) the use of imaging, model-based planning, and robotic systems to augment human performance in surgical procedures. He is Editor Emeritus of the IEEE Transactions on Robotics and Automation, a Fellow of the IEEE, and a member of various honorary societies, panels, editorial boards, and program committees.

 

CANCELLED — 14 APRIL 2008
Speaker: David Patterson, Professor of Computer Science, UC Berkeley
Title: The Parallel Computing Landscape: A Berkeley View 2.0
Host School: UNC
Duke Host: Alvy Lebeck
UNC Host: Dinesh Manocha (dm at cs.unc.edu)
NCSU Host:

Abstract
In December 2006 we published a broad survey of the issues for the whole field concerning the multicore/manycore sea change (see view.eecs.berkeley.edu). We view the ultimate goal as being able to productively create efficient and correct software that smoothly scales when the number of cores per chip doubles biennially. This talk covers the specific research agenda that a large group of us at Berkeley are going to follow (see parlab.eecs.berkeley.edu).

To take a fresh approach to the longstanding parallel computing problem, our research agenda will be driven by compelling applications developed by domain experts. Past efforts to resolve these challenges have often been driven “bottom-up” from the hardware, with applications an afterthought. We will focus on exciting new applications that need much more computing horsepower to run well, rather than on legacy programs that already run well on today’s computers. Our applications are in the areas of personal health, image retrieval, music, speech understanding, and browsers.

The development of parallel software is the heart of the research agenda. The task will be divided into two layers: an efficiency layer that aims at low overhead and is intended for the roughly 10 percent of programmers who are experts in parallelism, and a productivity layer for the rest of the programming community, including domain experts, that reuses the parallel software developed at the efficiency layer. Key to this approach is a layer of libraries and programming frameworks centered on the 13 computational bottlenecks (“dwarfs”) that we identified in the Berkeley View report. We will also create a Composition and Coordination Language to make it easier to compose these components. Finally, we will rely on autotuning to map the software efficiently to a particular parallel computer. Past attempts, in contrast, have often relied on a single programming abstraction and language for everyone and on parallelizing compilers.
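As a rough illustration of what autotuning means here (and not one of the Berkeley autotuners themselves), the Python sketch below empirically searches a small space of block sizes for a cache-blocked kernel and keeps whichever is fastest on the machine at hand; the kernel and candidate sizes are made up for the example.

import time
import numpy as np

def blocked_transpose(a, block):
    """Cache-blocked matrix transpose; 'block' is the tuning parameter."""
    n = a.shape[0]
    out = np.empty_like(a)
    for i in range(0, n, block):
        for j in range(0, n, block):
            out[j:j + block, i:i + block] = a[i:i + block, j:j + block].T
    return out

def autotune(n=2048, candidates=(16, 32, 64, 128, 256)):
    """Time each candidate block size on this machine and keep the fastest."""
    a = np.random.rand(n, n)
    timings = {}
    for block in candidates:
        start = time.perf_counter()
        blocked_transpose(a, block)
        timings[block] = time.perf_counter() - start
    return min(timings, key=timings.get), timings

best, timings = autotune()
print("fastest block size on this machine:", best)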

The role of the operating system and the architecture in this project is to support software and applications in achieving the ultimate goal, rather than to follow the conventional approach of fixing the environment in which parallel software must survive. Examples include primitives such as thin hypervisors and libraries for the operating system, and hardware support for partitioning and fast barrier synchronization.

We will prototype the hardware of the future using field programmable gate arrays (FPGAs), which we believe are fast enough to be interesting to parallel software researchers yet flexible enough to “tape out” new designs every day while being cheap enough that university researchers can afford to construct systems containing hundreds of processors. This prototyping infrastructure is called RAMP (Research Accelerator for Multiple Processors), which is being developed by a consortium of universities and companies (see ramp.eecs.berkeley.edu).

Biography
David Patterson has been Professor of Computer Science at the University of California, Berkeley since 1977, after receiving his A.B. (1969), M.S. (1970), and Ph.D. (1976) from UCLA. He is one of the pioneers of RISC, RAID, and NOW, which are widely used. Past chair of the Computer Science Department at U.C. Berkeley and the Computing Research Association, he was elected President of the Association for Computing Machinery (ACM) for 2004 to 2006 and served on the Information Technology Advisory Committee for the U.S. President (PITAC) from 2003 to 2005.

He co-authored five books, including two on computer architecture with John L. Hennessy: Computer Architecture: A Quantitative Approach (4 editions, latest is ISBN 0-12-370490-1) and Computer Organization and Design: the Hardware/Software Interface (3 editions; latest is ISBN 1-55860-604-1). They have been widely used as textbooks for graduate and undergraduate courses since 1990.

His work has been recognized by about 25 awards for research, teaching, and service, including Fellow of ACM and IEEE and election to the National Academy of Engineering. In 2005 he shared Japan’s Computer & Communication award with Hennessy and was named to the Silicon Valley Engineering Hall of Fame. In 2006 he was elected to the American Academy of Arts and Sciences and the National Academy of Sciences and he received the Distinguished Service Award from the Computing Research Association. In 2007 he was named a Fellow of the Computer History Museum and a Fellow of the American Association for the Advancement of Science.

David Patterson’s current projects are the RAD Lab: Reliable Adaptive Distributed systems, RAMP: Research Accelerator for Multiple Processors, and The Berkeley View on Parallel Computing Research.

 

21 APRIL 2008
Speaker: Wayne Grover, Professor, Dept. of Electrical and Computer Engineering, University of Alberta
Title: Transport Network Survivability and p-Cycles
Host School: NCSU
Duke Host: Jeff Chase (chase at cs.duke.edu)
UNC Host: Jasleen Kaur (jasleen at cs.unc.edu)
NCSU Host: Rudra Dutta (dutta at csc.ncsu.edu)

YouTube video of talk

Abstract
This lecture will first discuss the nature and importance of the transport network survivability problem in general and survey the key concepts behind most currently known approaches to the problem, including automatic protection switching systems, line- and path-switched rings, distributed span and path mesh restoration, and shared-backup path protection. The relatively new concept of p-cycles will then be treated in some depth, including some of the interesting history of p-cycles, the operational concept for span- and path-protecting p-cycles, the basic design problem for p-cycles, and the surprising combination of attractive features that p-cycles provide. Recent research on multiple-failure survivability, multiple service priorities, dynamic provisioning, node protection, network evolution, design methods, and self-organization will be discussed in closing.
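One property that makes p-cycles attractive, alluded to above, is that a single preconfigured cycle of spare capacity protects both its own spans (one restoration route, the long way around the cycle) and "straddling" spans whose endpoints lie on the cycle (two restoration routes, one along each arc). The Python sketch below is only a simplification of that idea, assuming a single p-cycle and ignoring capacities; it classifies spans by the protection they receive and is not Grover's design method.

def pcycle_protection(cycle, spans):
    """Classify how a single p-cycle protects each span.
       cycle: list of nodes, e.g. ['A', 'B', 'C', 'D', 'E'] meaning A-B-C-D-E-A.
       spans: iterable of (u, v) spans in the mesh network.
       Returns the number of restoration routes the cycle offers for each span."""
    on_cycle = {frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
                for i in range(len(cycle))}
    on_nodes = set(cycle)
    protection = {}
    for u, v in spans:
        s = frozenset((u, v))
        if s in on_cycle:
            protection[(u, v)] = 1        # on-cycle span: reroute the long way around
        elif u in on_nodes and v in on_nodes:
            protection[(u, v)] = 2        # straddling span: both arcs of the cycle work
        else:
            protection[(u, v)] = 0        # an endpoint is off the cycle: unprotected
    return protection

print(pcycle_protection(["A", "B", "C", "D", "E"],
                        [("A", "B"), ("B", "D"), ("C", "F")]))
# {('A', 'B'): 1, ('B', 'D'): 2, ('C', 'F'): 0}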

Biography
Wayne Grover holds a B.Eng. (EE) from Carleton University, Ottawa, an M.Sc. (EE Science) from the University of Essex, England, and a Ph.D. (EE) from the University of Alberta. Dr. Grover holds patents on 26 topics, each issued in several countries, and has 58 journal publications, three book chapters, and over 100 technical reports, seminars, and conference papers. In August 2003 his book Mesh-based Survivable Networks: Options and Strategies for Optical, MPLS, SONET and ATM Networking was published by Prentice-Hall (841 pages plus web-based appendices).

Three of his research papers, and his Ph.D. thesis on Self-Healing Networks, have become “highly cited” in different technical areas, but he is most widely recognized for work in restorable network design and operation, including SONET, ATM, DWDM, and IP/MPLS networks. Following his decade-long development and advocacy of the concepts of self-healing and self-organizing transport networks, he is considered a founding inventor in this field. In 1999 he received the IEEE Baker Prize Paper Award for his paper “Self-organizing Broadband Transport Networks” in the October 1997 Proceedings of the IEEE.

Other research contributions are in the areas of high-speed synchronization, precise time transfer, wireless traffic analysis, rate-adaptive subscriber loops, radio-location in wireless systems, and availability analysis of transport networks. Dr. Grover was an NSERC E.W.R. Steacie Fellow for 2001-2002. Previously he was the McCalla Professor in Engineering and recipient of the Martha Cook-Piper Research Prize (both at the U. of A.), the “Smart City” Award (City of Edmonton), and a Technology Commercialization Award from TRLabs (1997) for the licensing of technology to industry. In 2002 he was made an IEEE Fellow (“for contributions to survivable and self-organizing broadband transport networks”) and a Fellow of the Engineering Institute of Canada. In 2003 he served as General Chair for the 4th International Workshop on Design of Reliable Networks (DRCN 2003), Banff, Alberta, October 19-22, 2003.