
28 September 2015

Speaker: Johannes Gehrke, Distinguished Engineer, Microsoft & Tisch University Professor, Cornell University
Title: Deferring Transactions for Fun and Profit
Host School: Duke
Location: LSRC D106
Host: Ashwin Machanavajjhala (ashwin at cs.duke.edu)

Abstract

Transactions have for decades provided the gold standard for writing data-driven applications. I will describe two scenarios where we want transactions but seemingly cannot achieve them. In our first model, two or more people want to cooperate to select a seat or a class, e.g., to make joint travel arrangements or jointly enroll in classes. But a transaction gives each user the illusion of having the database to itself, preventing exactly the information flow that cooperation requires. We show how to slightly change transactions to enable efficient cooperation.

In our second model, we avoid cross-data-center latencies when committing transactions in a distributed or replicated database system, such as one spanning data centers in Europe and the US. We allow sites to become inconsistent during execution, as long as this inconsistency is bounded, thus avoiding costly round trips.
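
The bounded-inconsistency idea can be illustrated with a toy escrow-style counter: each site holds a local budget it may spend without coordinating, so cross-site disagreement is bounded by the budgets. This is only an illustrative sketch under invented names and numbers, not the protocol from the talk:

```python
class EscrowSite:
    """Toy model of one replica of a shared counter.

    Each site may decrement up to `slack` units locally without
    contacting the other site, which bounds how far the replicas
    can drift apart between synchronizations.
    """
    def __init__(self, slack):
        self.slack = slack   # units this site may spend locally
        self.spent = 0       # local decrements not yet reconciled

    def try_decrement(self, n):
        # Fast path: stay within the locally reserved budget,
        # so no cross-data-center round trip is needed.
        if self.spent + n <= self.slack:
            self.spent += n
            return True
        return False         # would exceed the bound: must coordinate

# A counter of 100 seats split as 50 units of slack per site.
eu, us = EscrowSite(50), EscrowSite(50)
assert eu.try_decrement(30)      # commits locally, no coordination
assert us.try_decrement(45)      # commits locally, no coordination
assert not eu.try_decrement(25)  # 30 + 25 > 50: needs a sync round
```

Swapping the slack split changes only performance (how often sites must synchronize), never safety: the global counter can never go below zero.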

This talk describes research done at Cornell University.

Biography

Johannes Gehrke is a Distinguished Engineer at Microsoft working on Delve, the Office Graph, and Big Data and Data Science in Office 365. Until 2015, Johannes was the Tisch University Professor in the Department of Computer Science at Cornell University, where he graduated 24 PhD students. Johannes received an NSF Career Award, a Sloan Fellowship, a Humboldt Research Award, the 2011 IEEE Computer Society Technical Achievement Award, and the 2011 Blavatnik Award from the New York Academy of Sciences, and he is an ACM Fellow. He co-authored the undergraduate textbook Database Management Systems (McGraw-Hill, 2002; currently in its third edition), used at universities all over the world. Johannes was co-Chair of SIGKDD 2004, VLDB 2007, ICDE 2012, SOCC 2014, and ICDE 2015.

19 October 2015

Speaker: Vijay Kumar, Nemirovsky Family Dean of Penn Engineering, University of Pennsylvania
Title: Aerial Robot Swarms
Host School: UNC
Location: SN011
Host: Ming Lin (lin at cs.unc.edu)

Abstract

Autonomous micro aerial robots can operate in three-dimensional, indoor and outdoor environments, and have applications to search and rescue, first response and precision farming. Dr. Kumar will describe the challenges in developing small, agile robots and the algorithmic challenges in the areas of (a) control and planning, (b) state estimation and mapping, and (c) coordinating large teams of robots.

Biography

Vijay Kumar is the Nemirovsky Family Dean of Penn Engineering, with appointments in the Departments of Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering at the University of Pennsylvania.

Dr. Kumar received his Bachelor of Technology degree from the Indian Institute of Technology, Kanpur, and his Ph.D. from The Ohio State University in 1987. He has been on the faculty of the Department of Mechanical Engineering and Applied Mechanics, with a secondary appointment in the Department of Computer and Information Science, at the University of Pennsylvania since 1987.

Dr. Kumar served as the Deputy Dean for Research in the School of Engineering and Applied Science from 2000 to 2004. He directed the GRASP Laboratory, a multidisciplinary robotics and perception laboratory, from 1998 to 2004. He was the Chairman of the Department of Mechanical Engineering and Applied Mechanics from 2005 to 2008, and then served as the Deputy Dean for Education in the School of Engineering and Applied Science from 2008 to 2012.

Dr. Kumar is a Fellow of the American Society of Mechanical Engineers (2003), a Fellow of the Institute of Electrical and Electronics Engineers (2005), and a member of the National Academy of Engineering (2013).

Dr. Kumar’s research interests are in robotics, specifically multi-robot systems and micro aerial vehicles. He has served on the editorial boards of the IEEE Transactions on Robotics and Automation, the IEEE Transactions on Automation Science and Engineering, the ASME Journal of Mechanical Design, the ASME Journal of Mechanisms and Robotics, and the Springer Tracts in Advanced Robotics (STAR).

He is the recipient of the 1991 National Science Foundation Presidential Young Investigator award, the 1996 Lindback Award for Distinguished Teaching (University of Pennsylvania), the 1997 Freudenstein Award for significant accomplishments in mechanisms and robotics, the 2012 ASME Mechanisms and Robotics Award, the 2012 IEEE Robotics and Automation Society Distinguished Service Award, a 2012 World Technology Network Award, and a 2014 Engelberger Robotics Award. He has won best paper awards at DARS 2002, ICRA 2004, ICRA 2011, RSS 2011, and RSS 2013, and has advised doctoral students who have won Best Student Paper Awards at ICRA 2008, RSS 2009, and DARS 2010.

16 November 2015

Speaker: Rajit Manohar, Professor, Cornell University
Title: Large-scale Neuromorphic Systems
Host School: UNC
Location: SN011
Host: Montek Singh (montek at cs.unc.edu)

Abstract

VLSI scaling has led to the advent of massively parallel computing architectures as a way to circumvent the limitations of conventional approaches to improving performance and power. This can be seen in the architecture of general-purpose processors and in the advent of general-purpose graphics processing units. Standard architectures face algorithmic limitations to parallelism that limit scaling. Neuromorphic computing provides a promising alternative as a natively parallel computing paradigm for implementing real-time processing for vision and speech. We present our recent experiences with the design of TrueNorth, a 4096-core massively parallel neuromorphic computing substrate that was developed through a deep collaboration with IBM Research.
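
The basic computational unit such neuromorphic substrates implement is a spiking neuron. As a generic sketch (a simple leaky integrate-and-fire model with invented parameters, not TrueNorth's specific digital neuron equations), one discrete time step looks like:

```python
def lif_step(v, inputs, weights, leak=1, threshold=64):
    """One time step of a toy leaky integrate-and-fire neuron.

    v: membrane potential; inputs: 0/1 spikes from presynaptic
    neurons; weights: synaptic weights. Returns (new_v, spiked).
    """
    v += sum(w for s, w in zip(inputs, weights) if s)  # integrate spikes
    v -= leak                                          # constant leak
    if v >= threshold:
        return 0, True                                 # fire and reset
    return max(v, 0), False                            # clamp at rest

# Drive one neuron with two always-active synapses for 10 steps;
# the potential ramps by 34 per step, so it fires every other step.
v, spiked = 0, False
for _ in range(10):
    v, spiked = lif_step(v, [1, 1], [20, 15])
```

Because each neuron's update depends only on its own state and incoming spikes, millions of such units can run in parallel, which is what makes the paradigm natively parallel.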

Biography

Rajit Manohar is Professor of Electrical and Computer Engineering and a Stephen H. Weiss Presidential Fellow at Cornell. He received his B.S. (1994), M.S. (1995), and Ph.D. (1998) from Caltech. He has been on the Cornell faculty since 1998 and the Cornell Tech faculty since 2012, where his group conducts research on self-timed systems. He is the recipient of an NSF CAREER award, seven best paper awards, and seven teaching awards, and was named to MIT Technology Review’s top 35 innovators under 35 for contributions to low-power microprocessor design. His work includes the design and implementation of a number of self-timed VLSI chips, including the first high-performance asynchronous microprocessor, the first microprocessor for sensor networks, the first asynchronous dataflow FPGA, the first radiation-hardened SRAM-based FPGA, and the first predictable large-scale neuromorphic architecture. He has served as the Associate Dean for Research and Graduate Studies in Engineering, the Associate Dean for Academic Affairs at Cornell Tech, and the Associate Dean for Research at Cornell Tech. He founded Achronix Semiconductor to commercialize high-performance asynchronous FPGAs.

30 November 2015

Speaker: Jason Eisner, Professor, Johns Hopkins University
Title: Probabilistic Inference on Strings
Host School: UNC
Location: SN011
Host: Alex Berg (aberg at cs.unc.edu)

Abstract

Natural language processing must sometimes consider the internal structure of words, e.g., in order to understand or generate an unfamiliar word.  Unfamiliar words are systematically related to familiar ones due to linguistic processes such as morphology, phonology, abbreviation, copying error, and historical change.

We will show how to build joint probability models over many string-valued random variables. In general, our models assume that the strings are generated by some random process. By reconstructing the steps that may have given rise to our observations, we can predict unobserved strings, or predict the relationships among the observed strings. However, this reconstruction can be computationally hard (indeed undecidable). We outline approximate algorithms based on Markov chain Monte Carlo, expectation propagation, and dual decomposition. We give results on some NLP tasks.
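The flavor of Markov chain Monte Carlo over string-valued variables can be shown with a toy sampler: a proposal edits one character at a time, and moves are accepted by a Metropolis rule under an invented model where probability decays with edit distance from an observed string. This is purely illustrative (the model, alphabet, and decay rate are made up, and it is far simpler than the models in the talk):

```python
import math, random

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete
                           cur[j - 1] + 1,              # insert
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def log_prob(x, observed="cat"):
    # Toy model: probability decays sharply with edit distance.
    return -5.0 * edit_distance(x, observed)

def propose(x, alphabet="abct"):
    # Randomly substitute, insert, or delete one character of x.
    i = random.randrange(len(x) + 1)
    move = random.choice(["sub", "ins", "del"])
    if move == "ins":
        return x[:i] + random.choice(alphabet) + x[i:]
    if move == "del" and x:
        j = max(i - 1, 0)
        return x[:j] + x[j + 1:]
    if x:
        j = min(i, len(x) - 1)
        return x[:j] + random.choice(alphabet) + x[j + 1:]
    return x

def mh_sample(start="a", steps=2000):
    # Metropolis sampler (proposal asymmetry is ignored for brevity,
    # so this only approximates a correct Metropolis-Hastings kernel).
    x = start
    for _ in range(steps):
        y = propose(x)
        if math.log(random.random()) < log_prob(y) - log_prob(x):
            x = y
    return x

random.seed(0)
sample = mh_sample()   # the chain concentrates near "cat"
```

The hard part, as the abstract notes, is that the state space of strings is unbounded, which is what makes exact inference intractable and motivates these approximate algorithms.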

Biography

Jason Eisner is Professor of Computer Science at Johns Hopkins University, where he is also affiliated with the Center for Language and Speech Processing, the Machine Learning Group, the Cognitive Science Department, and the national Center of Excellence in Human Language Technology. His goal is to develop the probabilistic modeling, inference, and learning techniques needed for a unified model of all kinds of linguistic structure. His 90+ papers have presented various algorithms for parsing, machine translation, and weighted finite-state machines; formalizations, algorithms, theorems, and empirical results in computational phonology; and unsupervised or semi-supervised learning methods for syntax, morphology, and word-sense disambiguation. He is also the lead designer of Dyna, a new declarative programming language that provides an infrastructure for AI research. He has received two school-wide awards for excellence in teaching.

1 February 2016

Speaker: Alex Aiken, Alcatel-Lucent Professor, Stanford University
Title: Legion: Programming Heterogeneous, Distributed Parallel Machines
Host School: NCSU
Location: 3211 EB2
Host: Frank Mueller (fmuelle at ncsu.edu)

Abstract

Programmers tend to think of parallel programming as a problem of dividing up computation, but often the most difficult part is the placement and movement of data, especially in heterogeneous, distributed machines with deep memory hierarchies. Legion is a programming model and runtime system for describing hierarchical organizations of both data and computation at an abstract level. A separate mapping interface allows programmers to control how data and computation are placed onto the actual memories and processors of a specific machine. This talk will present the design of Legion, the novel issues that arise in both its design and our implementation, and experience with applications, including S3D, a turbulent combustion simulation.
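
The key separation the abstract describes, the algorithm written once versus a mapper that decides placement, can be sketched in miniature. This is not the Legion API (Legion is a C++ runtime; every name below is invented for illustration):

```python
# Toy illustration of separating *what* computes on *which* data
# from *where* it runs -- the separation Legion's mapping interface
# provides. Not the Legion API; names are invented.

def saxpy(region):
    # The algorithm: written once, independent of placement.
    region["z"] = [2.0 * x + y for x, y in zip(region["x"], region["y"])]

class RoundRobinMapper:
    """A 'mapper' decides placement. Swapping in a different mapper
    changes performance, never correctness, because the algorithm
    above is untouched."""
    def __init__(self, processors):
        self.processors = processors
    def place(self, task_index):
        return self.processors[task_index % len(self.processors)]

# Partition the data into subregions, one per task.
data = {"x": list(range(8)), "y": [1.0] * 8}
subregions = [{k: v[i:i + 4] for k, v in data.items()} for i in (0, 4)]

mapper = RoundRobinMapper(["cpu0", "gpu0"])
placements = []
for i, sub in enumerate(subregions):
    placements.append(mapper.place(i))  # mapping decision
    saxpy(sub)                          # computation, wherever placed
```

In Legion the runtime additionally tracks data dependences between subregions and moves data through the memory hierarchy automatically; the sketch shows only the program/mapper split.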

Biography

Alex Aiken is the Alcatel-Lucent Professor and current chair of the Computer Science department at Stanford. Alex received his Bachelor's degree in Computer Science and Music from Bowling Green State University in 1983 and his Ph.D. from Cornell University in 1988. Alex was a Research Staff Member at the IBM Almaden Research Center (1988-1993) and a Professor in the EECS department at UC Berkeley (1993-2003) before joining the Stanford faculty in 2003. His research interests are in areas related to programming languages. He is an ACM Fellow, a recipient of Phi Beta Kappa’s Teaching Award, and a former National Young Investigator.

15 February 2016

Speaker: Jon Kleinberg, Tisch University Professor, Cornell University
Title: Social Phenomena in Global Networks
Host School: Duke
Location: LSRC D106
Host: Pankaj Agarwal (pankaj at cs.duke.edu)

Abstract

With an increasing amount of social interaction taking place in the digital domain, and often in public on-line settings, we are accumulating enormous amounts of data about phenomena that were once essentially invisible to us: the collective behavior and social interactions of hundreds of millions of people, recorded at unprecedented levels of scale and resolution. Analyzing this data computationally offers new insights into the design of on-line applications, as well as a new perspective on fundamental questions in the social sciences. We will review some of the basic issues around these developments; these include the problem of designing information systems in the presence of complex social feedback effects, and the emergence of a growing research interface between computing and the social sciences, facilitated by the availability of large new datasets on human interaction.

Biography

Jon Kleinberg is the Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University. His research focuses on issues at the interface of networks and information, with an emphasis on the social and information networks that underpin the Web and other on-line media. He is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences; and he is the recipient of research fellowships from the MacArthur, Packard, Simons, and Sloan Foundations, as well as awards including the Nevanlinna Prize, the Harvey Prize, the Newell Award, the ACM SIGKDD Innovation Award, and the ACM-Infosys Foundation Award in the Computing Sciences.

22 February 2016

Speaker: Pieter Abbeel, Associate Professor, University of California, Berkeley
Title: Making Robots Learn
Host School: Duke
Location: LSRC D106
Host: George Konidaris (gdk at cs.duke.edu)

Abstract

Programming robots remains notoriously difficult. Equipping robots with the ability to learn would bypass the need for what often ends up being time-consuming, task-specific programming. In this talk I will describe the ideas behind two promising types of robot learning: First I will discuss apprenticeship learning, in which robots learn from human demonstrations, and which has enabled autonomous helicopter aerobatics, knot tying, basic suturing, and cloth manipulation. Then I will discuss deep reinforcement learning, in which robots learn through their own trial and error, and which has enabled learning locomotion as well as a range of assembly and manipulation tasks.
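
The trial-and-error idea behind reinforcement learning can be seen in its simplest tabular form: an agent in a toy corridor world learns by repeated episodes which action each state should take. This is a generic Q-learning sketch with invented parameters, not the deep RL methods from the talk (which replace the table with a neural network):

```python
import random

# Toy corridor world: states 0..4, goal at state 4 on the right.
# Actions: 0 = move left, 1 = move right. Reward 1 on reaching the goal.
N_STATES, ACTIONS = 5, (0, 1)
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # tabular value estimates
alpha, gamma, eps = 0.5, 0.9, 0.2          # step size, discount, exploration

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                        # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit, occasionally explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r = step(s, a)
        # Q-learning update: bootstrap from the best next-state value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy moves right in every non-goal state.
greedy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(N_STATES - 1)]
```

Apprenticeship learning, by contrast, would seed this process with human demonstrations instead of random exploration.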

Biography

Pieter Abbeel (Associate Professor, UC Berkeley EECS) works in machine learning and robotics; in particular, his research focuses on making robots learn from people (apprenticeship learning) and on making robots learn through their own trial and error (reinforcement learning). His robots have learned advanced helicopter aerobatics, knot tying, basic assembly, and organizing laundry. He has won various awards, including best paper awards at ICML and ICRA, the Sloan Fellowship, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP) award, the Office of Naval Research Young Investigator Program (ONR-YIP) award, the DARPA Young Faculty Award (DARPA-YFA), the National Science Foundation Faculty Early Career Development Program Award (NSF-CAREER), the MIT TR35, the IEEE Robotics and Automation Society (RAS) Early Career Award, and the Dick Volz Best U.S. Ph.D. Thesis in Robotics and Automation Award.

29 February 2016

Speaker: Michael Wellman, Professor, University of Michigan
Title: Artificial Intelligence and Real Economics
Host School: NCSU
Location: 3211 EB2
Host: Jon Doyle (Jon_Doyle at ncsu.edu)

Abstract

The field of artificial intelligence is enjoying renewed attention today, based on its pervasive influence on a wide range of technologies, and promise for transformative effects in the near-term future. Coincident with the economic impact of AI in this century has been an increasing embrace of economic reasoning in the design and analysis of AI systems. Developments in algorithmic game theory and mechanism design are shaping how autonomous software agents interact in networked economies, just as such agents are proliferating in key sectors like Internet advertising and financial trading. Large-scale computing infrastructure enabling much of this activity can also be harnessed in service of reasoning in a principled way about the resulting complex strategic environments. I illustrate this through a large-scale simulation approach to analyze the implications of algorithmic and high-frequency trading in financial markets.
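
The simulation-based approach can be illustrated at toy scale: estimate strategy payoffs by simulation, then search the resulting empirical game for equilibria. The two-strategy "market" below is entirely invented for illustration and is a drastically simplified stand-in for the large-scale analyses in the talk:

```python
import itertools

# Simulated mean payoffs for two trader strategies in a toy market:
# "S" = slow trader, "F" = fast (high-frequency) trader.
# Keys are (player 1 strategy, player 2 strategy).
payoffs = {
    ("S", "S"): (4, 4),   # both trade slowly: shared surplus
    ("S", "F"): (1, 5),   # the fast trader front-runs the slow one
    ("F", "S"): (5, 1),
    ("F", "F"): (2, 2),   # arms race: both pay for speed
}

def is_nash(profile):
    """True if no player gains by unilaterally deviating."""
    for player in (0, 1):
        for alt in ("S", "F"):
            dev = list(profile)
            dev[player] = alt
            if payoffs[tuple(dev)][player] > payoffs[profile][player]:
                return False
    return True

equilibria = [p for p in itertools.product("SF", repeat=2) if is_nash(p)]
# Only ("F", "F") survives: a prisoner's-dilemma-like speed arms race,
# even though ("S", "S") would pay both players more.
```

In empirical game-theoretic analysis the payoff entries come from many simulation runs rather than being written down, but the equilibrium search over the estimated game works the same way.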

Biography

Michael P. Wellman is the Lynn A. Conway Collegiate Professor of Computer Science & Engineering at the University of Michigan. He received a PhD from the Massachusetts Institute of Technology in 1988 for his work in qualitative probabilistic reasoning and decision-theoretic planning. For the past 20+ years, his research has focused on computational market mechanisms and game-theoretic reasoning methods, with applications in electronic commerce, finance, and other domains of strategic decision making. Wellman previously served as Chair of the ACM Special Interest Group on Electronic Commerce (SIGecom), and as Executive Editor of the Journal of Artificial Intelligence Research. He is a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery, and 2014 recipient of the SIGAI Autonomous Agents Research Award.

11 April 2016

Speaker: Onur Mutlu, Dr. William D. and Nancy W. Strecker Early Career (Associate) Professor, Carnegie Mellon University
Title: Rethinking Memory System Design for Data-Intensive Computing
Host School: NCSU
Location: 3211 EB2
Host: Xipeng Shen (xshen5 at ncsu.edu)

Abstract

The memory system is a fundamental performance and energy bottleneck in almost all computing systems. Recent system design, application, and technology trends that require more capacity, bandwidth, efficiency, and predictability out of the memory system make it an even more important system bottleneck. At the same time, DRAM and flash technologies are experiencing difficult technology scaling challenges that make the maintenance and enhancement of their capacity, energy efficiency, and reliability significantly more costly with conventional techniques. In this talk, we examine some promising research and design directions to overcome challenges posed by memory scaling. Specifically, we discuss three key solution directions: 1) enabling new memory architectures, functions, interfaces, and better integration of the memory and the rest of the system; 2) designing a memory system that intelligently employs multiple memory technologies and coordinates memory and storage management using non-volatile memory technologies; and 3) providing predictable performance and QoS to applications sharing the memory/storage system. If time permits, we might also briefly touch upon our ongoing related work in combating scaling challenges of NAND flash memory.

Biography

Onur Mutlu is the Strecker Early Career Professor at Carnegie Mellon University. His broader research interests are in computer architecture, systems, and bioinformatics. He is especially interested in interactions across domains, between applications, system software, compilers, and microarchitecture, with a major current focus on memory systems. He obtained his PhD and MS in ECE from the University of Texas at Austin and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. Prior to Carnegie Mellon, he worked at Microsoft Research, Intel Corporation, and Advanced Micro Devices. He received the IEEE Computer Society Young Computer Architect Award, the Intel Early Career Faculty Award, faculty partnership awards from various companies, and a healthy number of best paper or “Top Pick” paper recognitions at various computer systems and architecture venues. His computer architecture course lectures and materials are freely available on YouTube. For more information, please see his webpage.