Packard Fellowship Grant

January 8, 2015

Principal Investigator: Marc Pollefeys
Funding Agency: The David and Lucile Packard Foundation
Agency Number: 2005-29100

Abstract
The Packard Fellowship Program was designed to strengthen university-based science and engineering programs by supporting innovative researchers early in their careers. Each year, the David and Lucile Packard Foundation selects 16 fellows to receive $625,000 over five years to support their research. The program funds research in a broad range of disciplines, including astronomy, biology, chemistry, computer science, earth and ocean science, mathematics, physics, and all branches of engineering.

Deformable Knee Atlas with Probabilistic Weight Bearing Cartilage Regions

January 8, 2015

Principal Investigator: Marc Niethammer
Funding Agency: Duke University
Agency Number: 391-9926

Abstract
We will develop a method for atlas-driven segmentation and assessment of knee cartilage consisting of a probabilistic shape model, a multi-label segmentation strategy, and a method for atlas-building in the knee joint. This work will be an important step towards comprehensive automatic analysis of MR images of the knee. The methods developed will be applicable to image data acquired by the Osteoarthritis Initiative (OAI). The proposed research will significantly enhance the robustness of the computational analysis tools available for osteoarthritis research.

Collaborative Systems: Visualizing and Exploring High-dimensional Data

January 8, 2015

Principal Investigator: Leonard McMillan, Wei Wang
Funding Agency: National Science Foundation
Agency Number: IIS-0534580

Abstract
High-throughput experiments have revolutionized many areas of scientific endeavor. In contrast to traditional experimental methods, they generate vast amounts of data, which are only approachable through computer-aided data analysis. The analysis is complicated by the data's high dimensionality and large volume. We propose to develop a new tool for interactively exploring inter-data relationships within such data sets. Our tool aids scientists prior to applying traditional offline analysis techniques such as clustering, segmentation, and classification. We avoid the problems of representing high-dimensional data directly by considering only the relationships between data points under multiple scalar-valued dissimilarity measures. The user can interactively mix various measures and visualize how they influence the clustering of data points. This aids in selecting appropriate weighting factors between incompatible features and noisy measurements. Furthermore, it allows scientists to explore hypotheses and incorporate their own knowledge to drive traditional unsupervised data-mining algorithms in sensible directions.

A key component of our tool is its ability to interactively explore parameter spaces that combine various attributes of high-dimensional data points. We provide two alternate views of the high-dimensional data sets. Our dissimilarity-matrix view offers insights into the size, compactness, separation, and relative proximity of clusters. An alternate point-cloud view provides a 3-D projection of the high-dimensional source data that best preserves the distances between points. This view excels at communicating the flow and migration of points from one cluster to another as weighting parameters are tuned. It also allows the user to probe and interact with the data, including tasks such as hand-clustering the data and examining particular points.
The problem with both dissimilarity matrices and MDS is that they do not easily scale to large data sets. We have developed a prototype of an approximate dissimilarity-matrix visualization and a fast MDS algorithm that are targeted at interactive display rates and large data sets. Our methods are orders of magnitude smaller and faster than previously published methods on similarly sized data sets. Our goal is to provide visualizations of cluster formation and migration as the contributions of various data-set features are interactively modified. We have conducted pilot studies applying our approach to multi-experiment gene-expression data and SNP-phenotype correlation assays. In the future, we plan to add more visualization features to our system and to support alternative non-linear dimensionality-reduction methods.

Intellectual merit of the proposed research: The proposed visualization tools will assist scientists in many disciplines, including biologists studying gene function, medical doctors comprehending disease susceptibility, chemists developing candidate drugs, and high-energy physicists analyzing the data generated by particle accelerators. Furthermore, applying newly developed dimensionality-reduction methods to complex real-world problems will also help to evaluate and refine these new methods, making them more effective and efficient.

Broader impacts of the proposed research: The proposed research requires knowledge of computer science, biology, chemistry, and physics. Such interdisciplinary integration is crucial for the scientific development of future computer scientists. This research effort will be integrated into our graduate and undergraduate instruction. The PIs will offer classes in the related topics of visualization and bioinformatics, where the tools will aid students in comprehending abstract concepts and data relations.
We will also engage in outreach programs to attract underrepresented minorities to the field of computer science, and provide research opportunities for underrepresented undergraduates to stimulate interest in graduate study.
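The interactive "measure mixing" described above can be sketched in a few lines: several scalar-valued dissimilarity matrices over the same data points are blended with user-tunable weights into one combined matrix that then drives clustering or MDS. The function name, toy matrices, and weights below are illustrative, not the project's actual API.

```python
def mix_dissimilarities(matrices, weights):
    """Combine per-measure dissimilarity matrices with normalized weights."""
    if len(matrices) != len(weights):
        raise ValueError("one weight per measure")
    total = sum(weights)
    norm = [w / total for w in weights]
    n = len(matrices[0])
    mixed = [[0.0] * n for _ in range(n)]
    for m, w in zip(matrices, norm):
        for i in range(n):
            for j in range(n):
                mixed[i][j] += w * m[i][j]
    return mixed

# Two toy measures over three points; re-weighting shifts the blend.
expr = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]    # e.g. a gene-expression distance
pheno = [[0, 3, 1], [3, 0, 5], [1, 5, 0]]   # e.g. a phenotype distance
mixed = mix_dissimilarities([expr, pheno], [3, 1])
```

Sliding the weights while redrawing the dissimilarity-matrix or point-cloud view is what lets the user watch clusters form and migrate.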

Multiresolution Algorithms for Processing Giga-Models: Real-time visualization, reasoning and interaction

January 8, 2015

Principal Investigator: Ming Lin
Funding Agency: U.S. Army Research Office
Agency Number: W911NF-06-1-0355

Abstract
Virtual prototypes of complex systems are often used to reduce design time, lower production cost, perform "virtual rehearsal", and rapidly visualize intricate structures of varying scale: from nanometer-sized objects, including nanoscale robots, storage devices, and nanowheels, to large man-made computer-aided design (CAD) structures, including unmanned aerial vehicles, tanks and combat vehicles, underwater robots, and power plants composed of millions of parts. In this proposal, we refer to the virtual prototypes of these extremely large and massive systems as "giga-models". To help fully realize the potential of virtual environments, the proposed research targets the solution of several important and well-defined problems, including proximity queries, physical simulation, and interactive display.

The underlying theme of the proposed research is the design of novel algorithms and systems based on a "multiresolution" framework, i.e., describing geometry, spatial arrangements, numerics, and physical simulation across different scales. With the increasing complexity of large systems, this approach could potentially offer a robust and efficient solution that scales up to large problems and adequately models the mutual interaction among multiple entities in complex mechanical, physical, or biological systems. It will allow designers and engineers to rapidly validate an existing design or explore new design options. It can also enable commanders and soldiers on the battlefield to quickly visualize formations and interact with various entities in the "virtual battleground".

SCIENTIFIC MERIT: This research is expected to lay the scientific foundation for human-centric, simulation-based virtual environments. We will address key issues in the realization of visualization, modeling, and simulation techniques for handling giga-models.
These include new level-of-detail representations and novel multiresolution algorithms for interactive display, proximity query, and physics-based simulation and manipulation of massive CAD models. The new algorithms and public-domain libraries could also offer fundamental advances for other research areas, including robotics and automation, computer graphics and virtual environments, and rapid prototyping of nano-structures. The PIs have led research on proximity queries, interactive display, and physically-based modeling, and have released several public-domain libraries that are widely used and have been incorporated into commercial products. However, the previous work only addresses aspects of the problems that are restricted to modestly complex environments and does not scale well to massive models. The proposed project focuses on developing scalable solutions and represents a very significant and multi-dimensional extension with new and substantial challenges, building on the considerable foundation of the PIs' prior research.

DoD/ARMY RELEVANCE: The proposed research could lead to wide application of multiresolution algorithms for design, testing, training, and mission planning. It has the potential of making a significant impact on design prototyping, system testing, personnel training, and virtual mission rehearsal. The algorithms and the resulting software can enable rapid prototyping of complex mechanical systems and reduce the high costs and time loss due to poor design decisions. Our rendering and interaction algorithms are useful for interactive display of large CAD models and battlefield visualization, whether for training or strategic planning. We will also develop multiresolution representations to accelerate Computer Generated Forces and integrate them into the OneSAF Objective System. We will collaborate closely with researchers and developers at ARL and RDECOM.
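The level-of-detail representations mentioned above rest on a simple principle that can be sketched concretely: per object, pick the coarsest level whose projected geometric error stays under a screen-space tolerance. This is a minimal illustration under assumed names, an invented error model, and illustrative constants, not the proposed algorithms.

```python
def select_lod(object_errors, distance, fov_scale=1000.0, tolerance=1.0):
    """object_errors: geometric error (world units) per LOD, finest first,
    growing with coarseness. Returns the index of the coarsest LOD whose
    screen-space error, approximated as fov_scale * error / distance,
    is still within tolerance."""
    chosen = 0
    for lod, err in enumerate(object_errors):
        if fov_scale * err / distance <= tolerance:
            chosen = lod   # this coarser level is still acceptable
        else:
            break          # errors only grow from here on
    return chosen

# Distant objects tolerate much coarser meshes than nearby ones.
errors = [0.001, 0.01, 0.1, 1.0]
near = select_lod(errors, distance=50.0)
far = select_lod(errors, distance=5000.0)
```

Applied across millions of parts, this kind of view-dependent selection is what keeps a giga-model within an interactive rendering budget.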

DURIP: High-Performance Many-Core Clusters for Modeling and Simulation

January 8, 2015

Principal Investigator: Ming Lin, Dinesh Manocha
Funding Agency: U.S. Army Research Office
Agency Number: W911NF-08-1-0480

Abstract
We are requesting equipment to support the development of desktop and portable high-performance clusters of multi-core and many-core processors for modeling and simulation. They will be used for interactive display and interaction with massive datasets, computer-generated forces, real-time ray tracing for RF propagation, and physically-based simulation. In particular, we propose to acquire:

• Ten high-end, multi-core workstations with high-end graphics processors and gigabytes of memory. This would provide us a high-performance platform of 80 cores altogether.
• Ten mobile workstations with multi-core CPUs and high-end portable graphics cards. This would provide a portable platform with 20 cores.
• High-resolution displays.
• High-speed interconnect technologies.

We would use these workstations to develop two HPC clusters: desktop and mobile. These clusters would be used for our Army- and DoD-funded research on modeling, simulation, and handling of complex datasets. We would also develop scalable algorithms for line-of-sight, ray tracing, route planning, and collision avoidance on these clusters and evaluate their performance within OneSAF simulation systems as well as RF propagation. The major research projects supported by this equipment are:

• Computer-generated force simulation systems
• Physically-based simulation
• Real-time ray tracing for visual, aural, and RF propagation
• High-resolution displays
• Real-time walkthroughs of CAD and urban environments

We are also closely collaborating with many DoD research labs (including ARL, PEO STRI, and RDECOM) and transferring some of the algorithmic and software technology developed as part of these research projects. The requested instrumentation will provide a major upgrade to our current facilities and is also critical for our collaboration and technology transfer to DoD labs and organizations. The equipment will be actively used by more than 15 faculty members from Computer Science, Mathematics, Physics, etc., and 75 graduate students. Some of this equipment will also be made available for classroom instruction and shared research laboratories.

Physically-Inspired Modeling for Haptic Rendering

January 8, 2015

Principal Investigator: Ming Lin
Funding Agency: National Science Foundation
Agency Number: CCF-0429583

Abstract
The sense of touch is one of the most important sensory channels and is used for object identification, data manipulation and concept exploration. Therefore, force feedback via haptic (touch-enabled) devices offers many possibilities for enhanced human-computer interaction. Extending the frontier of visual computing, this project proposes to develop physically-inspired modeling and simulation techniques for high-fidelity haptic display that will augment visual display.

The proposed research will be driven by two target applications in science and education: nanomanipulation and haptic painting. Both applications will also be used to enhance science and art education at middle and high schools.

This research is expected to lay the scientific foundation for an emerging paradigm of physically-based haptic interaction with virtual environments. It includes new algorithmic insights, efficient computational methodology, and system integration for two challenging applications. The underlying representations, algorithms and software systems for fast contact computation, interactive modeling of flexible objects, multi-level optimization, use of programmable graphics hardware, simulation acceleration techniques, and VR device augmentation will also offer fundamental advances for virtual environments, physically-based modeling, and scientific visualization.

By extending the frontier of high-fidelity haptic rendering, the proposed research can develop a significant augmentation to existing graphical display and scientific visualization. Furthermore, each proposed application has the potential of making a considerable impact on its own.
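The fast contact computation mentioned above is typically turned into force feedback by a penalty method: when the haptic probe penetrates a surface, push back proportionally to the penetration depth. This 1-D sketch is illustrative only; the stiffness value and function name are assumptions, and the project's actual rendering pipeline is far richer.

```python
def penalty_force(probe_pos, surface_height=0.0, stiffness=500.0):
    """1-D penalty force: the surface is the plane y = surface_height.
    Positive return value pushes the probe back out of the surface."""
    penetration = surface_height - probe_pos
    if penetration <= 0:
        return 0.0                  # no contact, no force
    return stiffness * penetration  # Hooke-like spring response

above = penalty_force(0.5)     # probe above the surface: zero force
inside = penalty_force(-0.01)  # probe 1 cm inside: restoring force
```

In a real haptic loop this evaluation must run at roughly 1 kHz, which is why the contact queries feeding it need to be so fast.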

Collaborative Research: CRI: CRD Synthetic Traffic Generation Tools and Resources: A Community Resource for Experimental Networking Research

January 8, 2015

Principal Investigator: Kevin Jeffay, Don Smith
Funding Agency: National Science Foundation
Agency Number: CNS-0709081

Abstract
Networking research has long relied on simulation as the primary vehicle for demonstrating the effectiveness of proposed protocols and mechanisms. Typically, one simulates network hardware and software in software using, for example, the widely used ns-2 simulator [3]. Experimentation proceeds by simulating the use of the network by a given population of users running applications such as ftp or web browsers. Synthetic workload generators are used to inject data into the network according to a model of how the applications or users behave. In order to perform realistic network simulations, one needs a traffic generator that is capable of generating realistic synthetic traffic in a closed-loop fashion that "looks like" traffic found on an actual network. Unfortunately, the networking community suffers from a lack of validated tools and models suitable for synthetic traffic generation. As a result, all too often, networking technology is evaluated using ad hoc workloads with an unknown relationship to traffic seen on real links.

Intellectual Merit: This project is a collaborative effort to develop a synthetic traffic generation resource for the experimental networking research community. The resource consists of (1) synthetic traffic generators for the ns-2, ns-3, and GTNetS software simulators, and for Linux- and BSD-based testbeds; (2) a repository of datasets to be used by the traffic generators to generate traffic that is statistically equivalent to traffic found on a variety of network links, including campus networks, wide-area backbone networks, corporate intranets, wireless networks, etc.; and (3) a set of traffic analysis tools to enable researchers to generate empirical models of traffic on network links of interest and to use these models to drive the synthetic traffic generation process.
Broader Impact: The goal of the resource is to enable researchers to perform experiments using traffic that is either statistically equivalent to that found on actual network links or deviates from such traffic in controlled and meaningful ways. In this manner we hope to improve the state of networking research by enabling more realistic and more controlled experimentation. The project combines the previous work of the PIs on the Swing [X], Harpoon [Y], and tmix [Z] traffic generation systems. We will develop a common architectural platform for synthetic traffic generation that integrates these existing systems into a common framework and implement instances of the architecture for the aforementioned simulators and operating systems. To ensure we are meeting the needs of networking researchers, we will establish an advisory board consisting of the implementers and supporters of the major software simulators and testbed platforms. In addition, we propose to host a workshop at SIGCOMM in 2008 on synthetic traffic generation principles.
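The "empirical models drive the generator" loop described above can be sketched simply: measured values from a trace (here, connection sizes) are resampled through the empirical inverse CDF, so the synthetic stream is statistically equivalent to the trace by construction. The trace values and function names are illustrative, not the APIs of Swing, Harpoon, or tmix.

```python
import random

def empirical_sampler(samples, rng):
    """Return a closure that draws values from the empirical CDF of samples."""
    sorted_samples = sorted(samples)
    def draw():
        # Pick a uniform quantile, map it through the empirical inverse CDF.
        q = rng.random()
        idx = min(int(q * len(sorted_samples)), len(sorted_samples) - 1)
        return sorted_samples[idx]
    return draw

measured_sizes = [512, 1460, 1460, 8760, 64000]  # bytes, a toy "trace"
rng = random.Random(42)                          # seeded for reproducibility
draw = empirical_sampler(measured_sizes, rng)
synthetic = [draw() for _ in range(1000)]        # drives the workload generator
```

Controlled deviation from the trace, also a goal of the project, then amounts to perturbing the empirical distribution before sampling.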

SBIR Deployable Intelligent Projection Systems for Training (Phase 1)

January 8, 2015

Principal Investigator: Henry Fuchs, Greg Welch
Funding Agency: Renaissance Science Corporation
Agency Number: P1107

Abstract
Renaissance Sciences Corporation, the University of North Carolina at Chapel Hill, and Night Readiness, LLC propose to develop new “intelligent” projector units (IPUs) that cooperate to create a single seamless wide-area (panoramic) image as part of a deployable visual training system for multiple viewers. The IPUs outwardly look like conventional digital projectors, but when casually arranged together in a set that projects in a common direction, will automatically learn their respective geometric and photometric relationships, and then continually and automatically estimate and correct geometric and photometric errors to maintain a single, seamless, high-fidelity image for multiple distinct viewers. This concept is depicted in Figure 1 (sketch by Andrei State). Currently available projector technologies do not offer user-friendly, multi-tiled displays that self-calibrate and compensate for errors resulting from using multiple projectors. Instead, detailed and expensive manual labor is needed to achieve such display arrangements, which is not only inefficient, but also cost-prohibitive when needed in deployable and changing settings.
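A core ingredient of the geometric self-correction such IPUs perform is warping each projector's output through a learned 3x3 homography relating it to the shared display surface. This is a minimal sketch of applying such a mapping; the matrix below is a toy translation, not a calibrated value, and the real system estimates and refreshes these mappings continually.

```python
def apply_homography(H, x, y):
    """Map point (x, y) through a 3x3 homography H (row-major nested lists),
    dividing by the homogeneous coordinate w."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w

# A pure translation by (10, 5) expressed as a homography.
H = [[1, 0, 10], [0, 1, 5], [0, 0, 1]]
corrected = apply_homography(H, 100, 200)
```

General homographies (non-trivial bottom row) additionally capture the keystone distortion of a casually aimed projector, which is what makes the casual arrangement workable.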

3D Imaging and Mapping Technologies for Autonomous Robotic Exploration

January 8, 2015

Principal Investigator: Jan-Michael Frahm
Funding Agency: Texas A&M University
Agency Number: S100013

Abstract
In this project we developed a path-planning method for robots to automatically reconstruct an unknown environment. Moreover, we developed methods that allow a swarm of robots to use previously known 3D models to perform vision-based localization for navigation. All developed methods aim at real-time performance to enable live operation.

Visual Navigation for Humanoid Robot

January 8, 2015

Principal Investigator: Jan-Michael Frahm
Funding Agency: Honda Research Institute USA, Inc
Agency Number: N/A

Abstract
The goal of this project is to use computer vision techniques to develop a state-of-the-art visual navigation system for humanoid robots. In a first phase, we propose to perform feature-based self-localization and sparse 3D map building using ASIMO's stereo camera. In a second phase, we propose to do incremental dense 3D reconstruction of ASIMO's vicinity, using dense stereo matching algorithms to obtain a more complete representation of the environment suitable for interaction as well as for dealing with dynamic obstacles.
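Feature-based self-localization of the kind proposed in the first phase starts from matching feature descriptors between frames. A standard building block, sketched here with toy 2-D descriptors, is nearest-neighbour matching with Lowe's ratio test to reject ambiguous matches; a real system would use descriptors such as SIFT or ORB extracted from the stereo pair, and the names below are assumptions.

```python
def match_features(desc_a, desc_b, ratio=0.8):
    """Return (i, j) index pairs where desc_a[i]'s best match desc_b[j]
    is clearly better than its second-best candidate (the ratio test)."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    matches = []
    for i, d in enumerate(desc_a):
        scored = sorted((dist(d, e), j) for j, e in enumerate(desc_b))
        if len(scored) >= 2 and scored[0][0] < ratio * scored[1][0]:
            matches.append((i, scored[0][1]))
    return matches

frame1 = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
frame2 = [[0.1, 0.0], [1.0, 1.1], [0.5, 0.5]]  # no clear match for the last
matches = match_features(frame1, frame2)
```

The surviving matches, triangulated against the stereo baseline, are what populate the sparse 3D map and anchor the robot's pose estimate.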