The following is a complete list of doctoral graduates of the Department of Computer Science, with their dissertation titles. Graduates of other departments or schools whose primary advisor was a member of the Department of Computer Science are also listed.

Advisors

Advisors' names are given in parentheses. Advisor affiliations are given if the advisor was not on the UNC Department of Computer Science faculty.

Abstracts

Clicking on the author’s name will take you to a separate page containing dissertation abstracts, where available.

Availability

UNC Libraries: Except where noted, all of the dissertations listed here are available from the libraries at UNC-Chapel Hill. Those from the most recent year will not immediately be available, however.

  • ProQuest database
  • Microfiche copies: Microforms Section of Davis Library (basement)
  • Bound copies: Kenan Science Library Annex
  • Non-circulating copies: North Carolina Collection, Wilson Library

Ph.D. Graduates

A

Abram, Gregory D. (1986)

“Parallel Image Generation with Anti-Aliasing and Texturing”
Under the direction of Henry Fuchs
Department of Computer Science Technical Report TR87-005
Electronic copy available

This dissertation explores the use of an analytic visibility algorithm for the high-speed generation of anti-aliased images with textures. When visibility is known analytically before any sampling takes place, low-pass filtering for anti-aliasing can be done to arbitrary accuracy at a cost proportional to the output resolution. Furthermore, since the filtering and sampling processes can be expressed as a set of integrations over the image plane, the filtering process can be decomposed into a set of sums of integrations over each visible surface in the image plane, allowing the rendering of each visible surface to be done in parallel using an image buffer to accumulate the results. Thus, analytic visibility can serve as the basis for high-speed, high-quality image synthesis. In this dissertation, algorithms for computing analytic visibility and for producing filtered renderings from the resulting visible surfaces are presented. In order to provide real-time performance, these algorithms have been designed for parallel execution on simple concurrent structures. Architectures based on these algorithms are presented and simulated to produce expected execution times on a set of test images.
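
To make the decomposition concrete (the notation below is ours, not the dissertation's): if h is the low-pass filter kernel, c the shading over the image plane, and A_k the region of the image plane covered by visible surface k as reported by the analytic visibility algorithm, then each filtered pixel value splits into independent per-surface terms that can be accumulated in parallel into an image buffer:

    I(\mathbf{x}) \;=\; \int_{\Omega} h(\mathbf{x}-\mathbf{u})\, c(\mathbf{u})\, d\mathbf{u}
                 \;=\; \sum_{k} \int_{A_k} h(\mathbf{x}-\mathbf{u})\, c_k(\mathbf{u})\, d\mathbf{u}.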

Ackermann Jr., Arthur F. (1972)

“Toward A Programming Language for Writing and Checking Mathematical Discourses”
Under the direction of Donald F. Stanat

In section 1.1, I begin by hypothesizing some desirable features for an interactive computer system for mathematicians. The requirements for one such feature, proof-checking, are discussed in section 1.2. It is argued that the traditional systems of mathematical logic must be modified before they can provide a suitable basis for writing and checking mathematical discourses [Newsom] in a “natural” language. In chapter 2 an initial candidate for a suitable logical language is discussed and its syntax and computer processing requirements are specified. In appendix 3 a complete discourse on a finite geometry is developed in this language to illustrate its strengths and weaknesses. In addition to providing for higher-order variables with restricted domains, the language utilizes a generalization of the natural deduction notion of tautological consequence. This generalization permits the proof writer to make any inference which can be validated by considering the meaning of the terms used. Such a rule is called a “semantic inference” rule. Operationally, in a computer proof-checker an automatic theorem-prover would be used to validate a semantic inference by attempting to prove a corresponding theorem from stated axioms and implied definitions. A first-order resolution theorem-proving program, PROV, was obtained from John K. Dixon at the NIH Heuristics Laboratory and attempts were made to check a few of the simpler inferences made in the proofs in the finite geometry discourse. These results are reported in chapter 4, and a complete, annotated copy of PROV is given in appendix 4. The text of this dissertation was prepared using the text-editing program FORMAT [Ferns] in combination with SNOBOL4 [Griswold] programs for creating special graphics by overprinting [Ackermann1].

Ackerman, Jeremy D. (2002)

“Application of Augmented Reality to Laparoscopic Surgery”
Under the direction of Henry Fuchs
Electronic copy available

The usefulness and feasibility of an augmented reality visualization system for laparoscopic surgery is examined. This technology could enable physicians to see data, such as medical images, as well as the patient’s 3-D anatomy, in the context of the patient rather than viewing a 2-D video monitor. Three new results will be presented:

  1. A study whose chief findings were that augmented reality improves performance for medical students and that exposure to augmented reality may improve later performance under laparoscopic visualization for both medical students and practicing surgeons.
  2. An image processing method that can acquire three-dimensional structure inside the highly specular environment of the abdomen will be shown.
  3. A design for a high-speed depth-extracting laparoscope that can be miniaturized and be made operating room compatible will be shown.

Laparoscopic surgery is a minimally invasive abdominal surgery that is performed through small incisions. Surgeons view the procedure on a video monitor that shows a camera’s view from inside the abdomen through a small telescope inserted through an incision. Laparoscopy is associated with a loss of natural hand-eye coordination because the work is viewed indirectly on a video screen. Augmented reality may improve performance by restoring natural hand-eye coordination during this type of procedure. A study of 40 medical students and 8 surgeons was conducted to compare performance of a surgically relevant task under laparoscopic and simulated augmented reality visualization. The study uses simulated augmented reality, a technique in which an idealized augmented reality system is created using physical mock-ups, to explore the utility of future augmented reality systems. A proof-of-concept real-time range-acquiring laparoscope has been constructed. The system uses a high-speed implementation of structured-light depth extraction, with specialized hardware and algorithms. Results from preliminary studies on biological tissues show the effectiveness of this approach.

Ahn, Ilsoo (1986)

“Performance Modeling and Access Methods for Temporal Database Management Systems”
Under the direction of Richard T. Snodgrass
Department of Computer Science Technical Report TR86-018
Electronic copy available

Conventional databases storing only the latest snapshot lack the capability to record and process time-varying aspects of the real world. The need for temporal support has been recognized for over ten years, and recent progress in secondary storage technology is making such support practically feasible. There are three distinct kinds of time in databases: transaction time, valid time, and user-defined time. Depending on the capability to support either or both of transaction time and valid time, databases are classified into four types: snapshot, rollback, historical, and temporal. Each of the four types has different semantics and different implementation issues. Database systems with temporal support maintain history data on line together with current data, which causes problems in terms of both space and performance. This research investigates the temporally partitioned store to provide fast response for various temporal queries without penalizing conventional non-temporal queries. The current store holds current data and possibly some history data, while the history store contains the rest. The two stores can utilize different storage formats, and even different storage media, depending on the individual data characteristics. Various issues concerning the temporally partitioned store were studied, and several formats for the history store were investigated. To analyze the performance of TQuel queries on various access methods, four models forming a hierarchy were developed: one each for algebraic expressions, database/relations, access paths, and storage devices. The model of algebraic expressions maps the algebraic expression to the file primitive expression, based on information represented by the model of database/relations. The model of access paths maps the file primitive expression to the access path expression, which is converted to the access path cost by the model of storage devices. As a test-bed to evaluate the access methods and the models, a prototype of a temporal DBMS was built by modifying a snapshot DBMS. The prototype was used to identify problems with conventional access methods and also to provide performance data to check the analysis results from the models. Reverse chaining, among the temporally partitioned storage structures, was incorporated in the prototype to enhance its performance.
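
As a rough illustration of the reverse-chaining idea only (a hypothetical Python sketch; the class and field names are ours, and the prototype's TQuel processing and history-store formats are not shown), each current tuple version points back to its predecessor in the history store, so a query as of an earlier time simply walks the chain:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Version:
        """One tuple version; superseded versions live in the history store."""
        value: object
        valid_from: int                    # start of this version's valid time
        valid_to: Optional[int]            # None means "still current"
        prev: Optional["Version"] = None   # reverse chain into the history store

    class TemporallyPartitionedStore:
        """Current store keyed by tuple id; history reached via prev links."""
        def __init__(self):
            self.current = {}              # key -> most recent Version

        def update(self, key, value, now):
            old = self.current.get(key)
            if old is not None:
                old.valid_to = now         # close out the superseded version
            # the new current version points back at its predecessor
            self.current[key] = Version(value, now, None, prev=old)

        def as_of(self, key, t):
            """Walk the reverse chain until a version valid at time t is found."""
            v = self.current.get(key)
            while v is not None:
                if v.valid_from <= t and (v.valid_to is None or t < v.valid_to):
                    return v.value
                v = v.prev
            return None

    store = TemporallyPartitionedStore()
    store.update("emp42", {"salary": 50000}, now=1)
    store.update("emp42", {"salary": 55000}, now=5)
    print(store.as_of("emp42", 3))    # {'salary': 50000}
    print(store.as_of("emp42", 7))    # {'salary': 55000}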

Ahuja, Vijay (1976)

“Exposure of Routed Networks to Deadlock”
Under the direction of Victor L. Wallace

The deadlock problem is addressed for a class of systems called ‘routed networks’. A routed network is a store and forward communication network where all message transmissions follow predefined routes through the network. An approach is developed which permits the subgraphs of the network graph to be checked for presence of a deadlock. The approach rests on the fact that a deadlock in a subgraph exists if and only if a complete weighted matching exists in an appropriately defined bipartite graph. An algorithm for determining whether a deadlock can occur is presented and it is shown to be computationally attractive for reasonably sized networks. The algorithm is applied to resolve this question for some examples of routed networks adapted from the ARPA network.
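
The core test the abstract describes—a deadlock exists in a subgraph exactly when a complete matching exists in a suitably constructed bipartite graph—can be sketched with the standard augmenting-path (Kuhn's) algorithm. The adjacency structure below is a hypothetical placeholder; constructing the actual bipartite graph from a routed-network subgraph, and handling the edge weights, is the dissertation's contribution and is not reproduced here.

    def has_complete_matching(adj, n_left, n_right):
        """True if every left vertex can be matched (Kuhn's augmenting paths).

        adj[u] lists the right-side vertices compatible with left vertex u.
        """
        match_right = [-1] * n_right          # right vertex -> matched left vertex

        def try_augment(u, seen):
            for v in adj[u]:
                if v in seen:
                    continue
                seen.add(v)
                # v is free, or its current partner can be re-matched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
            return False

        return all(try_augment(u, set()) for u in range(n_left))

    # hypothetical 3-by-3 instance
    adj = {0: [0, 1], 1: [0], 2: [2]}
    print(has_complete_matching(adj, 3, 3))   # True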

Aikat, Jay (2010)

“An Investigation of the Effects of Modeling Application Workloads and Path Characteristics on Network Performance”
Under the direction of Kevin Jeffay
Electronic copy available

Network testbeds and simulators remain the dominant platforms for evaluating networking technologies today. Central to the problem of network emulation or simulation is the problem of modeling and generating realistic, synthetic Internet traffic, as the results of such experiments are valid to the extent that the traffic generated to drive these experiments accurately represents the traffic carried in real production networks. Modeling and generating realistic Internet traffic remains a complex and not well understood problem in empirical networking research. When modeling production network traffic, researchers lack a clear understanding about which characteristics of the traffic must be modeled, and how these traffic characteristics affect the results of their experiments. In this dissertation, we developed and analyzed a spectrum of empirically-derived traffic models with varying degrees of realism. For TCP traffic, we examined several choices for modeling the internal structure of TCP connections (the pattern of request/response exchanges), and the round trip times of connections. Using measurements from two different production networks, we constructed nine different traffic models, each embodying different choices in the modeling space, and conducted extensive experiments to evaluate these choices on a 10Gbps laboratory testbed. As a result of this study, we demonstrate that the old adage of “garbage-in-garbage-out” applies to empirical networking research. We conclude that the structure of traffic driving an experiment significantly affects the results of the experiment. We demonstrate this by showing the effects on four key network performance metrics: connection durations, response times, router queue lengths, and number of active connections in the network.

Airey, John M. (1990)

“Increasing Update Rates in the Building Walkthrough System with Automatic Model-Space Subdivision and Potentially Visible Set Calculations”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR90-027
Electronic copy available

Pre-processing some building models can radically reduce the number of polygons processed during interactive building walkthroughs. New model-space subdivision and potentially visible set (PVS) calculation techniques, used in combination, reduce the number of polygons processed in a real building model by an average factor of 30, and a worst case factor of at least 3.25. A method of recursive model-space subdivision using binary space partitioning is presented. Heuristics are developed to guide the choice of splitting planes. The spatial subdivisions resulting from binary space partitioning are called cells. Cells correspond roughly to rooms. An observer placed in a cell may see features exterior to the cell through transparent portions of the cell boundary called portals. Computing the polygonal definitions of the portals is cast as a set difference problem, and an algorithm for computing set operations (union, intersection, and difference) on co-planar sets of polygons is presented, with an emphasis on handling real-world data. Two different approaches to computing the PVS for a cell are explored. The first uses point sampling and has the advantage that it is easy to trade time for results, but has the disadvantage of under-estimating the PVS. The second approach is to analytically compute a conservative over-estimation of the PVS using techniques similar to analytical shadow computation. An implementation of the Radiosity lighting model is described along with the issues involved in combining it with the algorithms described in this dissertation.
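
A minimal, axis-aligned sketch of the subdivision step, with walls given as 2-D endpoint pairs (the dissertation chooses general splitting planes from the model's own polygons and uses richer heuristics; the simple balance-versus-cut score below is only illustrative):

    def classify(seg, axis, c):
        """-1: wall entirely on the low side of the plane, +1: high side, 0: straddles."""
        a, b = seg[0][axis], seg[1][axis]
        if max(a, b) <= c:
            return -1
        if min(a, b) >= c:
            return +1
        return 0

    def choose_plane(walls):
        """Score candidate planes at wall endpoints: prefer balance, penalize cuts."""
        best = None
        for axis in (0, 1):
            for c in {p[axis] for seg in walls for p in seg}:
                sides = [classify(s, axis, c) for s in walls]
                low, high, cut = sides.count(-1), sides.count(+1), sides.count(0)
                if low == 0 or high == 0:          # plane separates nothing
                    continue
                score = abs(low - high) + 3 * cut
                if best is None or score < best[0]:
                    best = (score, axis, c)
        return best

    def subdivide(walls, min_walls=4):
        """Recursively split the wall set into cells (straddling walls go to both sides)."""
        plane = choose_plane(walls) if len(walls) > min_walls else None
        if plane is None:
            return ("cell", walls)
        _, axis, c = plane
        low = [w for w in walls if classify(w, axis, c) <= 0]
        high = [w for w in walls if classify(w, axis, c) >= 0]
        return ("split", axis, c, subdivide(low, min_walls), subdivide(high, min_walls))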

Alexander, Geoffrey D. (1995)

“Proving First-Order Equality Theorems with Hyper-Linking”
Under the direction of David A. Plaisted
Department of Computer Science Technical Report TR96-019
Electronic copy available

Lee and Plaisted recently developed a new automated theorem proving strategy called hyper-linking. As part of his dissertation, Lee developed a round-by-round implementation of the hyper-linking strategy, which competes well with other automated theorem provers on a wide range of theorem proving problems. However, Lee’s round-by-round implementation of hyper-linking is not particularly well suited for the addition of special methods in support of equality. In this dissertation, we describe, as an alternative to the round-by-round hyper-linking implementation of Lee, a smallest instance first implementation of hyper-linking which addresses many of the inefficiencies of round-by-round hyper-linking encountered when adding special methods in support of equality. Smallest instance first hyper-linking is based on the formalization of generating smallest clauses first, a heuristic widely used in other automated theorem provers. We prove both the soundness and logical completeness of smallest instance first hyper-linking and show that it always generates smallest clauses first under a large class of orderings on clauses. We discuss which of the refinements used by Lee in round-by-round hyper-linking are also applicable to smallest instance first hyper-linking. We add equality support to smallest instance first hyper-linking using unfailing completion, narrowing, and Brand’s equality transformation in a method we call unfailing completion equality support. We show the soundness and logical completeness of smallest instance first hyper-linking with unfailing completion equality support. We also show that Brand’s equality transformation is unnecessary for the case where all positive equational literals occur in equational Horn clauses. In particular, this means that for pure equational problems, smallest instance first hyper-linking with unfailing completion equality support simply reduces to an application of unfailing completion. We conclude the dissertation by describing our preliminary implementation of smallest instance first hyper-linking. We present a comparison of our implementation with that of Lee’s on the 2,655 problems in the TPTP Problem Library of Sutcliffe and Suttner.

Aliaga, Daniel G. (1999)

“Automatically Reducing and Bounding Geometric Complexity by Using Images”
Under the direction of Anselmo A. Lastra
Electronic copy available

Large and complex 3D models are required for computer-aided design, architectural visualizations, flight simulation, and many types of virtual environments. Often, it is not possible to render all the geometry of these complex scenes at interactive rates, even with high-end computer graphics systems. This has led to extensive work in 3D model simplification methods. We have been investigating dynamically replacing portions of 3D models with images. This approach is similar to the old stage trick of draping cloth backgrounds in order to generate the illusion of presence in a scene too complex to actually construct on stage. An image (or backdrop), once created, can be rendered in time independent of the geometric complexity it represents. Consequently, significant frame rate increases can be achieved at the expense of storage space and visual quality. Properly handling these last two tradeoffs is crucial to a fast rendering system. In this dissertation, we present a preprocessing and run-time algorithm for creating a constant-frame-rate rendering system that replaces selected geometry with images enhanced with depth at each pixel. First, we describe the preprocessing algorithm to automatically bound the geometric complexity of arbitrary 3D models by using images. Second, we explore two general approaches to displaying images, surrounded by geometry, at run time. Third, we present a system tailored to architectural models.
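
The key property of images enhanced with depth at each pixel is that they can be re-projected into nearby viewpoints. A simplified numpy sketch of such a warp under a pinhole camera model (the intrinsics, poses, and naive per-pixel splat are our assumptions for illustration, not the dissertation's run-time algorithm):

    import numpy as np

    def warp_depth_image(colors, depths, K, pose_src, pose_dst):
        """Forward-warp a color+depth image from a source camera into a new view.

        colors: (H, W, 3), depths: (H, W) z-depth per pixel, K: (3, 3) pinhole
        intrinsics shared by both cameras, pose_*: (4, 4) camera-to-world poses.
        """
        H, W = depths.shape
        v, u = np.mgrid[0:H, 0:W]
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

        rays = np.linalg.inv(K) @ pix                   # unproject source pixels
        pts_cam = rays * depths.reshape(1, -1)
        pts_world = pose_src @ np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])

        pts_dst = np.linalg.inv(pose_dst) @ pts_world   # project into the new camera
        z = pts_dst[2]
        proj = K @ pts_dst[:3]
        u2 = np.round(proj[0] / z).astype(int)
        v2 = np.round(proj[1] / z).astype(int)

        out = np.zeros_like(colors)
        zbuf = np.full((H, W), np.inf)
        src = colors.reshape(-1, 3)
        ok = (z > 0) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
        for i in np.flatnonzero(ok):                    # naive splat with z-test
            if z[i] < zbuf[v2[i], u2[i]]:
                zbuf[v2[i], u2[i]] = z[i]
                out[v2[i], u2[i]] = src[i]
        return out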

Allen, B. Danette (2007)

“Hardware Design Optimization for Human Motion Tracking Systems”
Under the direction of Greg Welch
Electronic copy available

A key component of any interactive computer graphics application is the system for tracking user or input device motion. An accurate estimate of the position and/or orientation of the virtual world tracking targets is critical to effectively creating a convincing virtual experience. Tracking is one of the pillars upon which a virtual reality environment is built, and it imposes a fundamental limit on how real the “reality” of Virtual Reality can be. Whether working on a new or modified tracking system, designers typically begin the design process with requirements for the working volume, the expected user motion, and the infrastructure. Considering these requirements they develop a candidate design that includes one or more tracking mediums (optical, acoustic, etc.), associated source/sensor devices (hardware), and an algorithm (software) for combining the information from the devices. They then simulate the candidate system to estimate the performance for some specific motion paths. Thus the predictions of such traditional simulations typically include the combined effect of hardware and algorithm choices, but only for the chosen motion paths. Before tracker algorithm selection, and irrespective of the motion paths, it is the choice and configuration of the source/sensor devices that are critical to performance. The global limitations imposed by these hardware design choices set a limit on the quantity and quality of the available information (signal) for a given system configuration, and they do so in complex and sometimes unexpected ways. This complexity often makes it difficult for designers to predict or develop intuition about the expected performance impact of adding, removing, or moving source/sensor devices, changing the device parameters, etc. This research introduces a stochastic framework for evaluating and comparing the expected performance of sensing systems for interactive computer graphics. Incorporating models of the sensor devices and expected user motion dynamics, this framework enables complementary system- and measurement-level hardware information optimization, independent of algorithm and motion paths. The approach for system-level optimization is to estimate the asymptotic position and/or orientation uncertainty at many points throughout a desired working volume or surface, and to visualize the results graphically. This global performance estimation can provide both a quantitative assessment of the expected performance and intuition about how to improve the type and arrangement of sources and sensors, in the context of the desired working volume and expected scene dynamics. Using the same model components required for this system-level optimization, the optimal sensor sampling time can be determined with respect to the expected scene dynamics for measurement-level optimization. Also presented is an experimental evaluation to support the verification of the asymptotic analysis of tracking system hardware design, along with theoretical analysis aimed at supporting the validity of both the system- and measurement-level optimization methods. In addition, a case study in which both the system- and measurement-level optimization methods are applied to a working tracking system is presented. Finally, Artemis, a software tool for amplifying human intuition and experience in tracking hardware design, is introduced. Artemis implements the system-level optimization framework with a visualization component for insight into hardware design choices. Like fluid flow dynamics, Artemis examines and visualizes the “information flow” of the source and sensor devices in a tracking system, affording interaction with the modeled devices and the resulting performance uncertainty estimate.
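
One standard way to compute such an asymptotic uncertainty, under linear-Gaussian (Kalman filter) assumptions, is to iterate the filter's covariance (Riccati) recursion to a steady state; the toy motion and sensor models below are placeholders for the dissertation's device and user-motion models:

    import numpy as np

    def asymptotic_covariance(A, Q, H, R, iters=500):
        """Iterate the Kalman-filter covariance (Riccati) recursion to steady state.

        A, Q model the expected user-motion dynamics; H, R model the sensing
        devices.  The returned P is the asymptotic posterior covariance.
        """
        P = np.eye(A.shape[0])
        for _ in range(iters):
            P_pred = A @ P @ A.T + Q                     # predict
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)          # gain
            P = (np.eye(A.shape[0]) - K @ H) @ P_pred    # update
        return P

    # toy 1-D position/velocity target observed by a single position sensor
    dt = 0.01
    A = np.array([[1.0, dt], [0.0, 1.0]])
    Q = np.diag([1e-6, 1e-4])        # assumed process (motion) noise
    H = np.array([[1.0, 0.0]])       # the sensor measures position only
    R = np.array([[1e-4]])           # assumed measurement noise
    P_inf = asymptotic_covariance(A, Q, H, R)
    print(np.sqrt(P_inf[0, 0]))      # asymptotic position uncertainty (std. dev.)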

Amburn, Elton P. (1994)

“Development and Evaluation of an Air-to-Air Combat Debriefing System Using a Head-Mounted Display”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR94-068
Electronic copy available

The United States Air Force Red Flag exercise is the premier training experience for fighter pilots. An instrumented range to the north of Nellis AFB, Nevada provides information about aircraft in a Red Flag exercise. The Red Flag Measurement and Debriefing System transmits messages on the high-activity aircraft at a rate of 10 messages per second. These messages contain data such as position, orientation, pilot actions, and aerodynamic variables. This research created and evaluated a computer system for replay and evaluation of Red Flag air-to-air combat training data. This system can display the air combat data either on a workstation console (a 19-inch CRT) or in a head-mounted display (HMD). A human-performance experiment compared the effectiveness of replaying air combat data using these two display options. Experienced fighter pilots from the 422nd Test and Evaluation Squadron, 57th Test Group, Nellis Air Force Base, Nevada served as subjects for the experiment. Using a computer system to replay and evaluate this type of data is not new; however, using a head-mounted display and evaluating its effectiveness is new. Quantitative and qualitative data were collected in the human-performance experiment. The quantitative data were collected from probe questions asked during mission replay. The answers to probe questions were used to compute sensitivity as a measure of the effectiveness of the display types. There was no statistically significant difference between the sensitivity of a subject when using the HMD or the 19-inch CRT. However, there was a trend in the subjects’ performance that favored the HMD. The qualitative data were quite clear and showed a statistically significant preference for the 19-inch CRT over the HMD.

Ammirato, Philip (2019)

“Recognizing Fine-Grained Object Instances for Robotics Applications”
Under the direction of Alex Berg

State-of-the-art object recognition systems, and computer vision methods in general, are getting better and better at a variety of tasks. Many of these techniques focus on general object categories, like person or bottle, and use many hundreds or thousands of training examples per category. Some domains, such as robotics, require recognition of fine-grained object instances, such as a 16 oz. bottle of Coke, and only have access to limited training data.

In this work we look at how to build data and adapt successful computer vision techniques for fine-grained object recognition tasks. Our Active Vision Dataset is the first deep-learning scale dataset for instance detection and motion simulation using real images. The Target Driven Instance Detection method uses a Siamese convolutional neural network to focus on specific fine-grained object instances, and has the capacity to generalize to instances outside its training set. Our SymGan method uses the Generative Adversarial Network paradigm to train a 3D object orientation predictor that is robust to object symmetries.

Antani, Lakulish (2013)

“Interactive Sound Propagation Using Precomputation and Statistical Approximations”
Under the direction of Dinesh Manocha

Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques–Ambient Reverberance and Aural Proxies–to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at different ends of a space of techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.
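
In radiosity-style notation (ours, and only a rough analogy to the dissertation's formulation), if T is a discretized one-bounce acoustic transfer operator between surface patches and l_0 the direct sound energy reaching those patches, then the repeated interactions folded into a single precomputed matrix correspond to the Neumann series

    \mathbf{l} \;=\; \mathbf{l}_0 + \mathbf{T}\,\mathbf{l}_0 + \mathbf{T}^2\,\mathbf{l}_0 + \cdots \;=\; (\mathbf{I} - \mathbf{T})^{-1}\,\mathbf{l}_0 ,

so that at run time only a matrix-vector product with the precomputed operator needs to be evaluated as sources and listeners move.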

Arthur, Kevin (2000)

“Effects of Field of View on Performance with Head-Mounted Displays”
Under the direction of Frederick P. Brooks Jr.
Department of Computer Science Technical Report TR00-019
Electronic copy available

The field of view (FOV) in most head-mounted displays (HMDs) is no more than 60 degrees wide–far narrower than our normal FOV of about 200 degrees. This mismatch arises mostly from the difficulty and expense of building wide-FOV HMDs. Restricting a person’s FOV, however, has been shown in real environments to affect people’s behavior and degrade task performance. Previous work in virtual reality has likewise shown that restricting FOV to 50 degrees or less in an HMD can degrade performance. I conducted experiments with a custom, wide-FOV HMD and found that performance is degraded even at the relatively high FOV of 112 degrees, and further at 48 degrees. The experiments used a prototype tiled wide-FOV HMD to measure performance in VR at up to 176 degrees total horizontal FOV, and a custom large-area tracking system to establish new findings on performance while walking about a large virtual environment. FOV was significant in predicting performance of two tasks: searching for and locating a target by turning one’s head, and walking through a simple maze-like environment while avoiding walls. Wide FOV (112 degrees or greater) was especially important for the walking task; for it, performance at 112 degrees was 23% less than at 176 degrees. At 48 degrees, performance was 31% less than at 176 degrees. For the search task, performance at 112 degrees was 12% less than at 176 degrees. At 48 degrees, performance was 24% less than at 176 degrees. Additional analyses of the data show trends that suggest future investigation. Restricting FOV appears to decrease the user’s sense of presence, as measured by a questionnaire. VR sickness, also measured by questionnaire, increased with successive exposures to our system within an hour-long session, but stayed at relatively low levels. FOV appears to alter the occurrence of some sickness symptoms, but the data are inconclusive on whether FOV predicts total sickness. I performed additional measures and analyses, including tests of postural instability, distance memory, spatial memory, head-movement behavior, and comparisons with other HMDs and with real-world performance.

Austin Jr., Joseph H. (1973)

“Formal Models of Binding Processes in Control Programs”
Under the direction of Victor L. Wallace

Formal models of control programs are developed for two purposes: (1) to provide a rigorous definition of the concept of a control program, and (2) to formalize the semantics of control programs as a basis for control program programming languages. It is shown that operating system semantics can be modeled as a rewriting system over graphs. The states of the system, understood as configurations of resources, are modeled by graphs, and the operations or events of the system, understood as changes of the configuration, are modeled by rewriting rules on the graphs. Applications of the model to major areas of concern to control program design are then discussed:

  1. Hierarchic system design is expressed as (a) elaboration and abstraction of a model and (b) as a relation between two models whereby one is said to be the realization of the other.
  2. Synchronization, sharing, and multiplexing are modeled in terms of the macro-concept of a generalized queue (ordered set) and queuing discipline. Thus the concept of macro-definitions within the model is shown to greatly simplify the description of typical control programs.
  3. Protection and “interrupt” handling are shown to be easily described by the model.

Finally, the model is related to the Petri-net model of asynchronous systems. It is suggested by informal illustration that extension of the Petri-net model to include flow of resources can provide an intuitive overview of the interrelationships of the configuration transitions. It is concluded that the models provide: (a) a rationally-derived definition of the control program that distinguishes it from other types of programs, (b) a syntactic representation of control program semantics suitable as a basis for a table-driven interpreter, and (c) a methodology for control program design, based on hierarchic elaboration and the resource-flow view of interrelated concurrent processes.

Aylward, Stephen R. (1997)

“Continuous Mixture Modeling via Goodness-of-Fit Cores”
Under the direction of James M. Coggins
Electronic copy available

This dissertation introduces a new technique for the automated specification of continuous Gaussian mixture models (CGMMs). This approach is ideal for representing distributions that arise in a variety of problems including the driving problem of this dissertation, representing the distribution of the intensities associated with tissues in medical images. Gaussian mixture models are population density representations formed by the weighted linear combination of multiple, multivariate Gaussian component distributions. Finite Gaussian mixture models are formed when a finite number of discrete component distributions are used. CGMMs are formed when multiple continua of component distributions are used. The approach to CGMM specification developed here is based on an original structure, the Gaussian goodness-of-fit (GGoF) core. GGoF functions quantify how well a Gaussian having a particular mean and covariance represents the local distribution of samples. GGoF cores capture the continua of Gaussians which well represent a sampled population; they define a CGMM. In this dissertation, Monte Carlo simulations are used to evaluate the robustness of a variety of GGoF functions and binning methods for the specifications of GGoF cores. The log likelihood ratio GGoF function is shown to produce the most accurate and consistent GGoF cores. In generalized projective Gaussian distributions, similar feature vectors, when projected onto a subset of basis feature vectors, have a Gaussian distribution. In this dissertation, Monte Carlo simulations and ROC analysis are used to demonstrate that for such distributions CGMMs defined using GGoF cores produce accurate and consistent labelings compared to K-means and finite Gaussian mixture models. Additionally, CGMMs do not suffer from the problems associated with determining an appropriate number of components, initializing the component parameters, or iteratively converging to a solution. Generalized projective Gaussian distributions exist in a variety of real-world data. The applicability of CGMM via GGoF cores to real-world data is demonstrated through the accurate modeling of tissues in an inhomogeneous magnetic resonance image. The extension of CGMM via GGoF cores to high-dimensional data is demonstrated through the accurate modeling of a sampled trivariate anisotropic generalized projective Gaussian distribution.
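
For reference, the log likelihood ratio goodness-of-fit statistic that the dissertation found most robust has the standard binned form (notation ours): with O_i the observed count in bin i and E_i the count expected under a candidate Gaussian N(mu, Sigma),

    G \;=\; 2 \sum_{i=1}^{B} O_i \,\ln\!\frac{O_i}{E_i},

with smaller G indicating a better local fit; a GGoF core then traces the continuum of (mu, Sigma) values for which this fit remains good.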

Azuma, Ronald T. (1995)

“Predictive Tracking for Augmented Reality”
Under the direction of Gary Bishop
Department of Computer Science Technical Report TR95-007
Electronic copy available

In Augmented Reality systems, see-through Head-Mounted Displays (HMDs) superimpose virtual three-dimensional objects on the real world. This technology has the potential to enhance a user’s perception of and interaction with the real world. However, many Augmented Reality applications will not be accepted unless virtual objects are accurately registered with their real counterparts. Good registration is difficult, because of the high resolution of the human visual system and its sensitivity to small differences. Registration errors fall into two categories: static errors, which occur even when the user remains still, and dynamic errors caused by system delays when the user moves. Dynamic errors are usually the largest errors. This dissertation demonstrates that predicting future head locations is an effective approach for significantly reducing dynamic errors. This demonstration is performed in real time with an operational Augmented Reality system. First, evaluating the effect of prediction requires robust static registration. Therefore, this system uses a custom optoelectronic head-tracking system and three calibration procedures developed to measure the viewing parameters. Second, the system predicts future head positions and orientations with the aid of inertial sensors. Effective use of these sensors requires accurate estimation of the varying prediction intervals, optimization techniques for determining parameters, and a system built to support real-time processes. On average, prediction with inertial sensors is 2 to 3 times more accurate than prediction without inertial sensors and 5 to 10 times more accurate than not doing any prediction at all. Prediction is most effective at short prediction intervals, empirically determined to be about 80 milliseconds or less. An analysis of the predictor in the frequency domain shows the predictor magnifies the signal by roughly the square of the angular frequency and the prediction interval. For specified head-motion sequences and prediction intervals, this analytical framework can also estimate the maximum possible time-domain error and the maximum tolerable system delay given a specified maximum time-domain error. Future steps that may further improve registration are discussed.
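
One reading of the frequency-domain result quoted above, in our notation (with p the prediction interval and X(omega) the spectrum of the head-motion signal), is

    |E(\omega)| \;\propto\; (\omega p)^2 \, |X(\omega)|,

so, roughly, halving the prediction interval cuts the error at a given motion frequency by a factor of four, which is consistent with prediction being most effective at short prediction intervals.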

B

Babich, Wayne A. (1977)

“High Level Data Flow Analysis Using a Parse Tree Representation of the Program”
Under the direction of Mehdi Jazayeri

Data flow analysis is a technique for collecting information about the flow of data within computer programs. It is used by optimizing compilers in order to ensure the correctness of optimizations as well as by certain program debugging aids. This dissertation introduces an iterative technique, called the method of attributes, for performing global data flow analysis using a parse tree representation of the source program. The technique is based on a variant of attribute grammars, and permits most control flow structures found in modern algorithmic languages, including unconstrained GOTO’s. It is applied to analysis of live variables and to analysis of available expressions; in both cases, time required for analysis of GOTO-less programs is linear in program size (with a small constant of linearity), regardless of the nesting depth of program loops. This dissertation also describes the importance of producing data flow analysis information “on demand.” The demand problem is the problem of supplying a single piece of data flow information upon request from an optimizer or other processor. The method of attributes is applied to demand analysis of live variables. KEY PHRASES: Data flow analysis; Live variables; Dead variables; Available expressions; Attribute grammars; Demand analysis; Method of attributes; Program optimization; Program annotation. Computing Reviews Categories 4.12, 5.24
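
For contrast with the parse-tree method of attributes, the same live-variable information is classically computed by iterating backward dataflow equations over an explicit control-flow graph; a minimal sketch (the block structure is hypothetical, and this is the textbook formulation rather than the dissertation's technique):

    def live_variables(blocks, succ):
        """Backward live-variable analysis to a fixed point.

        blocks: name -> (use_set, def_set); succ: name -> list of successors.
        """
        live_in = {b: set() for b in blocks}
        live_out = {b: set() for b in blocks}
        changed = True
        while changed:
            changed = False
            for b, (use, defs) in blocks.items():
                out = set()
                for s in succ.get(b, []):
                    out |= live_in[s]
                inn = use | (out - defs)       # LIVE_in = use U (LIVE_out - def)
                if out != live_out[b] or inn != live_in[b]:
                    live_out[b], live_in[b] = out, inn
                    changed = True
        return live_in, live_out

    # B1: x = 1        B2: y = x + 1; print(y)
    blocks = {"B1": (set(), {"x"}), "B2": ({"x"}, {"y"})}
    succ = {"B1": ["B2"], "B2": []}
    print(live_variables(blocks, succ)[0])     # {'B1': set(), 'B2': {'x'}}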

Bajura, Michael A. (1997)

“Merging Real and Virtual Environments with Video See-Through Head-Mounted Displays”
Under the direction of Henry Fuchs
Department of Computer Science Technical Report TR98-036
Electronic copy available

This dissertation explores the problem of inserting virtual (computer generated) objects into natural scenes in video-based augmented-reality (AR) systems. The augmented-reality system problems addressed are the shorter-term goal of making synthetic objects appear to be more stable and registered and the longer-term goal of presenting proper occlusion and interaction cues between real and synthetic objects. These results can be achieved in video-based AR systems by directly processing video images of the natural scene in real-time. No additional hardware or additional tracking mechanisms are needed above those currently used in augmented-reality systems. The implications of this work suggest a different approach to the design of AR tracking systems in which the user’s world view is used as part of the tracking process. This work shows that tracking system errors which affect the user’s world view can be corrected by measuring these errors dynamically and correcting for them in a way which is not readily apparent or objectionable to the user. This means that either cheaper tracking systems could be used or possibly even no separate tracking system in certain situations. Furthermore, the user’s world view can be used to construct a 3-D model of his environment which can be used to model object interaction and occlusion relations. This is the first step toward further work of properly shading and lighting synthetic objects inserted into natural scenes. A real-time (30 Hz) AR system is demonstrated which uses simple image feature tracking to improve registration and tracking accuracy. This is extended to a more general system which illustrates how reconstruction of natural scene geometry, correct synthetic and real object occlusion, and collision detection could be achieved in real-time. 3-D error analysis is included. A camera calibration appendix contains accuracy measurements.

Banks, David C. (1993)

“Interacting with Surfaces in Four Dimensions Using Computer Graphics”
Under the direction of Stephen M. Pizer
Department of Computer Science Technical Report TR93-011
Electronic copy available

High-speed, high-quality computer graphics enables a user to interactively manipulate surfaces in four dimensions and see them on a computer screen. Surfaces in 4-space exhibit properties that are prohibited in 3-space. For example, nonorientable surfaces may be free of self-intersections in 4-space. Can a user actually make sense of the shapes of surfaces in a larger-dimensional space than the familiar 3D world? Experiment shows he can. A prototype system called Fourphront, running on the graphics engine Pixel-Planes 5 (developed at UNC-Chapel Hill) allows the user to perform interactive algorithms in order to determine some of the properties of a surface in 4-space. This dissertation describes solutions to several problems associated with manipulating surfaces in 4-space. It shows how the user in 3-space can control a surface in 4-space in an intuitive way. It describes how to extend the common illumination models to large numbers of dimensions. And it presents visualization techniques for conveying 4D depth information, for calculating intersections, and for calculating silhouettes.

Bapat, Akash (2019)

“Towards High-Frequency Tracking and Fast Edge-Aware Optimization”
Under the direction of Jan-Michael Frahm

Computer vision has seen tremendous success in refashioning cameras from mere recording equipment to devices which can measure, understand, and sense the surroundings. Efficient algorithms in computer vision have now become essential for processing the vast amounts of image data generated by a multitude of devices as well as enabling real-time applications like augmented/virtual reality. This dissertation advances the state of the art for AR/VR (augmented and virtual reality) tracking systems by increasing the tracking frequency by orders of magnitude and proposes an efficient algorithm for the problem of edge-aware optimization.

AR/VR is the most natural way of interacting with computers, where the physical and digital worlds coexist. We are sitting at the cusp of a radical change in how humans perform and interact with computing, all due to affordable, accessible technology. This has been led by major technological advancements in hardware, sensors, and high-performance processors, and in the tracking, rendering, and display algorithms necessary to enable AR/VR. Humans are sensitive to small misalignments between the real and the virtual world, and tracking at kilo-Hertz frequencies becomes essential. Current vision-based systems fall short, as their tracking frequency is implicitly limited by the frame-rate of the camera. This thesis presents a prototype system which can track at rates orders of magnitude higher than state-of-the-art methods using multiple commodity cameras. The proposed system exploits characteristics of the camera traditionally considered as flaws, namely rolling shutter and radial distortion. The experimental evaluation shows the effectiveness of the method for various degrees of motion.

Furthermore, edge-aware optimization is an indispensable tool in the computer vision arsenal for accurate filtering of depth-data and image-based rendering, which is increasingly being used for content creation and geometry processing for AR/VR. As applications increasingly demand higher resolution and speed, there exists a need to develop methods that scale accordingly. This dissertation proposes such an edge-aware optimization framework which is efficient, accurate, and algorithmically scales well, all of which are much desirable traits not found jointly in the state of the art. The experiments show the effectiveness of the framework in a multitude of computer vision tasks such as computational photography, stereo, and super-resolution.

Bartel, Jacob (2015)

“Predictions to Ease User’s Effort in Scalable Sharing”
Under the direction of Prasun Dewan

Significant user effort is required to choose the recipients of shared information, and this effort grows as the number of potential or target recipients increases. It is our thesis that it is possible to develop new approaches to predict persistent named groups, ephemeral groups, and response times that will reduce user effort.

We predict persistent named groups using the insight that implicit social graphs inferred from messages can be composed with existing prediction techniques designed for explicit social graphs, thereby demonstrating similar grouping patterns in email and communities. However, this approach still requires that users know when to generate such predictions. We predict group creation times based on the intuition that bursts of change in the social graph likely signal named group creation.  While these recommendations can help create new groups, they do not update existing ones. We predict how existing named groups should evolve based on the insight that the growth rates of named groups and the underlying social graph will match.

When appropriate named groups do not exist, it is useful to predict ephemeral groups of information recipients. We have developed an approach to make hierarchical recipient recommendations that groups the elements in a flat list of recommended recipients, and thus is composable with existing flat recipient-recommendation techniques. It is based on the insight that groups of recipients in past messages can be organized in a tree.

To help users select among alternative sets of recipients, we have made predictions about the scale of response time of shared information, based on the insights that messages addressed to similar recipients or containing similar titles will yield similar response times.

Our prediction approaches have been applied to three specific systems – email, Usenet and Stack Overflow – based on the insight that email recipients correspond to Stack Overflow tags and Usenet newsgroups.

We evaluated these approaches with actual user data using new metrics for measuring the differences in scale between predicted and actual response times and measuring the costs of eliminating spurious named-group predictions, editing named-group recommendations for use in future messages, scanning and selecting hierarchical ephemeral group-recommendations, and manually entering recipients.

Bastos, Rui (1999)

“Superposition Rendering: Increased Realism for Interactive Walkthrough”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR00-021
Electronic copy available

The light transport equation, conventionally known as the rendering equation in a slightly different form, is an implicit integral equation, which represents the interactions of light with matter and the distribution of light in a scene. This research describes a signals-and-systems approach to light transport and casts the light transport equation in terms of convolution. Additionally, the light transport problem is linearly decomposed into simpler problems with simpler solutions, which are then recombined to approximate the full solution. The central goal is to provide interactive photorealistic rendering of virtual environments. We show how the light transport problem can be cast in terms of signals-and-systems. The light is the signal and the materials are the systems. The outgoing light from a light transfer at a surface point is given by convolving the incoming light with the material’s impulse response (the material’s BRDF/BTDF). Even though the theoretical approach is presented in directional-space, we present an approximation in screen-space, which enables the exploitation of graphics hardware convolution for approximating the light transport equation. The convolution approach to light transport is not enough to fully solve the light transport problem at interactive rates with current machines. We decompose the light transport problem into simpler problems. The decomposition of the light transport problem is based on distinct characteristics of different parts of the problem: the ideally diffuse, the ideally specular, and the glossy transfers. A technique for interactive rendering of each of these components is presented as well as a technique for superposing the independent components in a multipass manner in real time. Given the extensive use of the superposition principle in this research, we name our approach superposition rendering to distinguish it from other standard hardware-aided multipass rendering approaches.
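
In our notation, the local transfer the abstract describes is the usual reflection integral, and the convolution view arises when the BRDF depends only on the difference of directions:

    L_o(\mathbf{x}, \omega_o) \;=\; \int_{\Omega} f_r(\mathbf{x}, \omega_i \!\rightarrow\! \omega_o)\, L_i(\mathbf{x}, \omega_i)\, \cos\theta_i \, d\omega_i
    \;\;\approx\;\; \big( (L_i \cos\theta) \ast f_r \big)(\omega_o),

which is the form that lets a screen-space approximation lean on graphics-hardware convolution; the directional-space derivation and the screen-space approximation in the dissertation differ in detail from this sketch.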

Bellovin, Steven M. (1982)

“Verifiably Correct Code Generation Using Predicate Transformers”
Under the direction of David L. Parnas

For about fifteen years, computer scientists have known how to prove that programs meet mathematical specifications. However, the methodology has satisfied only theoreticians; “practical” programmers have tended to reject it for a variety of reasons. Among these are the difficulty of rigorously defining the function of a program, and the fact that the proofs of correctness tend to be much longer and much more complex than the programs themselves. We have tackled this problem from two sides. First, we deal with proving compilers correct, a special class of programs for which there exists a rather simple specification of purpose. Second, although we would prefer to have programs which are guaranteed to be correct, we will accept a program where we can easily determine if the output of any given run is correct. People will generally be satisfied with a program that usually runs properly, but will always produce a message if it has not. That way, one can have a great deal of confidence in its output. Working with a compiler for Dijkstra’s language, as defined in A Discipline of Programming, and using predicate transformers for our semantic definitions, we have presented a methodology for showing that individual compilations are correct. This involves verifying formally the basic code templates and many library subroutines, and presenting a set of rules that allows one to perform a pattern match between the generated code and the standard templates. This match, which is fairly easy to perform and could in fact be done by a separate program, is our guarantee that the output of the compiler is correct. The set of rules we present includes a few restrictions on the compiler, most notably that the object code for a given statement must be uncoupled from the code for any other statement. In general, non-optimizing compilers are not affected by any of our restrictions; our DPL compiler, which was developed independently, was not changed in any way to meet our requirements.
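
The predicate-transformer semantics referred to are Dijkstra's standard weakest-precondition rules from A Discipline of Programming (quoted here in our notation, not reproduced from the dissertation), for example:

    \mathrm{wp}(\texttt{skip},\, R) \;=\; R
    \mathrm{wp}(x := e,\, R) \;=\; R[e/x]
    \mathrm{wp}(S_1 ; S_2,\, R) \;=\; \mathrm{wp}(S_1,\, \mathrm{wp}(S_2, R))
    \mathrm{wp}(\textbf{if}\ B_1 \!\rightarrow\! S_1 \;[\!]\; \cdots \;[\!]\; B_n \!\rightarrow\! S_n\ \textbf{fi},\, R)
        \;=\; \big(\textstyle\bigvee_i B_i\big) \,\wedge\, \bigwedge_i \big(B_i \Rightarrow \mathrm{wp}(S_i, R)\big).

Roughly speaking, verifying a basic code template then amounts to showing that its object code satisfies the corresponding wp equation.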

Bennett, Eric P. (2007)

“Computational Video Enhancement”
Under the direction of Leonard McMillan
Electronic copy available

During a video, each scene element is often imaged many times by the sensor. I propose that by combining information from each captured frame throughout the video it is possible to enhance the entire video. This concept is the basis of computational video enhancement. In this dissertation, the viability of computational video processing is explored in addition to presenting applications where this processing method can be leveraged. Spatio-temporal volumes are employed as a framework for efficient computational video processing, and I extend them by introducing sheared volumes. Shearing provides spatial frame warping for alignment between frames, allowing temporally-adjacent samples to be processed using traditional editing and filtering approaches. An efficient filter-graph framework is presented to support this processing along with a prototype video editing and manipulation tool utilizing that framework. To demonstrate the integration of samples from multiple frames, I demonstrate methods for improving poorly exposed low-light videos to achieve improved results. This integration is guided by a tone-mapping process to determine spatially-varying optimal exposures and an adaptive spatio-temporal filter to integrate the samples. Low-light video enhancement is also addressed in the multispectral domain by combining visible and infrared samples. This is facilitated by the use of a novel multispectral edge-preserving filter to enhance only the visible spectrum video. Finally, the temporal characteristics of videos are altered by a computational video recompiling process. By resampling the video-rate footage, novel time-lapse sequences are found that optimize for user-specified characteristics. The resulting shorter video is a more faithful summary of the original source than a traditional time-lapse video. Simultaneously, new synthetic exposures are generated to alter the output video’s aliasing characteristics.
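
A toy illustration of processing temporally-adjacent samples in a spatio-temporal volume (a plain temporal median over already-aligned frames; the dissertation's sheared volumes, tone-mapping guidance, and adaptive filtering are considerably more involved):

    import numpy as np

    def temporal_denoise(frames, radius=2):
        """Median-filter each pixel across a window of aligned frames.

        frames: (T, H, W, C) spatio-temporal volume whose frames are assumed
        already registered so corresponding scene points share (x, y).
        """
        T = frames.shape[0]
        out = np.empty_like(frames)
        for t in range(T):
            lo, hi = max(0, t - radius), min(T, t + radius + 1)
            out[t] = np.median(frames[lo:hi], axis=0).astype(frames.dtype)
        return out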

Bentley, Jon L. (1976)

“Divide and Conquer Algorithms for Closest Point Problems in Multidimensional Space”
Under the direction of Donald F. Stanat

The contributions contained in this dissertation can be broadly classified as falling into three areas: multidimensional algorithms, the “divide and conquer” strategy, and principles of algorithm design. Contributions to multidimensional algorithms are twofold: basic results and basic methods. The results deal with algorithms for determining properties of sets of N points in k dimensional space. Among other results it is shown that the closest pair among the N points can be found in time proportional to N log N and that the nearest neighbor to each point among the N can be found in time proportional to N(log N)^(k-1) (for k > 2). The basic methods include an algorithm schema applicable to many multidimensional problems and fundamental concepts for dealing with such problems. The algorithms in this dissertation demonstrate the power of the divide and conquer strategy. The strategy is shown to be applicable to multidimensional problems. The basic technique is modified in many interesting ways to create faster algorithms. The final area to which this dissertation contributes is algorithm design. Instead of merely presenting the results herein as finished products, they are arrived at through a detailed development process. This development is one of the few written records of the development of an asymptotically fast algorithm; as such it is a suitable basis for teaching algorithm design. The general principles of algorithm design employed are enumerated.
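
The planar (k = 2) case of the closest-pair result illustrates the schema: split on the median x-coordinate, recurse on the halves, then examine only a thin strip around the dividing line, where each point needs comparison with a constant number of neighbors. A sketch in our own presentation, not the dissertation's:

    import math

    def closest_pair(points):
        """Closest pair of 2-D points by divide and conquer (the planar case).

        Returns (distance, (p, q)).  O(N log N) after the initial sort.
        """
        pts = sorted(points)                       # sort once by x

        def solve(P):
            n = len(P)
            if n <= 3:                             # brute force small subproblems
                best = (float("inf"), None)
                for i in range(n):
                    for j in range(i + 1, n):
                        best = min(best, (math.dist(P[i], P[j]), (P[i], P[j])))
                return best
            mid = n // 2
            x_mid = P[mid][0]
            best = min(solve(P[:mid]), solve(P[mid:]))       # conquer the halves
            d = best[0]
            # combine: only points within d of the dividing line matter, and each
            # needs to be checked against a constant number of strip neighbors
            strip = sorted((p for p in P if abs(p[0] - x_mid) < d),
                           key=lambda p: p[1])
            for i, p in enumerate(strip):
                for q in strip[i + 1: i + 8]:
                    if q[1] - p[1] >= d:
                        break
                    best = min(best, (math.dist(p, q), (p, q)))
            return best

        return solve(pts)

    print(closest_pair([(0, 0), (5, 4), (3, 1), (0.5, 0.5), (9, 9)]))
    # (0.707..., ((0, 0), (0.5, 0.5)))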

Bera, Aniket (2017)

“Interactive Tracking, Prediction, and Behavior Learning of Pedestrians in Dense Crowds”
Under the direction of Dinesh Manocha

The ability to automatically recognize human motions and behaviors is a key skill that autonomous machines must exhibit in order to interact intelligently with a human-inhabited environment. The capabilities autonomous machines should have include computing the motion trajectory of each pedestrian in a crowd, predicting his or her position in the near future, and analyzing the personality characteristics of the pedestrian. Such techniques are frequently used for collision-free robot navigation, data-driven crowd simulation, and crowd surveillance applications. However, prior methods for these problems have been restricted to low-density or sparse crowds where the pedestrian movement is modeled using simple motion models.

In this thesis, we present several interactive algorithms to extract pedestrian trajectories from videos in dense crowds. Our approach combines different pedestrian motion models with particle tracking and mixture models and can obtain ~20% accuracy improvement in medium-density crowds. We compute the pedestrian dynamics from these trajectories using Bayesian learning techniques and combine them with global methods for long-term pedestrian prediction in densely crowded settings. Finally, we combine these techniques with Personality Trait Theory to automatically classify the dynamic behavior or the personality of a pedestrian based on his or her movements in a crowded scene. The resulting algorithms are robust and can handle sparse and noisy motion trajectories. We demonstrate the benefits of our long-term prediction and behavior classification methods in dense crowds and highlight the benefits over prior techniques.  

We highlight the performance of our novel algorithms on three different applications. The first application is interactive data-driven crowd simulation, which includes crowd replication as well as the combination of pedestrian behaviors from different videos. Secondly, we combine the prediction scheme with proxemic characteristics from psychology and use them to perform socially-aware navigation. Finally, we present novel techniques for anomaly detection in low- to medium-density crowd videos using trajectory-level behavior learning.

Bergman, Lawrence D. (1993)

“VIEW–A System for Prototyping Scientific Visualizations”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR93-065
Electronic copy available

Different visual representations emphasize different aspects of scientific information. As a scientist searches for representations that yield the most profound insights, she wants to try a variety of representations quickly and easily. Current visualization systems emphasize the rapid application of visual representations to new datasets, rather than the rapid development of new visual representations. The dissertation addresses the question, “How can we facilitate exploratory visualization, particularly creation of new representations?” A design-process-based model is proposed. Based on the principles suggested by this model, VIEW, an interactive molecular visualization system, was developed. The VIEW system provides a two-pronged solution to the proposed problem. First, the system supplies a library of drawing tools. These tools allow the user to sketch geometric representations, specifying database parameters by interactively selecting geometric elements on-screen. Besides generating geometry, drawing tools modify color, change geometric parameters, reposition geometry, and animate groups. Second, a facility for developing new drawing tools or modifying existing ones allows the user to extend the library. Tools are specified in a C-like script language. With this language, the user defines new representations and customizes the interaction style of the tools. The system is designed to facilitate a rapid code-try-recode cycle. In addition to the interpreted language, a development environment includes a Macintosh-like editor and an interactive debugger that shows in real time the visual effects of changes to a tool script. The dissertation presents the design model, describes the VIEW system, and then demonstrates the utility of this design-process-based visualization system through a series of case studies.

Best, Andrew (2018)

Motion Planning for Multi-Agent Systems with Physics-Based and Behavior Constraints

Under the direction of Dinesh Manocha

Man-made entities and humans rely on movement as an essential form of interaction with the world.

Whether it is an autonomous vehicle navigating crowded roadways or a simulated pedestrian traversing a virtual world, each entity must compute safe, effective paths to achieve its goals. In addition, these entities, termed agents, are subject to unique physical and behavioral limitations within their environment. For example, vehicles have a finite physical turning radius and must obey behavioral constraints such as traffic signals and rules of the road. Effective motion-planning algorithms for diverse agents must account for these physical and behavioral constraints.

In this dissertation, we present novel motion-planning algorithms that capture aspects of the behavioral and physics-based constraints for virtual agents. We describe representational approaches to capture specific physical constraints on the various agents and propose abstractions to model behavior constraints affecting them. We then describe algorithms to plan motions for agents who are subject to the modeled constraints.

First, we describe a biomechanically accurate elliptical representation for virtual pedestrians; we also describe human-like movement constraints corresponding to shoulder-turning and side-stepping in dense scenes. We detail a novel motion-planning algorithm extending velocity obstacles to generate paths for hundreds of elliptical agents at interactive rates. Next, we describe a model to encode dynamics and traffic-like behavior constraints for autonomous vehicles in urban and highway environments. We describe a motion-planning algorithm to generate safe, high-speed avoidance maneuvers using a novel optimization function and modified velocity-obstacle formulation, and we present a simulation framework to evaluate driving strategies. Next, we present an approach to incorporating high-level reasoning to model the motions and behaviors of virtual agents in terms of verbal interactions with other agents or avatars. Our approach leverages natural language interaction to reduce uncertainty and generate effective plans. Finally, we describe an application of our techniques to simulate pedestrian behaviors for gathering simulated data about boarding, deplaning, and evacuating an aircraft.

Bhalachandra, Sridutt (2018)

Runtime Methods to Improve Energy Efficiency in Supercomputing Applications

Under the direction of Allan K. Porterfield & Jan Prins

Energy efficiency in supercomputing is critical to limit operating costs and carbon footprints. While the energy efficiency of future supercomputing centers needs to improve at all levels, the energy consumed by the processing units is a large fraction of the total energy consumed by High Performance Computing (HPC) systems. HPC applications use a parallel programming paradigm like the Message Passing Interface (MPI) to coordinate computation and communication among thousands of processors. With dynamically changing factors in both hardware and software affecting the energy usage of processors, there is a need for power monitoring and regulation at runtime to achieve energy savings.

This dissertation highlights an adaptive runtime framework that enables processors with core-specific power control to reduce power with little or no performance impact by dynamically adapting to workload characteristics. Two opportunities to improve the energy efficiency of processors running MPI applications are identified – computational workload imbalance and waiting on memory. Monitoring of performance and power regulation is performed by the framework transparently within the MPI runtime system, eliminating the need for code changes to MPI applications. The effect of enforcing power limits (capping) on processors is also investigated.

Experiments on 32 nodes (1024 cores) show that in the presence of workload imbalance, the runtime reduces the frequency on cores not on the critical path, thereby reducing power and hence energy usage without deteriorating performance. Using this runtime, six MPI mini-applications and a full MPI application show an overall 20% decrease in energy use with less than 1% increase in execution time. In addition, the lowering of frequency on non-critical cores reduces run-to-run performance variation and improves performance. For the full application, an average speedup of 11% is seen, while the power is lowered by about 31% for an energy savings of up to 42%. Another experiment on 16 nodes (256 cores) that are power capped also shows performance improvement along with power reduction. Thus, energy optimization can also be performance optimization. For applications that are limited by memory access times, new memory metrics are introduced that facilitate lowering power by up to 32% without adversely impacting performance.

Biagioni, Edoardo S. (1992)

“Scan Directed Load Balancing”
Under the direction of Gyula A. Mago and Jan F. Prins
Department of Computer Science Technical Report TR91-045
Electronic copy available

Scan Directed Load Balancing is an algorithm for dynamic load balancing on k-dimensional mesh-connected computers. In one dimension, a scan computes the load to the left of each processor and the total load. The difference between the load to the left and the average load times the number of processors to the left is the number of active data points to be shifted to balance the load. In two dimensions, scans are used to compute the load of each column of data and the load to the left of each column of processors. The difference between the load to the left and the average load times the number of processor columns to the left is the load of the data columns to be shifted by this column of processors; the computation is repeated for the rows. In more than two dimensions, the load is computed for (hyper-) planes of data and is balanced by shifting data (hyper-) planes among processor (hyper-) planes; the basic one-dimensional step of the algorithm is repeated k times. The algorithm targets applications that can be expressed as local computations on a grid and whose dynamic activity is unevenly distributed. In one such computation, Variable Conductance Diffusion, different pixels diffuse for different numbers of iterations, with those that require many iterations unevenly distributed and, in the absence of load balancing, often located on a small subset of the processors. Scan Directed Load Balancing is effective in speeding up Variable Conductance Diffusion for a class of images (512 X 512 pixel), and very effective for selected images in which only a small fraction of the pixels requires many iterations. This implementation, and that for a parallel interpreter for the functional language FFP that uses Scan Directed Load Balancing to both manage storage and balance load, are for UNC’s MasPar MP-1, with 4096 processors embedded in a 2-dimensional mesh. Scan Directed Load Balancing does not guarantee optimal load balance, but always assigns neighboring data to the same or to neighboring processors, so communication between neighboring data is local and fast.
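
The one-dimensional balancing step described above reduces to a prefix scan; the numpy sketch below illustrates it under stated assumptions (the array name `load`, the sign convention for shifts, and the toy input are illustrative and not taken from the dissertation).

```python
import numpy as np

def shifts_1d(load):
    """Scan-directed load balancing, one-dimensional step (illustrative sketch).

    load[i] is the number of active data points on processor i. A prefix scan
    gives the load strictly to the left of each processor; the shift across the
    boundary left of processor i is that left load minus the average load times
    the number of processors to the left. Positive values move data rightward
    across the boundary, negative values leftward.
    """
    load = np.asarray(load, dtype=float)
    left = np.concatenate(([0.0], np.cumsum(load)[:-1]))  # load left of each processor
    avg = load.sum() / len(load)
    return left - avg * np.arange(len(load))

print(shifts_1d([8, 2, 2, 4]))   # boundary shifts: [0, 4, 2, 0]
```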

Bishop, T. Gary (1984)

“Self-Tracker: A Smart Optical Sensor on Silicon”
Under the direction of Henry Fuchs
Department of Computer Science Technical Report TR84-002
Electronic copy available

A new system for real-time, three-dimensional computer input is described. The system will use a cluster of identical custom integrated circuits with outward looking lenses as an optical sensing device. Each custom integrated sensor chip measures and reports the shift in its one-dimensional image of the stationary room environment. These shifts will be processed by a separate general-purpose computer to extract the three-dimensional motion of the cluster. The expected advantages of this new approach are unrestricted user motion, large operating environments, capability for simultaneous tracking of several users, passive tracking with no moving parts, and freedom from electromagnetic interference. The fundamental concept for the design of the sensor chip relies on a cyclic relationship between speed and simplicity. If the frame rate is fast, the changes from image to image will be small. Small changes can be tracked with a simple algorithm. This simple algorithm can be implemented with small circuitry. The small circuitry lets a single chip hold the complete sensor, both imaging and image processing. Such implementation allows each sensor to be fast because all high-bandwidth communication is done on-chip. This cyclic relationship can spiral up as the design is iterated, with faster and simpler operation, or down, with slower and more complex operation. The system design sequence described here has been spiraling up. System, sensor, algorithm, and processor designs have each had several iterations. Most recently, a prototype sensor chip has been designed, fabricated, and tested. The prototype includes a one-dimensional camera and circuitry for image tracking that operates at 1000 to 4000 frames per second in ordinary room light. As part of this research, photosensors that operate at millisecond rates in ordinary room light with modest lenses have been designed, tested and fabricated on standard digital nMOS lines. They may be useful for designers of other integrated optical sensors.

Blate, Alex (2019)

Mitigating Real-Virtual Disparities in Illumination and Dynamic Position in Optical See-Through Augmented Reality

Under the direction of Henry Fuchs and Mary Whitton

A user of optical see-through (OST) augmented reality (AR) displays sees virtual objects optically combined with and referenced to the user’s view of the real world. The requirements for OST AR displays are significantly more stringent than those of virtual reality (VR) displays, wherein the user can only see the VR display. One class of problems arising in OST AR involves real-virtual disparities – disparities between the real world and virtual content. The present work focuses on and takes significant steps towards addressing two such disparities: illumination and dynamic position.

Illumination: Limited display dynamic range leads to disparities between the luminosity of virtual objects and that of their real-world surroundings. OST AR displays should support dynamic ranges commensurate with those of real-world scenes and do so without sacrificing display latency. We propose and demonstrate an implementation of a new illumination method for displays based upon binary-frame spatial light modulators wherein, on a per-binary-frame basis, the luminosity and color of illumination can be chosen arbitrarily within the illuminator’s dynamic range. Using this method, the number of binary frames per integrated frame is linear in bits-per-pixel, compared to traditional illumination, which is exponential in bits-per-pixel. As such, we are able to extend the display’s dynamic range while reducing display latency. To implement scene-adaptive illumination, the display also requires information about the luminosity of the real-world scene at various positions; to this end, we present a prototype high-dynamic-range position-sensitive light sensor. This sensor is used in the demonstration of an OST AR display with scene-adaptive illumination over 115 dB of luminous dynamic range.
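
To make the linear-versus-exponential claim concrete, the toy calculation below assumes equal-length binary frames for the traditional case and power-of-two per-frame brightness for the proposed case; the exact accounting in the dissertation may differ.

```python
def binary_frames_needed(bits_per_pixel):
    """Illustrative frame-count comparison for a binary-frame SLM display.

    With constant illumination and equal-length binary frames, b bits of gray
    per pixel require on the order of 2**b - 1 binary frames per integrated
    frame. If the illuminator's brightness can be set per binary frame (e.g.,
    scaled by powers of two), b binary frames suffice.
    """
    traditional = 2 ** bits_per_pixel - 1      # exponential in bits per pixel
    per_frame_illumination = bits_per_pixel    # linear in bits per pixel
    return traditional, per_frame_illumination

for b in (4, 8, 10):
    print(b, binary_frames_needed(b))          # e.g. 8 -> (255, 8)
```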

Dynamic Position: Low head-tracking sample rates and/or high tracking latencies cause transient disparities between the locations of real and virtual objects (real-virtual displacement) during user head movement (dynamic position). OST AR systems require head tracking with higher sample rates, lower latency, and greater precision than previously demonstrated. We present the design and implementation of a 6-degree-of-freedom optical head-tracking instrument with, to our knowledge, the lowest latency and highest sample rate of any previously reported tracker(s); this instrument is suitable for use in developing and studying low-latency OST AR displays and related perceptual phenomena. Additionally, we demonstrate a new latency measurement technique that directly measures real-virtual displacement caused by tracking latency. The tracking instrument outputs 50,000 pose samples per second, has motion-to-pose latency of 28 μs, and dynamic error of less than 2 arcminutes at over 500°/s of head rotation. The tracking instrument’s tracking volume, about 120 x 120 x 250 mm, allows the full range of natural head movement for a seated person; extension of the tracker’s tracking volume and trade-offs between tracking volume and tracking performance (such as spatial resolution) are discussed in detail.

Block, Aaron (2008)

“Adaptive Multiprocessor Real-Time Systems”
Under the direction of James Anderson
Electronic copy available

Over the past few years, as multicore technology has become cost-effective, multiprocessor systems have become increasingly prevalent. The growing availability of such systems has spurred the development of computationally-intensive applications for which single-processor designs are insufficient. Many such applications have timing constraints; such timing constraints are often not static, but may change in response to both external and internal stimuli. Examples of such applications include tracking systems and many multimedia applications. Motivated by these observations, this dissertation proposes several different adaptive scheduling algorithms that are capable of guaranteeing flexible timing constraints on multiprocessor platforms. Under traditional task models (e.g., periodic, sporadic, etc.), the schedulability of a system is based on each task’s worst-case execution time (WCET), which defines the maximum amount of time that each of its jobs can execute. The disadvantage of using WCETs is that systems may be deemed unschedulable even if they would function correctly most of the time when deployed. Adaptive real-time scheduling algorithms allow the timing constraints of applications to be adjusted based upon runtime conditions, instead of always using fixed timing constraints based upon WCETs. While there is a substantial body of prior work on scheduling applications with static timing constraints on multiprocessor systems, prior to this dissertation, no adaptive multiprocessor scheduling approach existed that is capable of ensuring bounded “error” (where error is measured by comparison to an ideal allocation). In this dissertation, this limitation is addressed by proposing five different multiprocessor scheduling algorithms that allow a task’s timing constraints to change at runtime. The five proposed adaptive algorithms are based on different non-adaptive multiprocessor scheduling algorithms that place different restrictions on task migrations and preemptions. The relative advantages of these algorithms are compared by simulating both the Whisper human tracking system and the Virtual Exposure Camera (VEC), both of which were developed at The University of North Carolina at Chapel Hill. In addition, a feedback-based adaptive framework is proposed that not only allows timing constraints to adapt at runtime, but also detects which adaptions are needed. An implementation of this adaptive framework on a real-time multiprocessor testbed is discussed and its performance is evaluated by using the core operations of both Whisper and VEC. From this dissertation, it can be concluded that feedback and optimization techniques can be used to determine at runtime which adaptions are needed. Moreover, the accuracy of an adaptive algorithm can be improved by allowing more frequent task migrations and preemptions; however, this accuracy comes at the expense of higher migration and preemption costs, which impacts average-case performance. Thus, there is a tradeoff between accuracy and average-case performance that depends on the frequency of task migrations/preemptions and their cost.

Bokinsky, Alexandra (2003)

“Multivariate Data Visualization with Data-Driven Spots”
Under the direction of Frederick Brooks
Electronic copy available

This dissertation addresses an important problem in the visualization of scientific data: Given a family of single-valued functions Fk(x,y) in two dimensions, all sampled on a regular, finite, two-dimensional grid, i,j, devise visualizations that allow each function to be visually discriminated and studied separately and studied for comparisons and correlations with other functions in the family. I developed a technique called Data-Driven Spots (DDS), where each function Fk(x,y) is sampled by a random collection of circular Gaussians with a uniform standard deviation, rather than presented in full. Human visual perception estimates the function’s values where it is not represented in the sample. These sampled functions, called layers, are presented overlaid. Each function is sampled and displayed in some regions where the others are not, thus each can be discriminated separately; since all are shown simultaneously, correlations can also be estimated. Layers are displayed with alpha-blending, such that each layer is distinguished by hue and/or the standard deviation of the Gaussians. Data values are multiplied by the Gaussian at each i,j grid point; the result determines the opacity of that layer at that point. Blended with a neutral gray background, each layer has color saturation at i,j proportional to the modulated data value. Since opacities are displayed, lower layers are mostly visible between the spots on upper layers. I built a DDS Toolkit that enables users to construct DDS visualizations of function families. Construction of effective DDS visualizations requires interactive exploration of visualization parameters, which the toolkit facilitates. I used the toolkit to prepare visualizations of many kinds of function families; a collection of images is presented. To evaluate the effectiveness of the DDS technique, I performed user studies. These studies showed that performance on a spatial correlation task was better for overlaid DDS images than for side-by-side DDS images, that alpha-blended layers are visually discriminable in the presence of up to seven distractors, and that animation of layers with respect to each other dramatically increases their visual salience. Animation permits each Fk(x,y) to be seen at every i,j over time; human visual perception integrates non-simultaneously displayed values into a coherent understanding.
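
A rough numpy sketch of the per-layer opacity computation described above follows; the summed-and-clipped Gaussian spot mask, the function name, and the parameter choices are illustrative assumptions rather than the DDS Toolkit's actual interface.

```python
import numpy as np

def dds_layer_alpha(F, centers, sigma):
    """Illustrative Data-Driven-Spots-style opacity for one layer.

    F is a 2-D array of data values scaled to [0, 1]; `centers` is a list of
    (row, col) spot centers; `sigma` is the common standard deviation of the
    circular Gaussian spots. The data value at each grid point is multiplied
    by the summed (clipped) spot mask, and the product serves as that layer's
    alpha when blending over a neutral gray background.
    """
    rows, cols = np.indices(F.shape)
    mask = np.zeros_like(F, dtype=float)
    for r0, c0 in centers:
        mask += np.exp(-((rows - r0) ** 2 + (cols - c0) ** 2) / (2 * sigma ** 2))
    mask = np.clip(mask, 0.0, 1.0)
    return F * mask                      # per-pixel opacity of this layer

# Example: one layer of a 64x64 field with a few spots.
rng = np.random.default_rng(0)
F = rng.random((64, 64))
alpha = dds_layer_alpha(F, centers=[(10, 12), (40, 50), (30, 20)], sigma=5.0)
```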

Bollella, Gregory (1997)

“Slotted Priorities: Supporting Real-Time Computing Within General-Purpose Operating Systems.”
Under the direction of Kevin Jeffay
Electronic copy available

Recent advances in network technologies, processor capabilities, and microcomputer system hardware, coupled with the explosive growth of the Internet and on-line data access, have created new demands on personal computer operating systems and hardware. In large part, these demands are for the ability to acquire, manipulate, display, and store multimedia data. The computational processes that successfully acquire and display multimedia data necessarily have deadlines. That is, the computation must be complete before a specified point in time. Currently, no general-purpose operating systems support such real-time processes. We have developed a software architecture, called slotted priorities, that defines a way to add support for real-time computation to existing general-purpose operating systems for uniprocessor machine architectures. The slotted priorities architecture shares the resources of a computing system between a general-purpose operating system and a real-time kernel. Software components called executives manage how an instance of a resource is shared. Executives ensure that the real-time kernel can gain access to the resource at precise times. The resulting operating system will be able to guarantee that certain computations will always complete before their stated deadline. The modifications to the general-purpose operating system are modest. The architecture is comprised of a resource model, an execution model, and a programming model. The resource model is a classification of resources according to characteristics relevant to the sharing of the resources between the real-time kernel and the general-purpose operating system. The execution model defines how real-time tasks acquire the processor. The programming model defines how programmers write and think about real-time programs for an implementation of the slotted priorities architecture. Finally, we develop a feasibility test which can determine if a set of periodic real-time threads will all meet their deadlines when executed on a system implementing this architecture. We describe an implementation of the architecture and a set of experiments that validate the implementation. Two real-time demonstration applications were built and executed on the test implementation. Results and analysis of those applications are also presented.

Borland, David M. (2007)

“Flexible Occlusion Rendering for Improved Views of Three-Dimensional Medical Images”
Under the direction of Russell M. Taylor
Electronic copy available

The goal of this work is to enable more rapid and accurate diagnosis of pathology from three-dimensional (3D) medical images by augmenting standard volume rendering techniques to display otherwise-occluded features within the volume. When displaying such data sets with volume rendering, appropriate selection of the transfer function is critical for determining which features of the data will be displayed. In many cases, however, no transfer function is able to produce the most useful views for diagnosis of pathology. Flexible occlusion rendering (FOR) is an addition to standard ray-cast volume rendering that modulates accumulated color and opacity along each ray upon detecting features indicating the separation between objects of the same intensity range. For contrast-enhanced MRI and CT data, these separation features are intensity peaks. To detect these peaks, a hysteresis-based method is used to reduce sensitivity to noise. To further reduce noise and enable control over the spatial scale of the features detected, a smoothed version of the original data set is used for feature detection, while rendering the original data at high resolution. Separating the occlusion feature detection from the volume rendering transfer function enables robust occlusion determination and seamless transition from occluded views to non-occluded views of surfaces during virtual flythroughs. FOR has been applied to virtual arthroscopy of joints from MRI data. For example, when evaluating a shoulder joint, a view showing the entire shoulder socket is the most useful view for rapid evaluation. However, such views are impossible using standard volume rendering, as the material in the humeral head will occlude the shoulder socket from this viewpoint. FOR automatically culls the occluding material of the humeral head, enabling display of the entire shoulder socket. FOR has also been successfully applied to virtual ureteroscopy of the renal collecting system from CT data, and knee fracture visualization from CT data.

Bowen, Chris (2018)

Robots that Learn and Plan: Unifying Robot Learning and Motion Planning for Generalized Task Execution

Under the direction of Ron Alterovitz

Robots have the potential to assist people with a variety of everyday tasks, but to achieve that potential, robots require software capable of planning and executing motions in cluttered environments. To address this, over the past few decades, roboticists have developed numerous methods for planning motions to avoid obstacles with increasingly stronger guarantees, from probabilistic completeness to asymptotic optimality. Some of these methods have even considered the types of constraints that must be satisfied to perform useful tasks, but these constraints must generally be manually specified. In recent years, there has been a resurgence of methods for automatic learning of tasks from human-provided demonstrations. Unfortunately, these two fields, task learning and motion planning, have evolved largely separately from one another, and the learned models are often not usable by motion planners.

In this thesis, I aim to bridge the gap between robot task learning and motion planning by employing a learned task model that can subsequently be leveraged by an asymptotically-optimal motion planner to autonomously execute the task. First, I show that application of a motion planner enables task performance while avoiding obstacles not present in the demonstrations and extend this to dynamic environments by replanning at reactive rates. Second, I generalize the model estimation method to accommodate time-invariant model parameters, allowing more information to be gleaned from the demonstrations. Third, I describe a more principled approach to temporal registration for such learning methods that mirrors the ultimate integration with a motion planner and often reduces the number of demonstrations required. Finally, I extend this framework to the domain of mobile manipulation. I empirically evaluate each of these contributions on multiple household tasks using the Aldebaran Nao, Rethink Robotics Baxter, and Fetch mobile manipulator robots to show that these approaches improve task execution success rates.

Brandenburg, Bjoern B. (2011)

“Scheduling and Locking in Multiprocessor Real-Time Operating Systems”
Under the direction of James H. Anderson
Electronic copy available

With the widespread adoption of multicore architectures, multiprocessors are now a standard deployment platform for (soft) real-time applications. This dissertation addresses two questions fundamental to the design of multicore-ready real-time operating systems: (1) Which scheduling policies offer the greatest flexibility in satisfying temporal constraints; and (2) which locking algorithms should be used to avoid unpredictable delays? With regard to Question 1, LITMUSRT, a real-time extension of the Linux kernel, is presented and its design is discussed in detail. Notably, LITMUSRT implements link-based scheduling, a novel approach to controlling blocking due to non-preemptive sections. Each implemented scheduler (22 configurations in total) is evaluated under consideration of overheads on a 24-core Intel Xeon platform. The experiments show that partitioned earliest-deadline first (EDF) scheduling is generally preferable in a hard real-time setting, whereas global and clustered EDF scheduling are effective in a soft real-time setting. With regard to Question 2, real-time locking protocols are required to ensure that the maximum delay due to priority inversion can be bounded a priori. Several spinlock- and semaphore-based multiprocessor real-time locking protocols for mutual exclusion (mutex), reader-writer (RW) exclusion, and k-exclusion are proposed and analyzed. A new category of RW locks suited to worst-case analysis, termed phase-fair locks, is proposed and three efficient phase-fair spinlock implementations are provided (one with few atomic operations, one with low space requirements, and one with constant RMR complexity). Maximum priority-inversion blocking is proposed as a natural complexity measure for semaphore protocols. It is shown that there are two classes of schedulability analysis, namely suspension-oblivious and suspension-aware analysis, that yield two different lower bounds on blocking. Five asymptotically optimal locking protocols are designed and analyzed: a family of mutex, RW, and k-exclusion protocols for global, partitioned, and clustered scheduling that are asymptotically optimal in the suspension-oblivious case, and a mutex protocol for partitioned scheduling that is asymptotically optimal in the suspension-aware case. A LITMUSRT-based empirical evaluation is presented that shows these protocols to be practical.

Britton, Edward G. (1977)

“A Methodology for the Ergonomic Design of Interactive Computer Graphic Systems, and its Application to Crystallography”
Under the direction of Frederick P. Brooks, Jr.

This dissertation presents a methodology for the ergonomic design, or human engineering, of interactive computer graphic systems. It proposes human-factors principles based on linguistic and psychological considerations, and proposes an order for designing the features of a system. As a vehicle for developing this methodology, we designed and built a system, described herein, with which biochemists can determine the structures of macromolecules. Crystallographers have used this system for over one thousand hours and published chemical results based on their work. We assert that the ease with which crystallographers use this system indicates that our methodology for ergonomic design is useful.

Broadhurst, Robert Elijah (2008)

“Compact Appearance in Object Populations Using Quantile Function Based Distribution Families”
Under the direction of Stephen Pizer
Electronic copy available

Statistical measurements of the variability of probability distributions are important in many image analysis applications. For instance, let the appearance of a material in a picture be represented by the distribution of its pixel values. It is necessary to model the variability of these distributions to understand how the appearance of the material is affected by viewpoint, lighting, or scale changes. In medical imaging, an organ’s appearance varies not only due to the parameters of the imaging device but also due to changes in the organ, either within a patient day to day or between patients. Classical statistical techniques can be used to study distribution variability given a distribution representation for which the variation is linear. For many distributions relevant to image analysis, standard representations are either too constrained or have nonlinear variation, in which case classical linear multivariate statistics are not applicable. This dissertation presents general, non-parametric representations of a variety of distribution types, based on the quantile function, for which a useful class of variability is linear. A key result is that principal component analysis can be used to efficiently parameterize their variability, i.e., construct a distribution family. The quantile function framework is applied to two driving problems in this dissertation: (1) the statistical characterization of the texture properties of materials for classification, and (2) the statistical characterization of the appearance of objects in images for deformable model based segmentation. It is shown that in both applications the observed variability is appropriately linear, allowing efficient modeling. State-of-the-art results are achieved for both the classification of materials in the Columbia-Utrecht database and the segmentation of the kidney, bladder, and prostate in 3D CT images. While the applications presented in this dissertation use image-based appearance observations in the field of image analysis, the methods and theory should be widely applicable to the variety of observations found in many scientific fields, and, more specifically, to shape observations in the field of computer vision.
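
The core idea, representing each distribution by its quantile function so that principal component analysis can model its variability, can be sketched in a few lines of numpy; the toy Gaussian population and the choice of 64 quantile samples below are assumptions for illustration only.

```python
import numpy as np

def quantile_representation(samples, n_quantiles=64):
    """Represent each 1-D sample distribution by its sampled quantile function."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    return np.stack([np.quantile(s, qs) for s in samples])

# Toy population: per-"image" pixel-value distributions with varying mean and spread.
rng = np.random.default_rng(1)
population = [rng.normal(loc=m, scale=s, size=500)
              for m, s in zip(rng.uniform(0, 2, 30), rng.uniform(0.5, 2, 30))]
Q = quantile_representation(population)        # one row per distribution

# PCA on the quantile functions: mean quantile function plus principal modes.
Q_centered = Q - Q.mean(axis=0)
U, S, Vt = np.linalg.svd(Q_centered, full_matrices=False)
modes = Vt[:2]                    # first two modes of distribution variability
scores = Q_centered @ modes.T     # per-distribution coefficients in the family
```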

Brown, Peter H. (2002)

“Multiscale Evaluation of 3D Shape Perception in Computer Graphics”
Under the direction of Christina Burbeck
Electronic copy available

This dissertation describes a tool and associated analysis techniques (collectively called the Multiscale Test of Perceived Shape, or MTOPS) for measuring observers’ perceptions of 3D surface orientation in computer-graphics visualizations at multiple spatial scales. Using this tool and techniques, I demonstrated that such multiscale measurements are both possible and fruitful. Specifically, I showed the following:

  • Observers consistently made different surface-orientation settings at different scales, indicating both that perceived orientation changed across scale and that MTOPS was able to measure that change.
  • By comparing observers’ settings to the stimulus geometry, I could measure the across-scale effects of adding a given depth cue to a visualization. Specifically, I found the following:
    • Adding stereo to a non-stereo presentation caused the accuracy of observers’ settings to improve more at large scale than at small.
    • Adding head-motion parallax to a stereo presentation caused a modest improvement in observers’ accuracy, mostly at small scale.
    • Adding automatic object motion to a non-stereo presentation improved observers’ accuracy substantially, at many scales. Adding user-controlled object motion to a non-stereo presentation gave a less consistent, and for most users smaller, accuracy benefit.
  • The MTOPS tool and techniques were sufficiently sensitive to capture differences in those multiscale depth-cue effects due to surface complexity and observer knowledge.

Brownlee, Jr., Edward H. (1975)

“Lossiness in Tessellation Automata”
Under the direction of Stephen F. Weiss

A tessellation automaton is an infinite array of finite state automata interacting in a prescribed manner. The state of the entire array at any instant is called its configuration. A tessellation automaton is said to be lossy if its global transition function is not one-to-one, that is, if some configuration has more than one predecessor. Lossiness is shown to be equivalent to three other properties of tessellation automata. One of these leads to a sufficient test for lossiness which is easily applied to the local transition function. A necessary condition is also given. Finally, it is shown that certain kinds of computation cannot be carried out in lossless spaces.

Burns, Eric (2007)

“MACBETH: Management of Avatar Conflict By Employment of a Technique Hybrid”
Under the direction of Frederick P. Brooks, Jr.
Electronic copy available

Since virtual objects do not prevent users from penetrating them, a virtual environment user may place his real hand inside a virtual object. If the virtual environment system prevents the user’s hand avatar from penetrating the object, the hand avatar must be placed somewhere other than the user’s real hand position. I propose a technique, named MACBETH (Management of Avatar Conflict By Employment of a Technique Hybrid), for managing the position of a user’s hand avatar in a natural manner after it has been separated from the user’s real hand due to collision with a virtual object. To gather the necessary information to implement MACBETH, I performed user studies to determine users’ detection thresholds for visual/proprioceptive discrepancy in hand position and velocity. I then ran a user study to evaluate MACBETH against two other techniques for managing the hand avatar position: the rubber-band and incremental-motion techniques. Users rated MACBETH as more natural than the other techniques and preferred MACBETH over both. Users performed better on a hand navigation task with MACBETH than with the incremental-motion technique and performed equally well as they did with the rubber-band technique.

C

Calandrino, John (2009)

“On the Design and Implementation of a Cache-Aware Soft Real-Time Scheduler for Multicore Platforms”
Under the direction of James Anderson
Electronic copy available

Real-time systems are those for which timing constraints must be satisfied. In this dissertation, research on multiprocessor real-time systems is extended to support multicore platforms, which contain multiple processing cores on a single chip. Specifically, this dissertation focuses on designing a cache-aware real-time scheduler to improve the performance of shared caches on multicore platforms when timing constraints must be satisfied. This scheduler, implemented in Linux, employs: (1) a scheduling method for real-time workloads that satisfies timing constraints while making scheduling choices that improve cache performance; and (2) a profiler that quantitatively approximates the cache impact of every task during its execution. In experiments, it is shown that the proposed cache-aware scheduler can result in significant performance improvements over other approaches. This is especially true when sufficient hardware support is provided, primarily in the form of cache-related performance monitoring features. It is also shown that scheduler-related overheads are comparable to other scheduling approaches, and therefore overheads would not be expected to offset any cache-related performance gains. Finally, in experiments involving a multimedia server workload, it was found that the use of the proposed cache-aware scheduler allowed the size of the workload to be increased. Prior work in the area of cache-aware scheduling for multicore platforms has not addressed support for real-time workloads, and prior work in the area of real-time scheduling has not addressed shared caches on multicore platforms. For real-time workloads running on multicore platforms, an improvement in shared cache performance can result in a corresponding decrease in execution times, which may allow a larger real-time workload to be supported, or hardware requirements (or costs) to be reduced. As multicore platforms are becoming ubiquitous in many domains, including those in which real-time constraints must be satisfied, the work in this dissertation is necessary to allow real-time workloads to effectively utilize current processor designs. If the chip manufacturing industry continues to adhere to the multicore paradigm (which is likely, given current projections), then this work should remain relevant as processors evolve.

Cannon, Robert L. (1973)

“State Grammar Parsing”
Under the direction of Stephen F. Weiss

A state grammar may be viewed as a context-free grammar with states. The states control the sequence in which the nonterminal symbols in a sentential form become eligible to be rewritten, giving state grammars the power to generate the class of context-sensitive languages. An algebraic parsing technique, developed for context-free grammars, has been extended for parsing context-sensitive languages as generated by state grammars. The set of all possible parses for an input string is found by iteration upon a single algebraic expression. The number of iterations is a multiple of the length of the input string. A second parsing technique uses a recursive transition network as a representational tool for state grammar parsing. For the class of non-left-recursive state grammars the algorithm finds a parse if one exists. A result from the algebraic theory extends this algorithm to apply to a larger class of state grammars.

Cao, Tian (2016)

“Coupled Dictionary Learning for Image Analysis”
Under the direction of Marc Niethammer

Modern imaging technologies provide us with different ways to visualize various objects ranging from molecules in a cell to the tissue of a human body. Images from different imaging modalities reveal distinct information about these objects. Thus a common problem in image analysis is how to relate different information about the objects, for instance, relating protein locations from fluorescence microscopy to the protein structures from electron microscopy. These problems are challenging due to the difficulty of modeling the relationship between the information from different modalities.

This dissertation provides an approach to capture the relationship between the data from two spaces based on coupled dictionary learning. Coupled dictionary learning is a method to learn a coupled basis for the compact representation of data in two spaces.

In this dissertation, a coupled dictionary learning based image analogy method is first introduced to synthesize images based on the images of another modality. As a result, multi-modal image registration (for example, registration between correlative microscopy images) can be simplified to a mono-modal one. Furthermore, a semi-coupled dictionary learning based framework is proposed to estimate deformations from image appearances. Moreover, a coupled dictionary learning method is explored to capture the relationship between GTPase activations and cell protrusions and retractions. Finally, a probabilistic model is proposed for robust coupled dictionary learning to address learning a coupled dictionary with non-corresponding data. This method discriminates between corresponding and non-corresponding data thereby resulting in a “clean” coupled dictionary by removing non-corresponding data during the learning process.
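
A common, simple realization of coupled dictionary learning is to learn a single dictionary over concatenated pairs of samples so that each atom splits into a coupled pair sharing one set of sparse codes; the sketch below uses scikit-learn's DictionaryLearning under that assumption and is not necessarily the algorithm used in the dissertation.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Paired training samples from two spaces (toy data; sizes are made up).
rng = np.random.default_rng(4)
A = rng.random((100, 32))            # 100 patches from modality 1
B = rng.random((100, 32))            # corresponding patches from modality 2
X = np.hstack([A, B])                # couple the two spaces by concatenation

dl = DictionaryLearning(n_components=20, alpha=1.0, max_iter=200, random_state=0)
codes = dl.fit_transform(X)          # sparse codes shared by both spaces
D1 = dl.components_[:, :32]          # coupled dictionary for modality 1
D2 = dl.components_[:, 32:]          # coupled dictionary for modality 2

# Synthesis (image-analogy style): given modality-1 data only, one would infer
# sparse codes against D1 and reconstruct the modality-2 counterpart with D2.
```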

Carlson, Eric D. (1972)

“Techniques for Analysis of Generalized Data Base Management Systems”
Under the direction of Peter Calingaert

Generalized Data Base Management Systems (GDBMS) are software packages designed to facilitate file creation and maintenance, retrieval, and report generation. Two models are proposed for analyzing GDBMS. The first is a formal model of GDBMS components and capabilities. The second is a general model for designing and describing performance analysis experiments. The formal model offers a simple and precise method for describing GDBMS, and for distinguishing them from other systems. Use of the experimental design model for GDBMS performance analyses clarifies the assumptions, formalizes the analysis, and improves the interpretation of results. The two models are shown to be related, and to encompass a variety of analysis techniques. The proposed techniques are compared with alternatives. Examples of the uses of each model are given; the major examples are analyses of the GDBMS ASAP. Of particular interest are the ASAP experiments which indicate the applicability of sample size calculation techniques, of various forms of random sampling, and of nonparametric analyses.

Carter, Jason L. (2014)

“Automatic Difficulty Detection”
Under the direction of Prasun Dewan

Previous work has suggested that the productivity of developers increases when they help each other and as distance increases, help is offered less.  One way to make the amount of help independent of distance is to develop a system that automatically determines and communicates developers’ difficulty. It is our thesis that automatic difficulty detection is possible and useful.

To provide evidence to support this thesis, we developed six novel components: programming-activity difficulty-detection, multimodal difficulty-detection, integrated workspace-difficulty awareness, difficulty-level detection, barrier detection, and reusable difficulty-detection framework.

Programming-activity difficulty-detection mines developers’ interactions. It is based on the insight that when developers are having difficulty, their edit ratio decreases while other ratios, such as the debug and navigation ratios, increase. Lab studies show that this component performs better than baseline measures. A limitation of this component is a high false negative rate.
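
A toy sketch of this ratio-based insight is given below; the event labels, window size, and thresholds are hypothetical and stand in for the component's actual mined features and trained decision procedure.

```python
def difficulty_signal(events, window=50):
    """Illustrative edit/debug/navigation ratios over a sliding window of
    developer interaction events (event names and thresholds are assumptions).
    """
    recent = events[-window:]
    n = max(len(recent), 1)
    edit = recent.count("edit") / n
    debug = recent.count("debug") / n
    navigate = recent.count("navigate") / n
    # Heuristic from the insight above: difficulty is suggested when the edit
    # ratio drops while the debug and navigation ratios rise.
    return edit < 0.3 and (debug + navigate) > 0.5

events = ["edit"] * 10 + ["debug", "navigate"] * 25
print(difficulty_signal(events))   # True for this debugging-heavy window
```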

This limitation is addressed by multimodal difficulty-detection. This component mines both programmers’ interactions and Kinect camera data. It is based on the insight that when developers are having difficulty, both edit ratios and postures can change. This component has a lower false negative rate than the programming-activity difficulty-detection component.

Integrated workspace-difficulty awareness combines continuous knowledge of remote users’ workspace with continuous knowledge of when developers are having difficulty. Two variations of this component are possible based on whether potential helpers can replay developers’ screen recordings.  A lab study shows that a) experts were much faster in solving problems than developers, and b) potential helpers prefer replaying the programming actions of developers who are stuck, but replaying these actions takes them longer to decide if they can offer help.  In a field study of this component, students were successfully offered help in a CS1 class.

One limitation of this component is that potential helpers may spend a large amount of time trying to determine if they can offer help. Difficulty-level and barrier detection address this limitation. The former is based on the insight that when developers are having surmountable difficulties, they tend to perform a cycle of editing and debugging their code; and when they are having insurmountable difficulties, they tend to spend a large amount of time a) between actions and b) outside of the programming environment. Barrier detection infers two kinds of difficulties: incorrect output and design. This component is based on the insight that when developers have incorrect output, their debug ratios increase; and when they have difficulty designing algorithms, they spend a large amount of time outside of the programming environment. Both of these components perform better than baseline measures.

The reusable difficulty-detection framework uses standard design patterns to allow the programming-activity difficulty-detection to be used in two programming environments, Eclipse and Visual Studio. This component requires significantly fewer lines of code than difficulty-detection modules written specifically for each programming environment.

Chadha, Ritu (1991)

“Applications of Unskolemization”
Under the direction of David A. Plaisted
Electronic copy available

The dissertation describes a novel method for deriving logical consequences of first-order formulas using resolution and unskolemization. A complete unskolemization algorithm is given and its properties analyzed. This method is then applied to a number of different fields, namely program verification, machine learning, and mathematical induction. The foremost problem in automating program verification is the difficulty of mechanizing the generation of inductive assertions for loops in a program. We show that this problem can be viewed as one of generating logical consequences of the conditions which are true at the entry to a loop. A complete and sound algorithm for generating loop invariants in first-order logic is described. All previous techniques given in the literature for deriving loop invariants are heuristic in nature and are not complete in any sense. There are a number of problems associated with machine learning, such as the diversity of representation languages used and the complexity of learning. We present a graph-based polynomial-time algorithm for learning from examples which makes use of the method for generating logical consequences. The representation language used is first-order logic, which enables the algorithm to be applied in a large number of fields where first-order logic is the language of choice. The algorithm is shown to compare favorably with others in the literature, and applications of the algorithm in a number of fields are demonstrated. The difficulty of mechanizing mathematical induction in existing theorem provers is due to the inexpressibility of the principle of induction in first-order logic. In order to handle mathematical induction within the framework of first-order logic, it is necessary to find an induction schema for each theorem. We describe two methods for tackling this problem, one of which makes use of our method for generating logical consequences. Most existing methods for mechanizing induction can only handle equational theorems. Our approach is more general and is applicable to equational as well as non-equational theorems.

Chan, Francis H. (1982)

“Evaluating the Perceived Dynamic Range of A Display Device Using Pseudocolor” (Biomedical Engineering and Mathematics, UNC-Chapel Hill)
Under the direction of Stephen M. Pizer

A methodology has been developed to measure the perceived dynamic range (PDR) of any display scale. Such a methodology involves an observer experiment that measures the total number of just noticeable differences (jnd) across the dynamic range of the display/observer combination. We used two display scales, namely, the heated-object spectrum (HCS), a pseudocolor scale, and the black and white gray scale in our experiment. We found the HCS to have 38% more discernible levels than the gray scale in one observer. Even though there was variability among observers, the error within the last two sets of results in each of the three observers was about 2%. A moderate learning effect was also shown in the first two sets of experiments in one observer. Our results were also compared with Frei and Baxter’s perception model. Good correlation was obtained in the model. We believe that the definitions underlying the PDR and the methodology of its measurement will be useful for the purpose of comparing display devices in imaging. The jnd-curve (of a display device) obtained in this method can also provide useful information needed to produce a linear relationship between display-driving intensity and perceived intensity.

Chandak, Anish (2011)

“Efficient Geometric Sound Propagation Using Visibility Culling”
Under the direction of Dinesh Manocha
Electronic copy available

Sound propagation is frequently used in interactive applications such as video games to improve the sense of realism and in engineering applications such as architectural acoustics for better designs. In this thesis, we present efficient geometric sound propagation techniques based on the image-source method for modeling specular reflections and finite-edge diffraction. We use the well-known Biot-Tolstoy-Medwin (BTM) model for edge diffraction. We accelerate the computation of specular reflections by applying novel from-point visibility algorithms, FastV and AD-Frustum, and accelerate finite-edge diffraction modeling by using our novel from-region visibility algorithm. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and can trade off accuracy of results for computational efficiency. FastV and our from-region visibility algorithm are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are 10-20X faster than prior approaches for modeling specular reflections and 2-10X faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. After sound propagation is performed, we generate smooth, artifact-free output audio signals by applying efficient audio-processing algorithms. We also present the first efficient audio-processing algorithm for scenarios with simultaneously moving source and moving receiver (MS-MR) which incurs less than 25% overhead compared to the static source and moving receiver (SS-MR) or moving source and static receiver (MS-SR) scenario.
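
The image-source idea underlying the specular-reflection computation can be illustrated with a first-order example (one reflecting plane, no visibility test); the sketch below is illustrative only, since the dissertation's contribution lies in the visibility algorithms that make such computations tractable for many reflections.

```python
import numpy as np

def image_source(source, plane_point, plane_normal):
    """Reflect a point source across a planar surface (image-source method).

    A first-order specular reflection off the plane is modeled by a virtual
    source mirrored across it; the reflected path length from source to
    listener equals the straight-line distance from the image source.
    """
    s = np.asarray(source, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    d = np.dot(s - np.asarray(plane_point, dtype=float), n)
    return s - 2.0 * d * n

src = np.array([1.0, 2.0, 1.5])
img = image_source(src, plane_point=[0.0, 0.0, 0.0],
                   plane_normal=[0.0, 0.0, 1.0])        # floor at z = 0
listener = np.array([4.0, 0.0, 1.5])
print(img, np.linalg.norm(listener - img))              # image source, reflected path length
```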

Chang, Chun-Fa (2001)

“LDI Tree: A Sampling Rate Preserving and Hierarchical Data Representation for Image-Based Rendering”
Under the direction of Gary Bishop
Department of Computer Science Technical Report TR01-032
Electronic copy available

Multiple reference images are required to eliminate disocclusion artifacts in image-based rendering based on McMillan’s 3D image warping. Simply warping each individual reference image causes the rendering time to grow linearly with the number of reference images. This calls for a new data representation for merging multiple reference images. In this dissertation I present a hierarchical data representation, the LDI Tree, for merging reference images. It is distinguished from previous works by identifying the sampling rate issue and preserving the sampling rates of the reference images by adaptively selecting a suitable level in the hierarchy for each pixel. During rendering, the traversal of the LDI Tree is limited to the levels that are comparable to the sampling rate of the output image. This allows the rendering time to be driven by the requirements of output images instead of the number of input images. I also present a progressive refinement feature and a hole-filling algorithm implemented by pre-filtering the LDI Tree. I show that the amount of memory required has the same order of growth as the 2D reference images in the worst case. I also show that the rendering time depends mostly on the quality of the output image, while the number of reference images has a relatively small impact.

Chen, David T. (1998)

“Volume Rendering Guided by Multiscale Medial Models”
Under the direction of Stephen M. Pizer

Volume rendering is the generation of images from discrete samples of volume data. Multiscale medial models describe object shape by linking pairs of fuzzy boundary points at object middles. This dissertation presents a volume rendering method that uses object shape information provided by a multiscale medial model to guide a splatting volume renderer. This work shows how a user can select regions of interest based on objects in order to omit occluding objects, and how object shape can be used to improve weak and noisy boundaries. The amount of computation required is proportional to the number of screen pixel intersections with the medial model derived surface. In the method the renderer finds a most likely boundary point using a Bayesian approach based on the volume directional derivative and a boundary point specified by the medial model.

Chen, Wei Chao (2002)

“Light Field Mapping: Efficient Representation of Surface Light Fields”
Under the direction of Henry Fuchs
Electronic copy available

Recent developments in image-based modeling and rendering provide significant advantages over the traditional image synthesis process, including improved realism, simple representation, and automatic content creation. Representations such as Plenoptic Modeling, the Light Field, and the Lumigraph are well suited for storing view-dependent radiance information for static scenes and objects. Unfortunately, these representations have much higher storage requirements than traditional approaches, and the acquisition process demands very dense sampling of radiance data. With the assistance of geometric information, the sampling density of image-based representations can be greatly reduced, and the radiance data can potentially be represented more compactly. One such parameterization, called the Surface Light Field, offers a natural and intuitive description of the complex radiance data. However, issues including encoding and rendering efficiency present significant challenges to its practical application. In this dissertation, I present a method for efficient representation and interactive visualization of surface light fields. I propose to partition the radiance data over elementary surface primitives and to approximate each partitioned data set by a small set of lower-dimensional discrete functions. By utilizing graphics hardware features, the proposed rendering algorithm decodes directly from this compact representation at interactive frame rates on a personal computer. Since the approximations are represented as texture maps, I refer to the proposed method as Light Field Mapping. The approximations can be further compressed using standard image compression techniques, leading to extremely compact data sets that are up to four orders of magnitude smaller than the uncompressed light field data. I demonstrate the proposed representation through a variety of non-trivial physical and synthetic scenes and objects scanned through acquisition systems designed for capturing both small and large-scale scenes.
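
The flavor of approximating partitioned radiance data by a few lower-dimensional functions can be illustrated with a generic low-rank factorization; the sketch below (hypothetical, not the dissertation's exact decomposition) uses a truncated SVD to split a (surface sample x view direction) radiance matrix into pairs of maps that could be stored as textures.

```python
# Generic low-rank sketch: factor radiance samples into surface/view map pairs.
import numpy as np

def factor_radiance(R, k):
    """Approximate R (n_surface x n_view) by k pairs of 1-D maps."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    surface_maps = U[:, :k] * s[:k]      # one "surface map" per approximation term
    view_maps = Vt[:k, :]                # one "view map" per approximation term
    return surface_maps, view_maps       # R is approximated by surface_maps @ view_maps

R = np.random.rand(256, 64)              # toy radiance data for one surface patch
S, V = factor_radiance(R, k=3)
print(np.linalg.norm(R - S @ V) / np.linalg.norm(R))   # relative approximation error
```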

Chou, Chen-Rui (2013)

“Regression Learning for 2D/3D Image Registration”
Under the direction of Stephen M. Pizer
Electronic copy available

Image registration is a common technique in medical image analysis. The goal of image registration is to discover the underlying geometric transformation of target objects or regions appearing in two images. This dissertation investigates image registration methods for lung Image-Guided Radiation Therapy (IGRT). The goal of lung IGRT is to lay the radiation beam on the ever-changing tumor centroid but avoid organs at risk under the patient’s continuous respiratory motion during the therapeutic procedure. To achieve this goal, I developed regression learning methods that compute the patient’s 3D deformation between a treatment-time acquired x-ray image and a treatment-planning CT image (2D/3D image registration) in real time. The real-time computation involves learning x-ray to 3D deformation regressions from a simulated patient-specific training set that captures credible deformation variations obtained from the patient’s Respiratory-Correlated CT (RCCT) images. At treatment time, the learned regressions can be applied efficiently to the acquired x-ray image to yield an estimate of the patient’s 3D deformation. In this dissertation, three regression learning methods – linear, non-linear, and locally linear – are presented to approach this 2D/3D image registration problem.
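
As a rough illustration of the linear variant of this idea, the sketch below (hypothetical shapes, features, and regularization; not the dissertation's formulation) fits a ridge regression from x-ray image features to deformation parameters on a simulated training set and applies it to a new image.

```python
# Hypothetical sketch: ridge regression from image features to deformation parameters.
import numpy as np

def fit_ridge(X, Y, lam=1e-2):
    """X: n_train x d image features; Y: n_train x p deformation parameters."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)   # d x p weight matrix

def predict(W, x):
    return x @ W                          # estimated deformation parameters

X = np.random.rand(200, 50)               # simulated training features
Y = np.random.rand(200, 6)                # corresponding deformation coefficients
W = fit_ridge(X, Y)
print(predict(W, X[:1]).shape)            # (1, 6)
```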

Christiansen, Mikkel (2002)

“The Performance of HTTP Traffic Under Random Early Detection Queue Management”
Under the direction of Kevin Jeffay and Don Smith
Electronic copy available

In RFC (Request For Comments) 2309 [9], the active queue management mechanism RED has been proposed for widespread deployment on Internet routers. This thesis presents an empirical study of RED in which we focus on the question: How will RED affect HTTP response times, and can RED be tuned to optimize these? The empirical study is conducted on a laboratory network in which we model a typical scenario in which a router operates and becomes a bottleneck. A realistic HTTP traffic load is provided by a set of traffic generator programs. These simulate the behavior of browsing users using an empirical model of HTTP traffic that is both well-founded and widely accepted and used. Response time performance is measured for each request made in the simulation of browsing users, thus providing detailed insight into the performance experienced by the end-user. The study consists of finding the optimal RED parameters under different offered loads, where the offered load describes the average bandwidth utilization produced by the traffic generators on a network with no bandwidth constraints. To determine the impact of using RED, we compare the response time performance obtained with our choice of optimal RED parameters against the optimal performance of tail-drop queuing and against the performance of the laboratory network without bandwidth constraints. The results of the study can be summarized as follows:

  • Contrary to expectations, compared to tail-drop queuing, RED has minimal effect on HTTP response times for offered loads up to 90% of the link capacity.  Response times at loads in this range are not substantially affected by RED parameters.
  • Between 90% and 100% load, RED can be carefully tuned to yield performance somewhat superior to tail-drop queuing, however, response times are quite sensitive to the actual RED parameter values selected.
  • In heavily congested networks, RED parameters that provide the best link utilization produce poorer response times.

In total, we conclude that for links carrying Web traffic, RED queue management appears to provide no clear advantage over tail-drop for end-to-end response times.
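
For readers unfamiliar with RED, the mechanism being tuned can be summarized in a short sketch (the textbook formulation, without the count-based drop spreading of full RED, and not the router implementation used in the testbed): an exponentially weighted moving average of the queue length drives a drop probability that rises linearly between two thresholds.

```python
# Bare-bones RED sketch: EWMA queue average and a linear early-drop probability.
import random

class RedQueue:
    def __init__(self, min_th, max_th, max_p, w_q=0.002):
        self.min_th, self.max_th, self.max_p, self.w_q = min_th, max_th, max_p, w_q
        self.avg = 0.0

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped early."""
        self.avg = (1 - self.w_q) * self.avg + self.w_q * queue_len
        if self.avg < self.min_th:
            return False                                    # below min_th: never drop
        if self.avg >= self.max_th:
            return True                                     # above max_th: always drop
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p                          # probabilistic early drop

q = RedQueue(min_th=30, max_th=90, max_p=0.1)
print(sum(q.on_arrival(queue_len=80) for _ in range(10000)))  # rough count of early drops
```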

Chu, Heng (1994)

“Semantically Guided First-Order Theorem Proving with Hyper-Linking”
Under the direction of David A. Plaisted
Department of Computer Science Technical Report TR94-051
Electronic copy available

An automated theorem prover accepts a set of axioms and a theorem, expressed in a formal logic, and then proves that the theorem logically follows from the axioms. Two basic aspects of a logic are syntax and semantics. Syntax refers to the form of the statements and semantics refers to their meanings. Most current theorem provers conduct proof search based on syntax only; semantics is rarely used. In this dissertation we propose a new and complete theorem proving method, called semantic hyper-linking, which makes substantial use of semantics in an instance-based theorem prover for first-order logic. Semantics is used to generate instances by a combination of unification and enumeration of the Herbrand base. In this method, proof search is essentially a process of exhaustively searching for a model of the input problem. Semantic hyper-linking performs especially well on non-Horn problems whose proofs contain small instances of the input clauses. We also describe several refinements, including (1) various model finding strategies to improve model searching without using expensive semantic checks or expanding the search space, (2) the rough resolution method, which cooperates with semantic hyper-linking and solves the part of the proof that contains large instances, and (3) the well-known UR resolution to solve the Horn part of the proof. We have implemented semantic hyper-linking with its refinements, and applied it to over a hundred problems in various domains. Our results show that semantic hyper-linking is indeed a promising technique. Several hard theorems, which many other theorem provers find difficult, have been proved by semantic hyper-linking using no other guidance than the given semantics.

Chung, Goopeel (2002)

“Log-based Collaborative Infrastructure”
Under the direction of Prasun Dewan

Current application-sharing systems support a single architectural mapping for all collaborations, though different systems support different architectural mappings. My thesis has identified a new collaborative infrastructure that supports a wide range of architectural mappings and dynamic mapping changes. The supported mappings include all of the architectural mappings defined so far, including the centralized, replicated, and hybrid mappings. The thesis shows that a logging approach is sufficient for implementing the supported mappings and dynamic mapping changes. The logging approach records messages sent to one incarnation of an object and replays the recorded messages to a different incarnation of the object. It uses a new form of logging, called object-based logging, which models the state of the program and user interface as attributed objects connected by dependencies, and offers the generality of traditional command-based logging and the efficiency of state copying. Instead of being bound to a specific I/O protocol such as the X protocol, it is based on a simple abstract I/O protocol to which specific I/O protocols must be mapped by client-supplied code. The thesis shows that the new collaborative infrastructure is composable with client systems representing various categories of software systems. It also shows, through experiment and a theoretical model, that the choice of architectural mapping depends on the computers used by the collaborators, the speed of the connections between the computers, the amount of work necessary to process user input, and user composition changes. It thus contradicts the popular belief that the replicated mapping always gives better performance, and shows the need for supporting various mapping choices in a system.

Chung, James Che-Ming (1993)

“Intuitive Navigation in the Targeting of Radiation Therapy Treatment Beams”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR94-025
Electronic copy available

This research focused on the possible benefits to be gained from using the intuitive navigation made possible by head-mounted displays (HMDs) in the targeting of radiation therapy treatment beams. A tumor surrounded by various types of healthy tissue can present a very complex 3-D situation which must be thoroughly understood for effective treatment to be possible. Conventional 2-D treatment planning suffers from reliance on 2-D diagnostic imaging and dose computations to represent an inherently 3-D problem. Current state-of-the-art treatment planning systems use 3-D models of the patient and compute 3-D dose distributions, but complete exploration of the solution space of possible beam orientations can be hindered by not-so-intuitive navigation controls. The thesis of this dissertation was that a head-mounted display would permit freer exploration of the patient anatomy and the range of possible beam orientations, thereby resulting in more efficient treatment planning. To that end, I developed a new, intuitive navigation mode, which used the orientation of a HMD to determine the direction from which its wearer viewed a 3-D model of the patient’s anatomy. When the user looked down, he viewed the anatomy from above. Looking up yielded a view from below, and turning his head horizontally in a circle completely scanned around the model. Although it did not have a real-world metaphor, this mode (dubbed Orbital mode) proved to be surprisingly easy to use and well-suited to the task of targeting treatment beams. I compared Orbital mode with more conventional joystick-based rotation in a user study in which radiation oncology professionals designed treatment beam configurations for lung tumor cases. Slightly faster performance was found with Orbital mode, although there appeared to be no significant difference in the quality of the beam configurations. Movement in Orbital mode was more omnidirectional than with the joystick, most likely due to the mechanical construction of the joystick, which preferentially encouraged left-right and forward-back deflection. The overall conclusion from this study was that HMDs in their present state are not appropriate for clinical use, but with future developments in HMD technology, the benefits of intuitive navigation may appear in mainstream radiation treatment planning.
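
The geometry behind Orbital mode can be sketched as follows (hypothetical math for illustration, not the study's code): the HMD's yaw and pitch define a gaze direction, and the virtual camera is placed on a sphere around the model on the opposite side of that direction, always looking at the model center, so that looking down shows the model from above.

```python
# Hypothetical Orbital-mode camera placement from head yaw/pitch (radians).
import numpy as np

def orbital_camera(yaw, pitch, center, radius):
    """Return (eye, look_at): camera orbits the model opposite the gaze direction."""
    gaze = np.array([np.cos(pitch) * np.sin(yaw),    # direction the user is facing
                     np.sin(pitch),
                     np.cos(pitch) * np.cos(yaw)])
    eye = center - radius * gaze                     # e.g., looking down puts the eye above
    return eye, center                               # camera always looks at the model

eye, target = orbital_camera(yaw=0.3, pitch=-0.8, center=np.zeros(3), radius=2.0)
print(eye, target)
```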

Clary, Greg (2003)

“Image Sequence Classification Via Anchor Primitives”
Under the direction of Stephen M. Pizer
Electronic copy available

I define a novel class of medial primitives called anchor primitives to provide a stable framework for feature definition for statistical classification of image sequences of closed objects in motion. Attributes of anchor primitives evolving over time are used as inputs into statistical classifiers to classify object motion. An anchor primitive model includes a center point location, landmark locations exhibiting multiple symmetries, sub-models of landmarks, parameterized curvilinear sections, and relationships among all of these. Anchor primitives are placed using image measurements in various parts of an image and using prior knowledge of the expected geometric relationships between anchor primitive locations in time-adjacent images. Hidden Markov models of time sequences of anchor primitive locations, scales, and nearby intensities, and changes in those values, are used for the classification of object shape change across a sequence. The classification method is shown to be effective for automatic left ventricular wall motion classification and computer lipreading. Computer lipreading experiments were performed on a published database of video sequences of subjects speaking isolated digits. Classification results comparable to those found in the literature were achieved, using an anchor-primitive-based feature set that was arguably more concise and intuitive than those of the literature. Automatic left ventricular wall motion classification experiments were performed on gated blood pool scintigraphic image sequences. Classification results arguably comparable to human performance on the same task were achieved, using a concise and intuitive anchor-primitive-based feature set. For both driving problems, model parameters were tuned and features were selected in order to minimize the classification error rate using leave-one-out procedures.

Clipp, Brian Sanderson (2010)

“Multi-Camera Simultaneous Localization and Mapping”
Under the direction of Marc Pollefeys and Jan-Michael Frahm
Electronic copy available

In this thesis, we study two aspects of simultaneous localization and mapping (SLAM) for multi-camera systems: minimal solution methods for the scaled motion of non-overlapping and partially overlapping two-camera systems, and enabling online, real-time mapping of large areas using the parallelism inherent in the visual simultaneous localization and mapping (VSLAM) problem. We present the only existing minimal solution method for six degree of freedom structure and motion estimation using a non-overlapping, rigid two-camera system with known intrinsic and extrinsic calibration. One example application of our method is the three-dimensional reconstruction of urban scenes from video. Because our method does not require the cameras’ fields-of-view to overlap, we are able to maximize coverage of the scene and avoid processing redundant, overlapping imagery. Additionally, we developed a minimal solution method for partially overlapping stereo camera systems to overcome degeneracies inherent to non-overlapping two-camera systems while still retaining a wide total field of view. The method takes two stereo images as its input. It uses one feature visible in all four views and three features visible across two temporal view pairs to constrain the camera system’s motion. We show in synthetic experiments that our method creates rotation and translation estimates that are more accurate than the perspective three-point method as the overlap in the stereo camera’s fields-of-view is reduced. A final part of this thesis is the development of an online, real-time visual SLAM system that achieves real-time speed by exploiting the parallelism inherent in the VSLAM problem. We show that feature tracking, relative pose estimation, and global mapping operations such as loop detection and loop correction can be effectively parallelized. Additionally, we demonstrate that a combination of short-baseline, differentially tracked corner features, which can be tracked at high frame rates, and wide-baseline-matchable but slower-to-compute features, such as the scale-invariant feature transform (SIFT) (Lowe, 2004), can facilitate high speed visual odometry and at the same time support location recognition for loop detection and global geometric error correction.

Cochran, Robby (2015)

“Symbolic Verification of Remote Client Behavior in Distributed Systems”
Under the direction of Mike Reiter

A malicious client in a distributed system can undermine the integrity of the larger distributed application in a number of different ways. For example, a server with a vulnerability may be compromised directly by a modified client. If a client is authoritative for state in the larger distributed application, a malicious client may transmit an altered version of this state throughout the distributed application. A player in a networked game might cheat by modifying the client executable, or the user of a network service might craft a sequence of messages that exploits a vulnerability in a server application. We present symbolic client verification, a technique for detecting whether network traffic from a remote client could have been generated by sanctioned software. Our method is based on symbolic execution and uses the client source code as a model for expected behavior. By identifying possible execution paths a remote client may have followed to generate a particular sequence of network traffic, we enable a precise verification technique that requires little to no modification to the client application and is server agnostic; the only required inputs to the algorithm are the observed network traffic and the client source code. We evaluate our technique in the context of online games, namely XPilot and Tetrinet, and we show that a parallel implementation of our technique can keep pace with our case study applications. These results open the door to using symbolic client verification on the critical path of serving client requests and could provide a new method for preventing malicious clients from disrupting gameplay or attacking server infrastructure.

Cohen, Jonathan D. (1998)

“Appearance-Preserving Simplification of Polygonal Models”
Under the direction of Dinesh Manocha
Electronic copy available

Over the last six years, the automatic simplification of polygonal models has become an important tool for achieving interactive frame rates in the visualization of complex virtual environments. Initial methods employed clever heuristics, but no real quality metrics; then measures of quality for these simplified output models began to develop. This dissertation focuses on the use of error metrics to provide guaranteed error bounds for the simplifications we create. We guarantee bounds on the total appearance of our simplifications with respect to the three major appearance attributes: surface position, surface curvature (normal), and material color. We present the simplification envelopes and successive mappings algorithms for geometric simplification, two unique approaches that bound the maximum surface-to-surface deviation between the input surface and the simplified output surfaces. We also present the first bound on maximum texture deviation. Using these tools, we develop the first appearance-preserving simplification algorithm. The geometric simplification provides the filtering of the surface position attribute, bounding the error by means of the surface deviation bound. The color and curvature information are stored in texture and normal map structures, which are filtered at run time on a per-pixel basis. The geometry and maps are tied together by the texture deviation metric, guaranteeing not only that the attributes are sampled properly, but that they cover the correct pixels on the screen, all to within a user- or application-supplied tolerance in pixels of deviation. We have tested these algorithms on polygonal environments composed of thousands of objects and up to a few million polygons, including the auxiliary machine room of a notional submarine model, a lion sculpture from the Yuan Ming garden model, a Ford Bronco model, a detailed “armadillo” model, and more. The algorithms have proven to be efficient and effective. We have seen improvements of up to an order of magnitude in the frame rate of interactive graphics applications, with little or no degradation in image quality.

Coombe, Gregory (2007)

“Practical Surface Light Fields”
Under the direction of Anselmo Lastra
Electronic copy available

The rendering of photorealistic surface appearance is one of the main challenges facing modern computer graphics. Image-based approaches have become increasingly important because they can capture the appearance of a wide variety of physical surfaces with complex reflectance behavior. In this dissertation, I focus on Surface Light Fields, an image-based representation of view-dependent and spatially-varying appearance. Constructing a Surface Light Field can be a time-consuming and tedious process. The data sizes are quite large, often requiring multiple gigabytes to represent complex reflectance properties. The result can only be viewed after a lengthy post-process is complete, so it can be difficult to determine when the light field is sufficiently sampled. Often, uncertainty about the sampling density leads users to capture many more images than necessary in order to guarantee adequate coverage. To address these problems, I present several approaches to simplify the capture of Surface Light Fields. The first is a “human-in-the-loop” interactive feedback system based on the Online SVD. As each image is captured, it is incorporated into the representation in a streaming fashion and displayed to the user. In this way, the user receives direct feedback about the capture process, and can use this feedback to improve the sampling. To avoid the problems of discretization and resampling, I used Incremental Weighted Least Squares, a subset of Radial Basis Function methods that allows for incremental local construction and fast rendering on graphics hardware. Lastly, I address the limitation of fixed lighting by describing a system that captures the Surface Light Field of an object under synthetic lighting.
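
The incremental, locally weighted fitting flavor described above can be sketched generically; the fragment below accumulates weighted normal equations so a local linear appearance model can be refined as samples stream in (a generic weighted-least-squares sketch with made-up weights and dimensions, not the dissertation's exact IWLS formulation).

```python
# Generic incremental weighted least squares: accumulate sufficient statistics.
import numpy as np

class IncrementalWLS:
    def __init__(self, dim):
        self.A = np.zeros((dim, dim))    # accumulated X^T W X
        self.b = np.zeros(dim)           # accumulated X^T W y

    def add_sample(self, x, y, center, sigma=0.5):
        w = np.exp(-np.sum((x - center) ** 2) / (2 * sigma ** 2))   # radial weight
        self.A += w * np.outer(x, x)
        self.b += w * x * y

    def solve(self):
        return np.linalg.solve(self.A + 1e-9 * np.eye(len(self.b)), self.b)

model = IncrementalWLS(dim=3)
for _ in range(100):                     # stream in samples one at a time
    x = np.random.rand(3)
    model.add_sample(x, y=x @ np.array([1.0, 2.0, 3.0]), center=np.full(3, 0.5))
print(model.solve())                     # approximately [1, 2, 3]
```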

Couture, Heather (2018)

“Discriminative Representations for Heterogeneous Images and Multimodal Data”
Under the direction of Marc Niethammer

Histology images of tumor tissue provide pathologists with a means of diagnosing cancer and assessing prognosis.  More recently, molecular methods have become important for assigning clinically-relevant subtypes to guide treatment decisions.  However, these methods of subtyping are not routinely performed on all patients who could benefit. A lower cost and repeatable method to predict the subtypes from histology could help to personalize treatment decisions for a wider range of cancer patients.  Further, heterogeneity within a single tumor sample plays a role that has yet to be fully understood. While genomic data captures the state of a small sample of tissue, histological image analysis can provide a view of spatial heterogeneity by identifying multiple subtypes in a single tumor.  Quantifying tumor heterogeneity may lead to future insights into tumor progression. By combining both data types – imaging and genomics – a more complete view of the tumor is possible, enabling more accurate models for subtyping.

I first tackle the challenge of representing histology images by learning appropriate features directly from the images.  I compare hand-crafted features and deep transfer learning with dictionary learning, extending dictionary learning to a task-driven model.  Through multiple instance learning, I further improve prediction of tumor properties by accounting for heterogeneity during training. I develop both Support Vector Machine-based methods and end-to-end training of a Convolutional Neural Network to successfully capture intra-tumor variations in subtype.  Finally, I integrate image and genomic features in a multimodal model that learns a projection to a shared space that is also discriminative. I propose a set of task-driven Deep Canonical Correlation Analysis methods that can be used for cross-modal classification. These methods can also be used as regularization, improving predictions from images by taking advantage of genomic data available during training, even if it is not available at test time.
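
For reference, the classical canonical correlation objective that Deep CCA and these task-driven variants build on (standard formulation, not notation taken from the dissertation) seeks projections w_x and w_y of the image features X and genomic features Y that maximize the correlation of the projected data:

    \max_{w_x, w_y} \frac{w_x^\top \Sigma_{xy} w_y}{\sqrt{w_x^\top \Sigma_{xx} w_x}\,\sqrt{w_y^\top \Sigma_{yy} w_y}},

where \Sigma_{xx} and \Sigma_{yy} are the within-modality covariances and \Sigma_{xy} is the cross-covariance; Deep CCA replaces the linear projections with learned nonlinear mappings.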

Cromartie, Robert C. (1995)

“Structure-Sensitive Contrast Enhancement: Development and Evaluation”
Under the direction of Stephen M. Pizer

Display of a digital image is the process by which an array of recorded intensities is presented to a human observer as a light image. A fundamental step in this process is the rule by which the recorded intensities of the original image are mapped to the display-driving intensities of the display device. When performed explicitly, this step is called contrast enhancement. The goal of a contrast enhancement mapping is to help the observer better use the information contained in the intensity variations of the image in the performance of some well-specified task. This dissertation addresses both the problem of designing effective contrast enhancement mappings and the problem of evaluating the resulting techniques for the purpose of choosing between competing methods and for parameter selection for a given method. First, I describe the development of two new structure-sensitive contrast enhancement techniques, both of which are based on edge-affected diffusion. One, sharpened adaptive histogram equalization (SHAHE), uses edge-affected unsharp masking as a pre-process for contrast-limited adaptive histogram equalization (CLAHE). SHAHE has been shown by observer study to offer a significant advantage when used to display radiation therapy portal images. I then present an experimental framework for using a model observer to evaluate competing display techniques and to find the best parameters for particular methods. My approach uses computer-generated images, Monte Carlo simulation of image noise (both computer-generated random noise and structured noise from real images), and a carefully specified task. Using this framework, I conducted a two-stage experiment to select the best parameters for SHAHE for a distance estimation task on radiation oncology portal images. For these experiments, the model was based on Object Representation by Cores, an emerging theory of human object perception currently under development at the University of North Carolina. The experiments demonstrated the feasibility of the framework, and suggested that variations of SHAHE parameters within the subjectively plausible range could make the model’s performance vary from slightly better than no enhancement to considerably worse.

Csapo, Istvan (2018)

“Registration and Analysis of Developmental Image Sequences”
Under the direction of Marc Niethammer

Mapping images into the same anatomical coordinate system via image registration is a fundamental step when studying physiological processes, such as brain development. Standard registration methods are applicable when biological structures are mapped to the same anatomy and their appearance remains constant across the images or changes spatially uniformly. However, image sequences of animal or human development often do not obey these assumptions, and thus standard registration methods are unsuited for their analysis. In response, this dissertation tackles the problems of i) registering developmental image sequences with spatially non-uniform appearance change and ii) reconstructing a coherent 3D volume from serially sectioned images with non-matching anatomies between the sections. There are three major contributions presented in this dissertation. First, I develop a similarity metric that incorporates a time-dependent appearance model into the registration framework. The proposed metric allows for longitudinal image registration in the presence of spatially non-uniform appearance change over time, a common medical imaging problem for longitudinal magnetic resonance images of the neonatal brain. Next, a method is introduced for registering longitudinal developmental data sets with missing time-points using an appearance atlas built from a population. The proposed method is applied to a longitudinal study of young macaque monkeys with incomplete image sequences. The final contribution is a template-free registration method to reconstruct images of serially sectioned biological samples into a coherent 3D volume. The method is applied to confocal fluorescence microscopy images of serially sectioned embryonic mouse brains.

Culver, Timothy (2000)

“Computing the Medial Axis of a Polyhedron Reliably and Efficiently”
Under the direction of Dinesh Manocha

The medial axis transform is a fundamental shape operation with applications in many fields. In solid modeling, the MAT has proven a useful tool for finite element meshing, model simplification, and feature recognition. The MAT is also a complete shape representation that could be used in place of a boundary representation. Yet the MAT is not widely used because computing it is difficult both in theory and in practice. For a three-dimensional polyhedral solid, the medial axis consists of quadric surfaces and degree-four algebraic space curves. Computing with high-degree curves and surfaces requires high numerical precision. Most previous methods attempt to avoid such computation by discretizing, or otherwise approximating, the medial axis. The few existing continuous methods are based exclusively on floating-point arithmetic, which leads to reliability problems. I present a new reliable, continuous algorithm for accurately computing the medial axis of a polyhedron. It is the only continuous medial axis algorithm that is insensitive to roundoff error. Further, my algorithm handles the most common forms of degeneracy. The algorithm is also efficient in a practical sense. The foundation of my approach is exact computation. My MAT representation uses arbitrary-precision rational numbers to describe the medial geometry. My algorithm is based on a point-ordering predicate that is always evaluated correctly. By its nature, exact computation requires high-precision arithmetic, which is significantly more expensive than hardware-supported floating-point arithmetic. However, my approach avoids the extra expense where feasible, using techniques such as floating-point filters and lazy evaluation. The result is an algorithm whose running time approaches that of floating-point methods when high precision is not required. I demonstrate this assertion by applying my implementation to several complex polyhedral solids.
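
The filtering idea mentioned above (cheap floating-point evaluation backed by an exact fallback) is standard and easy to illustrate; the Python sketch below applies it to a simple sign-of-determinant predicate with an ad hoc error bound, and is not the dissertation's point-ordering predicate.

```python
# Illustrative filtered exact predicate: sign of a 2x2 determinant.
from fractions import Fraction

def det2_sign(a, b, c, d):
    approx = a * d - b * c
    # crude forward error bound for the floating-point evaluation
    err = 4.0 * 2.0 ** -52 * (abs(a * d) + abs(b * c))
    if abs(approx) > err:                          # filter is conclusive
        return 1 if approx > 0 else -1
    exact = Fraction(a) * Fraction(d) - Fraction(b) * Fraction(c)
    return (exact > 0) - (exact < 0)               # exact rational fallback

print(det2_sign(3.0, 2.0, 1.0, 4.0))                    # 1, decided by the fast filter
print(det2_sign(1.0 + 2**-52, 1.0, 1.0, 1.0 - 2**-52))  # -1, decided by the exact path
```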

Curtis, D. Sean (2014)

“Pedestrian Velocity Obstacles:  Pedestrian Simulation Through Reasoning in Velocity Space”
Under the direction of Dinesh Manocha

We live in a populous world. Furthermore, as social animals, we participate in activities which draw us together into shared spaces – office buildings, city sidewalks, parks, religious, sporting and political events, etc. Models that can predict how crowds of humans behave in such settings would be valuable in allowing us to analyze the designs for novel environments and anticipate issues with space utility and safety. They would also better enable robots to safely work in a common environment with humans. Furthermore, correct simulation of crowds of humans would allow us to populate virtual worlds, helping to increase the immersive properties of virtual reality or entertainment applications.

We propose a new model for pedestrian crowd simulation: Pedestrian Velocity Obstacles (PedVO). PedVO is based on Optimal Reciprocal Collision Avoidance (ORCA), a local navigation algorithm which performs optimization in velocity space to compute efficient collision avoidance behaviors for large numbers of agents. The PedVO model is an extension of ORCA which better reproduces observed pedestrian behavior. PedVO combines new models of pedestrian behavior with a modified geometric optimization planning technique to simulate agents with improved human-like behaviors interactively.

PedVO introduces asymmetric relationships between agents through two complementary techniques: Composite Agents and Right of Way. The former exploits the underlying collision avoidance mechanism to encode abstract factors and the latter modifies the optimization algorithm’s constraint definition to enforce asymmetric coordination. PedVO further changes the optimization algorithm to more fully encode the agent’s knowledge of its environment, allowing the agent to make more intelligent decisions, leading to a better utilization of space and improved flow. PedVO incorporates a new model, which works in conjunction with the local planning algorithm, to introduce a ubiquitous density-sensitive behavior observed in human crowds – the so-called “fundamental diagram”. We also provide a physically-plausible, interactive model for simulating walking motion to support the computed agent trajectories. We evaluate these techniques by reproducing various scenarios, such as pedestrian experiments and a challenging real-world scenario: simulating the performance of the Tawaf, an aspect of the Muslim pillar of the Hajj.

D

Danforth, Scott H. (1983)

“DOT: A Distributed Operating System Model of a Tree-Structured Multiprocessor”
Under the direction of Gyula A. Mago
Department of Computer Science Technical Report TR83-007
Electronic copy available

This dissertation presents DOT, a process-oriented design and simulation model for a highly parallel multiprocessor, and describes a complete associated programming system. The design methodology includes the use of layered design, abstract data types, and a process-oriented view of concurrency. Our results demonstrate that these software engineering structuring principles can be successfully applied to the design of highly parallel multiprocessors. DOT is represented using an executable high-level language that provides support for discrete-event simulation. This allows verification and accurate simulation of the complete programming system, which is composed of three logical levels. The top, or user level of the programming system is that of FFP (Formal Functional Programming) languages. The middle, or system support level is that of LPL, a low-level concurrent programming language used to define and implement FFP operators on the DOT architecture. The DOT design represents the lowest level of the programming system, a highly parallel tree-structured multiprocessor that directly supports the LPL and FFP languages. During execution, user programs consisting of FFP language symbols are entered into a linear array of processing cells (the leaves of the binary tree of processors represented in the DOT design), and segments of this array that contain innermost FFP applications execute LPL programs in order to perform the required reductions. The LPL programs for a useful set of FFP primitives are given. In addition to DOT and the overall programming system, this dissertation presents an analytic model which may be used to derive upper and lower bounds for program execution time. Predictions of the analytic model are compared with simulation results, and various design alternatives and possible extensions are examined.

Davis, Bradley Charles (2008)

“Medical Image Analysis via Frechet Means of Diffeomorphisms”
Under the direction of Sarang C. Joshi
Electronic copy available

The constructions of average models of anatomy, as well as regression analysis of anatomical structures, are key issues in medical research. For example, these methods are applied to investigate clinical hypotheses related to the size and shape of anatomical structures indicated in brain development, or to analyze the onset and progression of disease. When the underlying anatomical process can be modeled by parameters in a Euclidean space, classical statistical techniques are applicable. However, recent work suggests that attempts to describe anatomical shapes using flat Euclidean spaces undermines our ability to represent natural biological variability, and methods for nonlinear statistical analysis of anatomical data are emerging. This dissertation uses a nonlinear deformable image model to measure anatomical change and define geometry-based averaging and regression for anatomical structures represented within medical images. Geometric differences are modeled by coordinate transformations, i.e., deformations, of underlying image coordinates. In order to represent local geometric changes and accommodate large deformations, these transformations are taken to be the group of diffeomorphisms with an associated metric. A mean anatomical image is defined using this deformation-based metric via the Frechet mean—the minimizer of the sum of squared distances. Similarly, a new technique called manifold kernel regression is presented for estimating systematic changes—as a function of a predictor variable—from data in nonlinear spaces. It is defined by recasting kernel regression in terms of a kernel-weighted Frechet mean. This method is applied to determine systematic geometric changes in the brain from a random design dataset of medical images. Finally, diffeomorphic image mapping is extended to accommodate topological changes in the form of nuisance structures—objects that are present in one image and absent in another—by deflating them prior to the estimation of geometric change. The method is applied to quantify the motion of the prostate in the presence of transient bowel gas.
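
In standard notation (included only for reference, not reproduced from the dissertation), the Frechet mean of images I_1, ..., I_N under the deformation-based metric d, and the kernel-weighted form underlying manifold kernel regression at a predictor value t, can be written as

    \hat{\mu} = \arg\min_{I} \sum_{i=1}^{N} d(I, I_i)^2,
    \qquad
    \hat{m}(t) = \arg\min_{I} \sum_{i=1}^{N} K_h(t - t_i)\, d(I, I_i)^2,

where K_h is a kernel with bandwidth h and t_i is the predictor value (e.g., age) associated with image I_i.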

Davis, Mark C. (1990)

“A Computer for Low Context-Switch Time”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR90-012
Electronic copy available

A context switch is the suspension of one running process and the activation of another in a multitasking environment. Many applications, such as process control, require frequent context switches among many processes. A context switch requires a substantial amount of time: about 1000 microseconds on a VAX 11/780 and about 500 microseconds on a Sun 4/280. Recently introduced computer architectures, such as the SUN 4, have not improved context-switch performance as much as they have improved throughput. A computer architecture with an appropriate memory hierarchy can give better support to context switching. The Computer for Low Context-Switch Time (CLOCS) is a computer with such an architecture. Because the architecture has minimum state inside the Central Processing Unit, CLOCS can switch context in less than the time required to execute one instruction. The CLOCS Memory Management Unit provides virtual memory without degrading context-switch time as long as the new process is located in physical memory. Analyses of the architecture show that CLOCS throughput performance approaches the performance of contemporary RISC workstations and that it is well suited for real-time applications. Because these analyses showed promise for the CLOCS architecture, a register-transfer level implementation was designed and simulated to estimate more accurately the performance of a feasible CLOCS computer system. Although many standard implementation techniques were not useful, a technique called short-circuiting reduced memory references by 15%. On the Dhrystone integer benchmark program, CLOCS performed at least 30% as fast as contemporary workstations constructed from the same electronic technologies, and further significant improvement of CLOCS performance is possible. By using this lower bound on CLOCS throughput performance, the proper architecture can be identified for an application with challenging context-switch requirements.

Deb, Diptorup (2019)

“Automating Data Layout Decisions in Domain-Specific Languages”
Under the direction of Rob Fowler
Electronic copy available

Extreme scale computing requires simultaneous achievement of programmer productivity and hardware computational efficiency. This has been a long-standing open challenge in high-performance computing. The challenge has been exacerbated by the onset of multi- and many-core CPUs and accelerators. Only a few expert programmers have been able to hand-code the domain-specific data transformations and vectorization schemes needed to extract the best possible performance on such architectures. In this research, we examined the possibility of automating these methods by developing a domain-specific language (DSL) framework.

Our DSL approach extends C++14 by embedding into it a high-level data parallel array language, and uses a domain-specific compiler to compile the array language into hybrid parallel code. We implemented an array index space transformation algebra within this high-level array language to define array data layouts and distributions. This formed the basis of two important optimizations inside the compiler. The compiler introduces a novel method of speculative SIMD vectorization based on array data layouts. This technique is shown to outperform the default auto-vectorization strategy by up to 40% for stencil computations. Additionally, the compiler is able to automate distributed data movement and overlap local compute with remote data movement using polyhedral integer set analysis.  Along with these main innovations, we developed a new technique using C++ template metaprogramming for developing embedded DSLs using C++. We also proposed a domain-specific compiler intermediate representation that simplifies data flow analysis of abstract DSL constructs.

We evaluated our framework by constructing a DSL for the HPC grand-challenge domain of lattice quantum chromodynamics. Our DSL yielded performance gains of up to twice the flop rate over existing production C code for selected kernels. This gain in performance was obtained while using less than one-tenth the lines of code. The performance of this DSL was also competitive with the best hand-optimized and hand-vectorized code, and is an order of magnitude better than existing production DSLs.

Devi, UmaMaheswari C. (2006)

“Soft Real-Time Scheduling on Multiprocessors”
Under the direction of James H. Anderson
Electronic copy available

The design of real-time systems is being impacted by two trends. First, tightly-coupled multiprocessor platforms are becoming quite common. This is evidenced by the availability of affordable symmetric shared-memory multiprocessors and the emergence of multicore architectures. Second, there is an increase in the number of real-time systems that require only soft real-time guarantees and have workloads that necessitate a multiprocessor. Examples of such systems include some tracking, signal-processing, and multimedia systems. Due to the above trends, cost-effective multiprocessor-based soft real-time system designs are of growing importance. Most prior research on real-time scheduling on multiprocessors has focused only on hard real-time systems. In a hard real-time system, no deadline may ever be missed. To meet such stringent timing requirements, all known theoretically optimal scheduling algorithms tend to preempt process threads and migrate them across processors frequently, and also impose certain other restrictions. Hence, the overheads of such algorithms can significantly reduce the amount of useful work that is accomplished and limit their practical implementation. On the other hand, non-optimal algorithms that are more practical suffer from the drawback that their validation tests require workload restrictions that can approach roughly 50% of the available processing capacity. Thus, for soft real-time systems, which can tolerate occasional or bounded deadline misses, and hence, allow for a trade-off between timeliness and improved processor utilization, the existing scheduling algorithms or their validation tests can be overkill. The thesis of this dissertation is:

  • Processor utilization can be improved on multiprocessors while ensuring non-trivial soft real-time guarantees for different soft real-time applications, whose preemption and migration overheads can span different ranges and whose tolerances to tardiness are different, by designing new scheduling algorithms, simplifying optimal ones, and developing new validation tests.

The above thesis is established by developing validation tests that are sufficient to provide soft real-time guarantees under non-optimal (but more practical) algorithms, designing and analyzing a new restricted-migration scheduling algorithm, determining the guarantees on timeliness that can be provided when some limiting restrictions of known optimal algorithms are relaxed, and quantifying the benefits of the proposed mechanisms through simulations. First, we show that both preemptive and non-preemptive global earliest-deadline-first (EDF) scheduling can guarantee bounded tardiness (that is, lateness) to every recurrent real-time task system while requiring no restriction on the workload (except that it not exceed the available processing capacity). The tardiness bounds that we derive can be used to devise validation tests for soft real-time systems that are EDF-scheduled. Though overheads due to migrations and other factors are lower under EDF (than under known optimal algorithms), task migrations are still unrestricted. This may be unappealing for some applications, but if migrations are forbidden entirely, then bounded tardiness cannot always be guaranteed. Hence, we consider providing an acceptable middle path between unrestricted-migration and no-migration algorithms, and as a second result, present a new algorithm that restricts, but does not eliminate, migrations. We also determine bounds on tardiness that can be guaranteed under this algorithm. Finally, we consider a more efficient but non-optimal variant of an optimal class of algorithms called Pfair scheduling algorithms. We show that under this variant, called earliest-pseudo-deadline-first (EPDF) scheduling, significantly more liberal restrictions on workloads than previously known are sufficient for ensuring a specified tardiness bound. We also show that bounded tardiness can be guaranteed if some limiting restrictions of optimal Pfair algorithms are relaxed. The algorithms considered in this dissertation differ in the tardiness bounds guaranteed and overheads imposed. Simulation studies show that these algorithms can guarantee bounded tardiness for a significant percentage of task sets that are not schedulable in a hard real-time sense. Furthermore, for each algorithm, conditions exist in which it may be the preferred choice.
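
A minimal sketch of the kind of validation test implied by the first result above, assuming a sporadic task model with utilizations c_i/p_i: bounded tardiness under global EDF needs only that no task over-utilize a single processor and that total utilization not exceed the platform capacity (the tardiness bound itself, which depends on task parameters, is omitted here).

```python
# Sketch of a soft real-time (bounded-tardiness) admission check for global EDF.
def gedf_bounded_tardiness_ok(tasks, m):
    """tasks: list of (worst_case_execution_cost, period); m: number of processors."""
    utils = [c / p for (c, p) in tasks]
    return all(u <= 1.0 for u in utils) and sum(utils) <= m

print(gedf_bounded_tardiness_ok([(2, 3), (2, 3), (2, 3)], m=2))   # True: total U = 2.0
print(gedf_bounded_tardiness_ok([(2, 3), (2, 3), (2, 3)], m=1))   # False: over capacity
```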

Dunigan Jr., Thomas H. (1978)

“The Design of a Computer System with All-Electronic Files”
Under the direction of Frederick P. Brooks, Jr.

A new computer memory device is postulated and its hardware/software implications are investigated. The memory device is defined to have an access time of ten microseconds and a ten millicent cost-per-bit, slower and cheaper than current main memories but faster than electro-mechanical devices (drums and disks). The current uses of mass memories are investigated, and a set of device parameters that would serve these uses is described. Alternative architectures for interfacing the new device to the central processing unit are analyzed. The analysis shows that treating the new device in a cache-like memory hierarchy yields the most desirable configuration. Techniques for including the device as part of a multi-level, hardware-controlled cache-like memory hierarchy are investigated. Design decisions affecting the performance between adjacent levels of the hierarchy are analyzed, including block size, fetch and replacement policies, block addressing, store-through, and write-back. Algorithms for determining the number of levels, their capacities, and access times are presented, with analyses of the effect of task-switch time on memory-fault management. The analyses show that optimum cost-performance is obtained when the device is the slowest member of a three-level cache-like memory hierarchy. Alternatives for address-space and name-space management are analyzed assuming that the new device is part of a billion-byte processor address space. Segmentation schemes and file systems are compared as alternatives for managing the information on the postulated billion-byte, directly addressable device. The analyses show that segmentation is a more effective management policy. The new device’s effect on third-generation data management systems and control program structure is investigated. The new device would reduce I/O traffic on existing third-generation computing systems by 90%. Most access methods and some utilities can be eliminated, and the structure of the operating system is freed from constraints of limited main-memory space.

Dunn, David (2019)

“Deformable Beamsplitters: Enhancing Perception with Wide Field of View, Varifocal Augmented Reality Displays”
Under the direction of Henry Fuchs

An augmented reality head-mounted display with full environmental awareness could present data in new ways and provide a new type of experience, allowing seamless transitions between real life and virtual content. However, creating a light-weight optical see-through display providing both focus support and wide field of view remains a challenge.

This dissertation describes a new dynamic optical element, the deformable beamsplitter, and its applications for wide field of view, varifocal, augmented reality displays. Deformable beamsplitters combine a traditional deformable membrane mirror and a beamsplitter into a single element, allowing reflected light to be manipulated by the deforming membrane mirror, while transmitted light remains unchanged. This research enables both single element optical design and correct focus while maintaining a wide field of view, as demonstrated by the description and analysis of two prototype hardware display systems which incorporate deformable beamsplitters. When looking through these displays, as a user changes the depth of their gaze, the focus of virtual content can quickly be altered to match the real world by simply modulating air pressure behind the deformable beamsplitter, thus addressing vergence–accommodation conflict.

Two user studies verify the display prototypes’ capabilities and show the potential of the display in enhancing human performance in quickly perceiving visual stimuli. This shows that near-eye displays built with deformable beamsplitters allow for simple optical designs that enable wide field of view and comfortable viewing experiences with the potential to enhance user perception.

Dwyer, Christopher L. (2003)

“Self-Assembled Computer Architecture: Design and Fabrication Theory”
Under the direction of Russell M. Taylor
Electronic copy available

This dissertation explores the design and fabrication of massively parallel computers using self-assembling electronic circuitry. A DNA-guided self-assembly method, inspired by discoveries in chemistry, materials science, and physics, is used to develop an argument for the feasibility of constructing complex circuitry. The fabrication yield of such a process is calculated. Together, these form the foundation for a discussion of the computer architectures and implementations that this self-assembling process enables. Simulations estimate the electrical performance of the components used in the self-assembly process. Traditional drift-diffusion simulations were used to evaluate a ring-gated field effect transistor, and the results were applied to a circuit-level simulation to evaluate specific circuit topologies. These circuits were then grouped into implementation-level components (logic gates, memory elements, etc.) and used in an architecture-level behavior simulator. The electrical performance results enable an informed evaluation of higher-level computer designs that could be built using self-assembly. Estimates of the performance, including power consumption, of two architectures are presented. These architectures appear to be impractical without a self-assembling fabrication process and are shown to have remarkable advantages over conventional computing machines. These machines are estimated to be nearly three orders of magnitude faster than the fastest next-generation supercomputer (IBM BlueGene/L) on certain classes of problems.

Dybvig, R. Kent (1987)

“Three Implementation Models for Scheme”
Under the direction of Gyula A. Mago
Department of Computer Science Technical Report TR87-011
Electronic copy available

This dissertation presents three implementation models for the Scheme Programming Language. The first is a heap-based model used in some form in most Scheme implementations to date; the second is a new stack-based model that is considerably more efficient than the heap-based model at executing most programs; and the third is a new string-based model intended for use in a multiple-processor implementation of Scheme. The heap-based model allocates several important data structures in a heap, including actual parameter lists, binding environments, and call frames. The stack-based model allocates these same structures on a stack whenever possible. This results in less heap allocation, fewer memory references, shorter instruction sequences, less garbage collection, and more efficient use of memory. The string-based model allocates versions of these structures right in the program text, which is represented as a string of symbols. In the string-based model, Scheme programs are translated into an FFP language designed specifically to support Scheme. Programs in this language are directly executed by the FFP machine, a multiple-processor string-reduction computer. The stack-based model is of immediate practical benefit; it is the model used by the author’s Chez Scheme system, a high-performance implementation of Scheme. The string-based model will be useful for providing Scheme as a high-level alternative to FFP on the FFP machine once the machine is realized.

E

Eastman, Caroline M. (1977)

“A Tree Algorithm for Nearest Neighbor Searching in Document Retrieval Systems”
Under the direction of Stephen F. Weiss
Electronic copy available

The problem of finding nearest neighbors in a document collection is a special case of associative retrieval, in which searches are performed using more than one key. The concept tree algorithm, a new associative retrieval algorithm suitable for document retrieval systems that use similarity measures to select documents, is described. The basic structure used is a binary tree; at each node a set of keys (concepts) is tested to select the most promising branch. Backtracking to initially rejected branches is allowed and often necessary. The average search time required by this algorithm is O((log2 N)^k), where N is the number of documents in the collection, and k is a system-dependent parameter. A series of experiments with a small document collection confirm the predictions made using the analytic model; the value of k is approximately 4 in this situation. The concept tree algorithm is compared with two other searching algorithms, sequential search and clustered search. For large collections, the predicted average search time for a concept tree search is less than that for a sequential search and greater than that for a clustered search. Only the sequential search and the concept tree search guarantee that the near neighbors found are actually the nearest neighbors. The concept tree search could be used to advantage in some document retrieval systems. The algorithm can also be considered for other applications requiring associative retrieval.
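
The search strategy can be sketched generically; in the toy Python fragment below (placeholder data structures and scores, not the dissertation's concept tree), each internal node scores its branches against the query, the more promising branch is searched first, and the other branch is revisited only if its score indicates it could still contain a closer document. For the search to remain exact, the branch score must upper-bound the similarity attainable within that branch.

```python
# Toy sketch of a tree search with backtracking over concept sets.
class Node:
    def __init__(self, docs=None, left=None, right=None, concepts=None):
        self.docs, self.left, self.right = docs or [], left, right
        self.concepts = concepts or set()     # union of concepts found below this node

def promise(node, query):
    return len(node.concepts & query)         # upper-bounds similarity in this branch

def similarity(query, doc):
    return len(query & doc)                   # shared concepts (placeholder measure)

def search(node, query, best=(None, -1)):
    if node.left is None and node.right is None:
        for doc in node.docs:
            s = similarity(query, doc)
            if s > best[1]:
                best = (doc, s)
        return best
    first, second = sorted([node.left, node.right],
                           key=lambda n: promise(n, query), reverse=True)
    best = search(first, query, best)
    if promise(second, query) > best[1]:      # backtrack only when it may still pay off
        best = search(second, query, best)
    return best

leaf1 = Node(docs=[{"cat", "dog"}], concepts={"cat", "dog"})
leaf2 = Node(docs=[{"car", "road"}], concepts={"car", "road"})
print(search(Node(left=leaf1, right=leaf2), query={"dog", "cat"}))
```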

Eastwood, Brian (2009)

“Multiple Layer Image Analysis for Video Microscopy”
Under the direction of Russell Taylor
Electronic copy available

Motion analysis is a fundamental problem that serves as the basis for many other image analysis tasks, such as structure estimation and object segmentation. Many motion analysis techniques assume that objects are opaque and non-reflective, asserting that a single pixel is an observation of a single scene object. This assumption breaks down when observing semitransparent objects—a single pixel is an observation of the object and whatever lies behind it. This dissertation is concerned with methods for analyzing multiple layer motion in microscopy, a domain where most objects are semitransparent. I present a novel approach to estimating the transmission of light through stationary, semitransparent objects by estimating the gradient of the constant transmission observed over all frames in a video. This enables removing the non-moving elements from the video, providing an enhanced view of the moving elements. I present a novel structured illumination technique that introduces a semitransparent pattern layer to microscopy, enabling microscope stage tracking even in the presence of stationary, sparse, or moving specimens. Magnitude comparisons at the frequencies present in the pattern layer provide estimates of pattern orientation and focal depth. Two pattern tracking techniques are examined, one based on phase correlation at pattern frequencies, and one based on spatial correlation using a model of pattern layer appearance based on microscopy image formation. Finally, I present a method for designing optimal structured illumination patterns tuned for constraints imposed by specific microscopy experiments. This approach is based on analysis of the microscope’s optical transfer function at different focal depths.
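
The phase-correlation idea used for pattern tracking is standard and can be shown in a few lines; the sketch below is the textbook formulation for estimating an integer image shift, not the dissertation's tracker.

```python
# Textbook phase correlation: estimate the integer translation between two images.
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the (dy, dx) translation taking image a to image b."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross_power = np.conj(A) * B
    cross_power /= np.abs(cross_power) + 1e-12         # keep phase information only
    corr = np.real(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    return (dy if dy <= h // 2 else dy - h,             # wrap indices to signed shifts
            dx if dx <= w // 2 else dx - w)

a = np.random.rand(64, 64)
b = np.roll(a, shift=(5, -3), axis=(0, 1))              # b is a translated by (5, -3)
print(phase_correlation_shift(a, b))                    # (5, -3)
```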

Eberly, David H. (1994)

“Geometric Methods for Analysis of Ridges in N-Dimensional Images”
Under the direction of Stephen M. Pizer
Department of Computer Science Technical Report TR94-066
Electronic copy available

Image segmentation is a process which identifies and labels objects in an image. The goals of this dissertation are to produce an algorithm for segmenting an image in the way that a front-end vision system does, using the local geometry induced by the intensity values of the image, to create multiscale representations of the objects that allow exploration of the details of the image via an interactive computer system, and to provide a formal geometric foundation for multiscale image analysis. The geometric concept of ridges is discussed. These structures are used to describe the shape properties of objects in an image. Various definitions are given for d-dimensional ridge structures of n-dimensional images. Ridges are used in conjunction with multiscale methods to segment an image. The output of the segmentation is a single tree and the objects in the image are represented as unions and differences of subtrees. The tree and image are used as input to a visualization tool which allows the user to explore the image interactively. A formal foundation for multiscale analysis is given which involves non-Euclidean geometry. Metric selection for scale space is naturally related to invariances required for a particular application. The anisotropic diffusion process for generating multiscale data is automatically determined by the metric. Moreover, the metric is used as an aid in developing fast, stable, and adaptive numerical algorithms for solving the nonlinear diffusion equations. This geometric foundation for multiscale analysis provides a natural set of tools for extracting information about objects in an image.

Ellsworth, David A. (1996)

“Polygon Rendering for Interactive Visualization on Multicomputers”
Under the direction of Henry Fuchs
Electronic copy available

This dissertation identifies a class of parallel polygon rendering algorithms suitable for interactive use on multicomputers, and presents a methodology for designing efficient algorithms within that class. The methodology was used to design a new polygon rendering algorithm that uses the frame-to-frame coherence of the screen image to evenly partition the rasterization at reasonable cost. An implementation of the algorithm on the Intel Touchstone Delta at Caltech, the largest multicomputer at the time, renders 3.1 million triangles per second. The rate was measured using an 806,640-triangle model and 512 i860 processors, and includes back-facing triangles. A similar algorithm is used in Pixel-Planes 5, a system that has specialized rasterization processors, and which, when introduced, had a benchmark score for the SPEC Graphics Performance Characterization Group “head” benchmark that was nearly four times faster than that of commercial workstations. The algorithm design methodology also identified significant performance improvements for Pixel-Planes 5. All fully parallel polygon rendering algorithms have a sorting step to redistribute primitives or fragments according to their screen location. The algorithm class mentioned above is one of four classes of parallel rendering algorithms identified; the classes are differentiated by the type of data that is communicated between processors. The identified algorithm class, called sort-middle, sorts screen-space primitives between the transformation and rasterization stages. The design methodology uses simulations and performance models to help make the design decisions. The resulting algorithm partitions the screen during rasterization into adaptively sized regions with an average of four regions per processor. The region boundaries are only changed when necessary: when one region is the rasterization bottleneck. On smaller systems, the algorithm balances the loads by assigning regions to processors once per frame, using the assignments made during one frame in the next. However, when 128 or more processors are used at high frame rates, the load balancing may take too long, and so static load balancing should be used. Additionally, a new all-to-all communication method improves the algorithm’s performance on systems with more than 64 processors.

Enliang, Zheng (2016)

“Toward 3D Reconstruction of Static and Dynamic Objects”
Under the direction of Jan-Michael Frahm

The goal of image-based 3D reconstruction is to build up a spatial understanding of the world from a collection of images. For applications that seek to model generic real-world scenes, it is important that the reconstruction methods used are able to characterize both static scene elements (e.g. trees and buildings) as well as dynamic objects (e.g. cars and pedestrians). However, due to many inherent ambiguities in the reconstruction problem, recovering this 3D information with accuracy, robustness, and efficiency is a considerable challenge. To advance the research frontier for image-based 3D modeling, this dissertation focuses on three challenging problems in static scene and dynamic object reconstruction.

We first target the problem of static scene depthmap estimation from crowd-sourced datasets (i.e. photos collected from the Internet). While achieving high-quality depthmaps using images taken under a controlled environment is already a difficult task, heterogeneous crowd-sourced data presents a unique set of challenges for multi-view depth estimation, including varying illumination and occasional occlusions. We propose a depthmap estimation method that demonstrates high accuracy, robustness, and scalability on a large number of photos collected from the Internet.

Compared to static scene reconstruction, the problem of dynamic object reconstruction from monocular images is fundamentally ambiguous when not imposing any additional assumptions. This is because having only a single observation of an object is insufficient for valid 3D triangulation, which typically requires concurrent observations of the object from multiple viewpoints. Assuming that dynamic objects of the same class (e.g. all the pedestrians walking on a sidewalk) move in a common path in the real world, we develop a method that estimates the 3D positions of the dynamic objects from unstructured monocular images. Experiments on both synthetic and real datasets illustrate the solvability of the problem and the effectiveness of our approach.

Finally, we address the problem of dynamic object reconstruction from a set of unsynchronized videos capturing the same dynamic event, instead of using monocular images as input. This problem is of great interest because, due to the increased availability of portable capture devices, captures using multiple unsynchronized videos are common in the real world. To resolve the difficulty that arises from non-concurrent captures and unknown temporal overlap among video streams, we propose a self-expressive dictionary learning framework, where the dictionary entries are defined as the collection of temporally varying structures. The experimental results show that this novel approach effectively solves the previously unsolved problem.

Erikson, Carl M. (2000)

“Hierarchical Levels of Detail to Accelerate the Rendering of Large Static and Dynamic Polygonal Environments”
Under the direction of Dinesh Manocha
Department of Computer Science Technical Report TR00-012
Electronic copy available

Interactive visualization of large three-dimensional polygonal datasets is an important topic in the field of computer graphics and scientific visualization. These datasets can be static, as in walkthrough applications, or dynamic, as in CAD scenarios where a designer moves, adds, or deletes parts. In many cases, the number of primitives in these models overwhelms the rendering performance of current graphics systems. One method for accelerating the rendering of these environments is polygonal simplification. This dissertation describes the creation and application of levels of detail, or LODs, and hierarchical levels of detail, or HLODs, to accelerate the rendering of large static and dynamic polygonal environments. The principal idea of this work is that simplification methods should not always treat objects, or collections of polygons, in a scene independently. Instead, they may be able to produce higher-quality and more drastic approximations by simplifying disjoint regions of a scene together. This idea is applicable to the creation of LODs for individual objects that consist of unconnected polygons, and for the construction of HLODs, or approximations representing groups of static or dynamic objects. We present a polygonal simplification algorithm called General and Automatic Polygonal Simplification, or GAPS, that excels at merging disjoint polygons through the use of an adaptive distance threshold and surface area preservation. We use GAPS to create LODs and HLODs for nodes in the scene graphs of large polygonal environments. These approximations enable two rendering modes, one that allows the user to specify a desired image quality and another that targets a frame rate. When objects in the scene move, our algorithm updates HLODs that have become inaccurate or invalid using asynchronous simplification processes. Our algorithm currently handles scenes with limited rigid-body dynamic movement. We demonstrate its performance on several large CAD environments including a 13 million polygon power plant model. Our approach has achieved nearly an order of magnitude improvement in average rendering speed with little or no loss in image quality for several viewing paths of complex environments.

Erickson, Jeremy P. (2014)

“Managing Tardiness Bounds and Overload in Soft Real-Time Systems”
Under the direction of Jim Anderson

In some systems, such as future generations of military unmanned aerial vehicles (UAVs), different software running on the same machine will require different types of timing correctness. For example, flight control software has hard real-time (HRT) requirements—if a job (i.e., invocation of a program) completes late, then safety may be compromised, so jobs must be guaranteed to complete within short deadlines. However, mission control software is likely to have soft real-time (SRT) requirements—if a job completes a fraction of a second late, the result is not likely to be catastrophic, but unbounded lateness would continue to be unacceptable.

The global earliest-deadline-first (G-EDF) scheduler has been demonstrated to be useful for scheduling software with SRT requirements on a multiprocessor, and the multicore mixed-criticality (MC2) framework that uses G-EDF for SRT scheduling has been proposed to safely mix HRT and SRT work on multicore UAV platforms. However, this prior work has limitations that are addressed by this dissertation.

G-EDF is attractive for SRT systems because it allows the system to be fully utilized with reasonable overheads. Furthermore, previous analysis of G-EDF can provide “lateness bounds” on the amount of time between a job’s deadline and its completion time. However, smaller lateness bounds would be preferable, and some tasks (i.e., programs) may be more sensitive to lateness than others. In this dissertation, we explore the broader category of G-EDF-like (GEL) schedulers that have identical implementation and overhead characteristics to G-EDF. We show that by choosing GEL schedulers other than G-EDF, better lateness characteristics can be achieved. Furthermore, we show that certain modifications can further improve lateness bounds while maintaining reasonable overheads. Specifically, successive jobs from the same task can be permitted to run in parallel with each other, or jobs can be split into smaller pieces by the operating system (OS).

Previous analysis of MC2 has always used less pessimistic assumptions about execution times when analyzing SRT work than when analyzing HRT work. It is possible that these assumptions can be violated, creating an overload that causes SRT guarantees to be violated. Furthermore, even in the expected case that such violations are transient, the system is not guaranteed to return to its normal operation. In this dissertation, we also provide a mechanism that can be used to provide such recovery.

Estes, Dave (2019)

“n-Dimensional Display Interface: A Replacement for the Venerable Framebuffer”
Under the direction of Ketan Mayer-Patel

The n-Dimensional Display Interface (nDDI) is the result of an exploration of a new abstraction for display interfaces. Modern and future display use cases are pushing the boundaries of what is possible with even the highest-speed data links connecting computing devices and displays. The n-Dimensional Display Interface was designed to meet a set of guiding principles. The result is an abstraction just above the framebuffer that provides a great deal of backward compatibility and scalability to deliver transmission savings for the most challenging use cases such as 8k displays, display walls, remotely connected displays, and low-bandwidth mobile displays. The first two phases of research demonstrated the concept of backward compatibility and then extended the architecture for blending while leveraging video semantics to support full-screen video. The third phase finalized the architecture and tackled a larger, more complicated use of nDDI: very large display walls.

F

Faith, Rickard E. (1998)

“Debugging Programs After Structure-Changing Transformation”
Under the direction of Jan F. Prins
Electronic copy available

Translators convert a program from one language to another, and are used to solve a wide range of problems, such as the construction of compilers, optimizers, and preprocessors. Although many tools support the creation of translators, these tools do not provide integrated support for debugging the translator or the output of the translator. This dissertation investigates the tracking of information necessary to provide debugging capabilities for those translators that are structured as a set of program transformations operating on a tree-based representation. In this setting I describe how basic debugging capabilities can be automatically and transparently defined without semantic knowledge of the languages being translated. Furthermore, advanced debugging support, relying on the semantics of the languages and transformations, can be incorporated into this basic framework in a systematic manner. To evaluate this approach I have constructed KHEPERA, a program transformation system with integral support for the construction of debuggers. With this system I have explored debugging capabilities for traditional compiler optimizations, for more aggressive loop and parallelizing transformations, and for the transformation process itself. I also present algorithms that increase the performance of the transformation process.

Faulk, Stuart R. (1989)

“State Determination in Hard-Embedded Systems”
Under the direction of David L. Parnas

“Hard-embedded” describes a class of real-time systems with tight time and memory constraints. We are interested in developing documentation and software for hard-embedded systems that are easy to understand, change, and maintain. Part of what makes these systems difficult to understand or change is a failure to separate concerns where system state information must be shared among system tasks. We present a systematic approach to 1) writing software requirements, 2) designing software modules, and 3) writing code so that loosely connected tasks sharing state information can be developed and understood independently. We construct a formal model for system states and state transitions based on finite state automata. A method for specifying system requirements, based on the model, is developed and applied to a real system. The requirements are realized in the design of an information hiding module that keeps track of state information at run-time. A formal specification of the module interface is provided and we show that the design preserves ease of change for programs sharing state information. A technique that aids separation of concerns is development of a system as a set of cooperating sequential processes. Where a hard-embedded system is developed as loosely connected processes, a synchronization mechanism must be used to allow processes to signal and synchronize with changes in the system state. Since there are no established criteria for choosing an appropriate synchronization device, we describe a set of objective criteria appropriate for hard-embedded systems. We demonstrate that the State Transition Event (STE) synchronization mechanism satisfies our criteria. As part of that demonstration, we give a procedure for translating a formal specification of synchronization requirements into an implementation in terms of STE variables. We demonstrate the effectiveness of the overall approach by developing a real example from requirements through implementation. Experience with the techniques on a large system (the U.S. Naval Research Laboratory’s SCR A-7E aircraft software) is described.
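
As a loose illustration of the kind of interface such an information-hiding state module might export (a sketch of my own using a Python condition variable, not the STE mechanism defined in the dissertation), tasks can read the current state or block until a particular state is entered, without sharing variables directly:

```python
import threading

class StateModule:
    """Hypothetical information-hiding state module with transition signaling."""

    def __init__(self, initial):
        self._state = initial
        self._cond = threading.Condition()

    def current(self):
        with self._cond:
            return self._state

    def transition(self, new_state):
        with self._cond:
            self._state = new_state
            self._cond.notify_all()        # wake tasks awaiting a state change

    def await_state(self, wanted):
        with self._cond:
            self._cond.wait_for(lambda: self._state == wanted)
```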

Feng, David (2010)

“Visualization of Uncertain Multivariate 3D Scalar Fields”
Under the direction of Russell M. Taylor II
Electronic copy available

This dissertation presents a visualization system that enables the exploration of relationships in multivariate scalar volume data with statistically uncertain values. The n-Dimensional Volume Explorer (nDive) creates multiple visualizations that emphasize different features of the data. nDive provides data selection mechanisms that are linked between views to enable the comparison of separate data populations in the presence of uncertainty. Incorporating representations of uncertainty into data visualizations is crucial for preventing viewers from drawing conclusions based on untrustworthy data points. The challenge of visualizing uncertain data increases with the complexity of the data set. nDive separates the visualization into multiple views: a glyph-based spatial visualization and abstract multivariate density plots that incorporate uncertainty. The spatial visualization, Scaled Data-Driven Spheres (SDDS), distributes (for each variable) a set of scaled, colored spheres in the sample space. A user study demonstrates that viewers are faster and more accurate using SDDS to identify spatial values and relationships as compared to superquadric glyphs, an alternative technique. The abstract plots complement SDDS by preattentively focusing the viewer on trustworthy data points using the probability density function of the data. These views, coupled with novel interaction techniques that utilize data value uncertainty, enable the identification of relationships while avoiding false conclusions based on uncertain data. The primary application of nDive is aiding radiologists who use magnetic resonance spectroscopy (MRS) to better understand the composition of abnormalities such as brain tumors and improve diagnostic accuracy. This work demonstrates how nDive has been successfully used to identify metabolic signatures for multiple types of tumors.

Fisher, Nathan Wayne (2009)

“The Multiprocessor Real-Time Scheduling of General Task Systems”
Under the direction of Sanjoy K. Baruah
Electronic copy available

The recent emergence of multicore and related technologies in many commercial systems has increased the prevalence of multiprocessor architectures. Contemporaneously, real-time applications have become more complex and sophisticated in their behavior and interaction. Inevitably, these complex real-time applications will be deployed upon these multiprocessor platforms and require temporal analysis techniques to verify their correctness. However, most prior research in multiprocessor real-time scheduling has addressed the temporal analysis only of Liu and Layland task systems. The goal of this dissertation is to extend real-time scheduling theory for multiprocessor systems by developing temporal analysis techniques for more general task models such as the sporadic task model, the generalized multiframe task model, and the recurring real-time task model. The thesis of this dissertation is:

  • Optimal online multiprocessor real-time scheduling algorithms for sporadic and more general task systems are impossible; however, efficient, online scheduling algorithms and associated feasibility and schedulability tests, with provably bounded deviation from any optimal test, exist.

To support our thesis, this dissertation develops feasibility and schedulability tests for various multiprocessor scheduling paradigms. We consider three classes of multiprocessor scheduling based on whether a real-time job may migrate between processors: full-migration, restricted-migration, and partitioned. For all general task systems, we obtain feasibility tests for arbitrary real-time instances under the full- and restricted-migration paradigms. Despite the existence of tests for feasibility, we show that optimal online scheduling of sporadic and more general systems is impossible. Therefore, we focus on scheduling algorithms that have constant-factor approximation ratios in terms of an analysis technique known as resource augmentation. We develop schedulability tests for scheduling algorithms, earliest-deadline-first (edf) and deadline-monotonic (dm), under full-migration and partitioned scheduling paradigms. Feasibility and schedulability tests presented in this dissertation use the workload metrics of demand-based load and maximum job density and have provably bounded deviation from optimal in terms of resource augmentation. We show that the demand-based load and maximum job density metrics may be exactly computed in pseudo-polynomial time for general task systems and approximated in polynomial time for sporadic task systems.

Fletcher, P. Thomas (2004)

“Statistical Variability in Nonlinear Spaces: Application to Shape Analysis and DT-MRI”
Under the direction of Stephen M. Pizer and Sarang Joshi
Electronic copy available

Statistical descriptions of anatomical geometry play an important role in many medical image analysis applications. For instance, geometry statistics are useful in understanding the structural changes in anatomy that are caused by growth and disease. Classical statistical techniques can be applied to study geometric data that are elements of a linear space. However, the geometric entities relevant to medical image analysis are often elements of a nonlinear manifold, in which case linear multivariate statistics are not applicable. This dissertation presents a new technique called principal geodesic analysis for describing the variability of data in nonlinear spaces. Principal geodesic analysis is a generalization of a classical technique in linear statistics called principal component analysis, which is a method for computing an efficient parameterization of the variability of linear data. A key feature of principal geodesic analysis is that it is based solely on intrinsic properties, such as the notion of distance, of the underlying data space. The principal geodesic analysis framework is applied to two driving problems in this dissertation: (1) statistical shape analysis using medial representations of geometry, which is applied within an image segmentation framework via posterior optimization of deformable medial models, and (2) statistical analysis of diffusion tensor data intended as a tool for studying white matter fiber connection structures within the brain imaged by magnetic resonance diffusion tensor imaging. It is shown that both medial representations and diffusion tensor data are best parameterized as Riemannian symmetric spaces, which are a class of nonlinear manifolds that are particularly well-suited for principal geodesic analysis. While the applications presented in this dissertation are in the field of medical image analysis, the methods and theory should be widely applicable to many scientific fields, including robotics, computer vision, and molecular biology.
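
As a rough mathematical sketch of the idea (the notation here is mine and may differ from the dissertation's), principal geodesic analysis replaces the linear mean and principal subspaces of PCA with intrinsic analogues on a manifold M with geodesic distance d:

```latex
% Intrinsic (Frechet) mean of data x_1, ..., x_N on the manifold M:
\mu = \arg\min_{x \in M} \sum_{i=1}^{N} d(x, x_i)^2
% Principal geodesic directions, chosen in the tangent space T_\mu M:
v_k = \arg\max_{\substack{\|v\| = 1 \\ v \perp v_1, \dots, v_{k-1}}}
      \sum_{i=1}^{N} d\bigl(\mu,\, \pi_{H_k(v)}(x_i)\bigr)^2,
\qquad
H_k(v) = \exp_\mu\bigl(\operatorname{span}\{v_1, \dots, v_{k-1}, v\}\bigr),
% where \pi_H denotes closest-point projection onto the geodesic submanifold H.
% In practice this is commonly approximated by ordinary PCA applied to the
% log-map coordinates \log_\mu(x_i) in the tangent space.
```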

Frank, Geoffrey A. (1979)

“Virtual Memory Systems for Closed Applicative Language Interpreters”
Under the direction of Donald F. Stanat

Closed applicative languages include the FFP languages introduced by Backus in the 1977 Turing lecture. In this work, a class of interpreters for these languages is described. These interpreters are designed to be implemented on machines with a two-level storage hierarchy, and are called virtual memory interpreters since they hide the memory hierarchy from the user. Virtual memory interpreters eliminate the need to store the entire program text in main store, and may reduce the execution time and space required to execute some programs by (1) allowing pointers to auxiliary memory to be moved instead of expressions, and (2) allowing data sharing in auxiliary store. A virtual memory interpreter is shown to be a correct implementation of its closed applicative language by describing a virtual memory language for the interpreter, and defining a translation function that (1) maps the expressions of the closed applicative language to the expressions of the virtual memory language and (2) preserves the meanings of the languages. Mago has designed a cellular processor system for interpreting closed applicative languages that can take full advantage of the potential parallelism in a program if there is enough space in the machine to hold the program throughout execution. In order to demonstrate the capabilities of a virtual memory interpreter, a program for a file update problem is analyzed, and the execution times of this program on a Mago machine and on a virtual memory machine based on a Mago machine are compared. This example indicates that for some programs execution on a virtual memory interpreter can save time and space when compared with execution on a Mago machine.

Fredericksen, R. Eric (1993)

“The Biological Computation of Visual Motion”
Under the direction of Stephen M. Pizer and Willem H. van de Grind (University of Utrecht)
Department of Computer Science Technical Report TR93-036
Electronic copy available

The mechanisms underlying the computation of visual motion in biological systems are explored. Psychophysical experiments in visual perception of apparent motion (sequences of static images as in film, video, or computer controlled displays) were performed using random dot apparent motion stimuli designed to isolate populations of biological motion detectors. The results of these experiments along with extant information in the psychophysical, neurophysiological, and neuroanatomical literature are used to construct mathematical models for the distribution of biological motion detectors in the visual field, and for the mechanisms underlying the combination of motion detector outputs over visual space and time (stimulus duration). The mathematical models are shown to capture the major characteristics of the visibility of motion in humans. The detector distribution is described in terms of the visibility of spatial displacement size and image frame rate, and the dependence of that visibility on stimulus eccentricity in the visual field. The mathematical form of the distribution is a natural consequence of the mapping of visual space to visual cortex combined with a homogeneous distribution of cortical correlation distances, self-organization processes, and the availability of motion information in the visual field. The mechanisms for combination of motion detector responses over visual space and over time are modeled based on known neurophysiology. Preliminary results from simulations of the self-organization of the detector array are consistent with the psychophysical data. A comprehensive model is presented and the implications of the model with respect to the computer controlled display of motion information are discussed.

Fridman, Yonatan (2004)

“Extracting Branching Object Geometry via Cores”
Under the direction of Stephen M. Pizer
Electronic copy available

The extraction of branching objects and their geometry from 3D medical images is the goal of the research described here. The methods I develop and analyze are based on optimization of medial strength embodied in cores, a type of multiscale medial axis. The cores research described here focuses on object branches. The dissertation describes two sets of methods, one that computes 1D cores of tubular objects and another that computes 2D cores of non-tubular, or slab-like, objects. Each of these sets of methods uses a core-following technique that optimizes a medialness measure over position, radius, and orientation. I combine methods for branch-finding, branch-reseeding, and core-termination with core-following to produce a tool that can extract complex branching objects from 3D images without the need for user interaction. I demonstrate the performance of the tubular core methods by extracting blood vessel trees from 3D head MR data, and I demonstrate the performance of the slab-like core methods by extracting kidneys from 3D abdominal CT data. I also integrate the two sets of methods with each other to extract kidneys and adjoining vasculature from CT data. Finally, I perform a thorough analysis of the effects of object geometry on cores using synthetic images. This analysis shows impressive resistance of the tubular core methods to image noise. It also shows that the slab-like core methods are not as robust but are still effective for extracting branching objects from medical images with relatively low noise.

Fritsch, Daniel S. (1993)

“Registration of Radiotherapy Images Using Multiscale Medial Descriptions of Image Structure” (Biomedical Engineering, UNC-Chapel Hill)
Under the direction of James Coggins

The representation of object shape in medical images is an important precursor to many common image interpretation needs, including recognition, registration and measurement. Typically, methods for such computer vision tasks require the preliminary step of image segmentation, often via the detection of object edges. This dissertation presents a new means for describing greyscale objects that obviates the need for edge-finding, i.e., classifying pixels as being either inside or outside of an object boundary. Instead, this technique operates directly on the greyscale image intensity distribution to produce a set of “medialness” measurements at every pixel and across multiple spatial scales that capture more global properties of object structure than edge-based methods. When the set of all medialness measurements at some optimal scale of measurement are considered, a ridge-like intensity structure is formed along the middles of objects and a generalized medial axis is obtained by labeling pixels along this ridge and associating a scale value with each axis position. The result of this procedure is an image description called the Multiscale Medial Axis (MMA), which represents the structure of objects in an image at multiple and simultaneous levels of scale, and which provides several desirable properties for describing the shape of greyscale forms. Application of this image description technique to the automatic registration of radiotherapy planning and treatment images is described.

Fu, Cheng-Yang (2019)

“Single-Shot Object Detector Series and A Future Application of Object Detection”
Under the direction of Alexander C. Berg

Object detection is a core task in computer vision and is essential for many downstream tasks and applications in robotics, self-driving cars, satellite image analysis, and e-commerce. In my thesis I present a series of works on object detection, from basic work on efficient single-shot object detection to future applications, as well as work using detection to explicitly improve other vision tasks. In the first part, I discuss single-shot detection, which is known for its efficiency and ease of implementation in embedded systems, and explain how this work makes object detection more efficient and accurate. In the second part, I present a new operator called Instance Mask Projection (IMP), which projects a predicted instance segmentation in order to use it as a feature for semantic segmentation. Our experiments show the effectiveness of IMP on both clothing parsing (with complex layering, large deformations, and non-convex objects) and street scene segmentation (with many overlapping instances and small objects).

Funk, Shelby Hyat (2004)

“EDF Scheduling on Heterogeneous Multiprocessors” (Biomedical Engineering, UNC-Chapel Hill)
Under the direction of Sanjoy K. Baruah
Electronic copy available

The goal of this dissertation is to expand the types of systems available for real-time applications. Specifically, this dissertation introduces tests that ensure jobs will meet their deadlines when scheduled on a uniform heterogeneous multiprocessor using the Earliest Deadline First (EDF) scheduling algorithm, in which jobs with earlier deadlines have higher priority. On uniform heterogeneous multiprocessors, each processor has a speed s, which is the amount of work that processor can complete in one unit of time. Multiprocessor scheduling algorithms can have several variations depending on whether jobs may migrate between processors, i.e., whether a job that starts executing on one processor may move to another processor and continue executing. This dissertation considers three different migration strategies: full migration, partitioning, and restricted migration. The full migration strategy applies to all types of job sets. The partitioning and restricted migration strategies apply only to periodic tasks, which generate jobs at regular intervals. In the full migration strategy, jobs may migrate at any time provided a job never executes on two processors simultaneously. In the partitioning strategy, all jobs generated by a periodic task must execute on the same processor. In the restricted migration strategy, different jobs generated by a periodic task may execute on different processors, but each individual job can execute on only one processor. The thesis of this dissertation is: Schedulability tests exist for the Earliest Deadline First (EDF) scheduling algorithm on heterogeneous multiprocessors under different migration strategies including full migration, partitioning, and restricted migration. Furthermore, these tests have polynomial-time complexity as a function of the number of processors (m) and the number of periodic tasks (n).

  • The schedulability test with full migration requires two phases: an O(m) one-time calculation, and an O(n) calculation for each periodic task set.
  • The schedulability test with restricted migration requires an O(m + n) test for each multiprocessor / task set system.
  • The schedulability test with partitioning requires two phases: a one-time exponential calculation, and an O(n) calculation for each periodic task set.
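
For intuition, the following is a minimal sketch of the dispatching rule usually assumed in this setting (my own illustration, not one of the dissertation's schedulability tests): at each instant the active jobs with the earliest deadlines execute, with earlier deadlines placed on faster processors.

```python
def edf_dispatch(active_jobs, speeds):
    """Sketch of EDF dispatching on a uniform heterogeneous multiprocessor.

    active_jobs: list of (deadline, job_id) pairs currently eligible to run.
    speeds: list of processor speeds.
    Returns a mapping {processor_index: job_id}; jobs with earlier deadlines
    are placed on faster processors. Illustrative only.
    """
    jobs = sorted(active_jobs)                                    # earliest deadline first
    procs = sorted(range(len(speeds)), key=lambda p: -speeds[p])  # fastest processor first
    return {procs[i]: jobs[i][1] for i in range(min(len(jobs), len(procs)))}
```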

Furst, Jacob D. (1999)

“Height Ridges of Oriented Medialness”
Under the direction of Stephen M. Pizer
Electronic copy available

Shape analysis of objects is an important aspect of medical image processing. Information gained from shape analysis can be used for object segmentation, object-based registration and object visualization. One shape analysis tool is the core, defined to be a height ridge of a medial strength measure made on an image. In this dissertation I present 3D cores, defined here to be optimal scale-orientation height ridges of oriented medial strength measurements. This dissertation covers:

  • a medial strength measurement, Blum-like medialness, that is robust, efficient, and insensitive to intrafigural interference,
  • a new definition for a ridge, the optimal parameter height ridge, and its properties, and
  • an algorithm, Marching Ridges, for extracting cores.

The medial strength measurement uses Gaussian derivatives, so it is less sensitive to noise, and responds to object boundaries at points rather than on entire spheres, so it is faster to calculate and less sensitive to boundaries of other image figures. The Marching Ridges algorithm uses the grid structure of the image domain to identify ridge points as zero-crossings of first derivatives and to track ridges through the image domain.

G

Gallup, David (2010)

“Efficient 3D Reconstruction of Large-Scale Urban Environments from Street-Level Video”
Under the direction of Marc Pollefeys and Jan-Michael Frahm
Electronic copy available

Recovering the 3-dimensional (3D) structure of a scene from 2-dimensional (2D) images is a fundamental problem in computer vision with many applications in computer graphics, entertainment, robotics, transportation, manufacturing, security, etc. In this dissertation, I will show how to automatically and efficiently create detailed 3D models of urban environments from street-level imagery. Modeling large urban areas, even entire cities, is an enormous challenge due to the sheer scale of the problem. Even a partial data capture of the town of Chapel Hill requires millions of frames of street-level video. The methods presented in this dissertation are highly parallel and use little memory, and therefore utilize modern graphics hardware (GPU) technology to process video at the recording frame rate. Also, the structure in urban scenes such as planarity, orthogonality, verticality, and texture regularity can be exploited to achieve 3D reconstructions with greater efficiency, higher quality, and lower complexity. By examining the structure of an urban scene, a multiple-direction plane-sweep stereo method is performed on the GPU in real-time. An analysis of stereo precision leads to a view selection strategy that guarantees constant depth resolution and improves bounds on time complexity. Depth measurements are further improved by segmenting the scene into piecewise-planar and non-planar regions, which is driven by learned planar surface appearance. Finally, depth measurements are fused and the final 3D surface is recovered using a multi-layer heightmap model that produces clean, complete, and compact 3D reconstructions. The effectiveness of these methods is demonstrated by results from thousands of frames of video from a variety of urban scenes.
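
For readers unfamiliar with plane-sweep stereo, the following is a hedged two-view sketch of the basic idea (a single fronto-parallel sweep direction, nearest-neighbor sampling, and per-pixel absolute-difference costs). The function and its camera conventions are my own; the systems described here sweep several plane directions, use many views, aggregate costs, and run on the GPU.

```python
import numpy as np

def plane_sweep_depthmap(ref_img, src_img, K, R, t, depths):
    """Toy fronto-parallel plane-sweep stereo for two grayscale views.

    Convention (an assumption of this sketch): X_src = R @ X_ref + t, the
    reference camera is K[I | 0], and each candidate plane is n^T X = d with
    n = (0, 0, 1). For every depth d, reference pixels are mapped into the
    source image via the plane-induced homography H = K (R + t n^T / d) K^-1,
    and each pixel keeps the depth with the lowest matching cost.
    """
    h, w = ref_img.shape
    n = np.array([0.0, 0.0, 1.0])
    Kinv = np.linalg.inv(K)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # homogeneous ref pixels
    best_cost = np.full(h * w, np.inf)
    best_depth = np.zeros(h * w)
    ref_flat = ref_img.reshape(-1).astype(float)
    for d in depths:
        H = K @ (R + np.outer(t, n) / d) @ Kinv       # homography: reference -> source
        q = H @ pix
        u = np.clip(np.round(q[0] / q[2]).astype(int), 0, w - 1)
        v = np.clip(np.round(q[1] / q[2]).astype(int), 0, h - 1)
        cost = np.abs(ref_flat - src_img[v, u].astype(float))
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = d
    return best_depth.reshape(h, w)
```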

Galoppo, Nico (2008)

“Animation, Simulation, and Control of Soft Characters using Layered Representations and Simplified Physics-based Methods”
Under the direction of Ming Lin
Electronic copy available

Realistic behavior of computer-generated characters is key to bringing virtual environments, computer games, and other interactive applications to life. The plausibility of a virtual scene is strongly influenced by the way objects move around and interact with each other. Traditionally, actions are limited to motion-capture-driven or pre-scripted motion of the characters. Physics enhances the sense of realism: without it, virtual objects do not seem to behave as expected in real life. To make gaming and virtual environments truly immersive, it is crucial to simulate the response of characters to collisions and to produce secondary effects such as skin wrinkling and muscle bulging. Unfortunately, existing techniques cannot achieve these effects in real time, do not address the coupled response of a character’s skeleton and skin to collisions, nor do they support artistic control. In this dissertation, I present interactive algorithms that enable physical simulation of deformable characters with high surface detail and support for intuitive deformation control. I propose a novel unified framework for real-time modeling of soft objects with skeletal deformations and surface deformation due to contact, and their interplay for object surfaces with up to tens of thousands of degrees of freedom. I make use of layered models to reduce computational complexity. I introduce dynamic deformation textures, which map three-dimensional deformations in the deformable skin layer to a two-dimensional domain for extremely efficient parallel computation of the dynamic elasticity equations and optimized hierarchical collision detection. I also enhance layered models with responsive contact handling, to support the interplay between skeletal motion and surface contact and the resulting two-way coupling effects. Finally, I present dynamic morph targets, which enable intuitive control of dynamic skin deformations at runtime by simply sculpting pose-specific surface shapes. The resulting framework enables real-time and directable simulation of soft articulated characters with frictional contact response, capturing the interplay between skeletal dynamics and complex, non-linear skin deformations.

Gamblin, Todd (2009)

“Scalable Performance Measurement and Analysis”
Under the direction of Dan Reed
Electronic copy available

Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Four of the current top five machines contain 100,000 or more microprocessor cores, and the largest of these, IBM’s Blue Gene/L, contains over 200,000 cores; systems with millions of concurrent tasks are expected within the next three years. Tuning the performance of large-scale applications can be a subtle and time-consuming task because performance engineers must measure and make sense of data from many independent processes, and the volume of this data can quickly become unmanageable. In this dissertation, I focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. I present three techniques for reducing parallel performance data volume. The first draws on wavelet compression to compress system-wide, time-varying load balance data to manageable size. The second uses statistical sampling to measure a small subset of running processes and to generate low-volume, sampled traces. The third approach combines sampling and wavelet compression to adaptively stratify processes into performance equivalence classes at run-time, and to further reduce the cost of sampled tracing with stratified sampling. Finally, I have integrated these approaches into Libra, a toolset for scalable load-balance analysis. I describe Libra and show how it can be used to scalably visualize performance data from large-scale scientific applications.
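
For a sense of why sampling scales, the classical sample-size calculation that underlies this kind of measurement sampling is sketched below (a generic statistical formula with parameter names of my own choosing, not necessarily the exact estimator used in the dissertation): the number of processes that must be measured grows with the desired confidence and precision but is nearly independent of the total number of processes.

```python
from math import ceil

def sample_size(N, stddev, error_bound, z=1.96):
    """How many of N processes to measure so that the sampled mean lies within
    error_bound of the true mean at the confidence implied by z (1.96 ~ 95%),
    given an estimated standard deviation. Includes the finite-population
    correction. Illustrative sketch only.
    """
    n0 = (z * stddev / error_bound) ** 2   # infinite-population sample size
    return ceil(n0 * N / (n0 + N))         # finite-population correction

# Example: with 100,000 processes, stddev 0.1, and error bound 0.01,
# sample_size(100_000, 0.1, 0.01) is only a few hundred processes.
```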

Gao, Tianxiang (2017)

“Extracting Information from Deep Learning Models”
Under the direction of Vladimir Jojic

The advances of deep learning technologies in this decade are providing powerful tools for many machine learning tasks. Deep learning models, in contrast to traditional linear models, can learn nonlinear functions and high-order features, which enables exceptional performance. In the field of computational biology, the rapid growth of data scale and complexity increases the demand for powerful deep-learning-based tools. Despite the success of deep learning methods, understanding the reasons for their effectiveness and interpreting the resulting models remain elusive.

This dissertation aims to provide several different approaches to extract information from deep models. This information could be used to address the problems of model complexity and model interpretability.

The amount of data needed to train a model depends on the complexity of the model. The cost of generating data in biology is typically large. Hence, collecting data on a scale comparable to other deep learning application areas, such as computer vision and speech understanding, is prohibitively expensive, and datasets are, consequently, small. Training models of high complexity on small datasets can result in overfitting, in which the model over-explains the observed data and has low prediction accuracy on unobserved data. The number of parameters in the model is often regarded as the complexity of the model. However, deep learning models usually have thousands to millions of parameters, and they are still capable of yielding meaningful results and avoiding overfitting even on modest data sizes. To explain this phenomenon, I propose a method to estimate the degrees of freedom, a proper estimate of the complexity, in deep learning models. My results show that the actual complexity of a deep learning model is much smaller than its number of parameters. Using this measure of complexity, I propose a new model selection score which obviates the need for cross-validation.

Another concern for deep learning models is the ability to extract comprehensible knowledge from the model. In linear models, a coefficient corresponding to an input variable represents the influence of that variable on the prediction. However, in a deep neural network, the relationship between input and output is much more complex. In biological and medical applications, lack of interpretability prevents deep neural networks from being a source of new scientific knowledge. To address this problem, I provide 1) a framework to select hypotheses about perturbations that lead to the largest phenotypic change, and 2) a novel auto-encoder with guided-training that selects a representation of a biological system informative of a target phenotype. Computational biology application case studies were provided to illustrate the success of both methods.

Gao, Yaozong (2016)

“Accurate CT Pelvic Organ Segmentation in Image Guided Radiotherapy of Prostate Cancer”
Under the direction of Dinggang Shen

Accurate segmentation of male pelvic organs from computed tomography (CT) images is important in image guided radiotherapy (IGRT) of prostate cancer. The efficacy of radiation treatment highly depends on the segmentation accuracy of planning and treatment CT images. Clinically, manual delineation is still generally performed in most hospitals; however, it is time-consuming and suffers from large inter-operator variability due to the low tissue contrast of CT images. To reduce manual effort and improve the consistency of segmentation, it is desirable to develop an automatic method for rapid and accurate segmentation of pelvic organs from planning and treatment CT images.

This dissertation marries machine learning and medical image analysis for addressing two fundamental yet challenging segmentation problems in image guided radiotherapy of prostate cancer.

Planning CT Segmentation. Deformable models are popular methods for planning CT segmentation. However, they are well known to be sensitive to initialization and ineffective in segmenting organs with complex shapes. To address these limitations, this dissertation investigates a novel deformable model named regression-based deformable model (RDM). Instead of locally deforming the shape model, in RDM the deformation at each model point is explicitly estimated from local image appearance and used to guide deformable segmentation. As the estimated deformation can be long-distance and is spatially adaptive to each model point, RDM is insensitive to initialization and more flexible than conventional deformable models. These properties render it very suitable for CT pelvic organ segmentation, where initialization is difficult to get, and organs may have complex shapes.

Treatment CT Segmentation. Most existing methods have two limitations when they are applied to treatment CT segmentation. First, they have a limited accuracy because they overlook the availability of patient-specific data in the IGRT workflow. Second, they are time consuming and may take minutes or even longer for segmentation. To improve both accuracy and efficiency, this dissertation combines incremental learning with anatomical landmark detection for fast localization of the prostate in treatment CT images. Specifically, cascade classifiers are learned from a population to automatically detect several anatomical landmarks in the image. Based on these landmarks, the prostate is quickly localized by aligning and then fusing previous segmented prostate shapes of the same patient. To improve the performance of landmark detection, a novel learning scheme named “incremental learning with selective memory” is proposed to personalize the population-based cascade classifiers to the patient under treatment. Extensive experiments on a large dataset show that the proposed method achieves comparable accuracy to the state of the art methods while substantially reducing runtime from minutes to just 4 seconds.

Gauch, John M. (1989)

“The Multiresolution Intensity Axis of Symmetry and its Application to Image Segmentation”
Under the direction of Stephen M. Pizer
Department of Computer Science Technical Report TR89-047

A new shape description for grey-scale images called the intensity axis of symmetry (IAS) and an associated curvature-based description called vertex curves are presented. Both of these shape description methods focus on properties of the level curves of the image and combine this information across intensities to obtain representations which capture properties of both the spatial and intensity shape of an image. Methods to calculate and to display both image shape descriptions are described. To provide the necessary coherence across the spatial and intensity dimensions while computing the IAS, the boundary-based active contour method of Kass is extended to obtain a surface-based functional called the active surface. To illustrate the effectiveness of the IAS for image shape description, an interactive image segmentation program which identifies and displays image regions associated with individual components of the IAS is demonstrated. These regions often correspond to sensible anatomical structures in medical images. An analysis of the multiresolution behavior of the IAS reveals that it is possible to impose a quasi-hierarchy on IAS sheets by focusing on the multiresolution properties of much simpler geometric structures: vertex curves approximated by watershed boundaries.

Gauch, Susan E. (1990)

“An Expert System for Searching in Full-Text”
Under the direction of John B. Smith
Department of Computer Science Technical Report TR89-043
Electronic copy available

This dissertation explores techniques to improve full-text information retrieval by experienced computer users or novice users of retrieval systems. An expert system which automatically reformulates Boolean user queries to improve search results is presented. The expert system differs from other intelligent database functions in two ways: it works with semantically and syntactically unprocessed text; and the expert system contains a knowledge base of domain independent search strategies. The passages retrieved are presented to the user in decreasing order of estimated relevancy. This combination of user interface features provides powerful, yet simple, access to full-text documents. Experimental results demonstrate that the expert system can improve the search efficiency of novice searchers without decreasing their search effectiveness. Further, an evaluation of the ranking algorithm confirms that, in general, the system presents potentially relevant passages to the user before irrelevant passages.

Gayle, T. Russell (2010)

“Physics-based Sampling for Motion Planning”
Under the direction of Dinesh Manocha and Ming Lin
Electronic copy available

Motion planning is a fundamental problem with applications in a wide variety of areas including robotics, computer graphics, animation, virtual prototyping, medical simulations, industrial simulations, and traffic planning. Despite being an active area of research for nearly four decades, prior motion planning algorithms are unable to provide adequate solutions that satisfy the various constraints that arise in these applications. We present a novel approach based on physics-based sampling for motion planning that can compute collision-free paths while also satisfying many physical constraints. Our planning algorithms use constrained simulation to generate samples which are biased in the direction of the final goal positions of the agent or agents. The underlying simulation core implicitly incorporates kinematics and dynamics of the robot or agent as constraints or as part of the motion model itself. Thus, the resulting motion is smooth and physically-plausible for both single robot and multi-robot planning. We apply our approach to planning of deformable soft-body agents via the use of graphics hardware accelerated interference queries. We highlight the approach with a case study on pre-operative planning for liver chemoembolization. Next, we apply it to the case of highly articulated serial chains. Through dynamic dimensionality reduction and optimized collision response, we can successfully plan the motion of “snake-like” robots in a practical amount of time despite the high number of degrees of freedom in the problem. Finally, we show the use of the approach for a large number of bodies in dynamic environments. By applying our approach to both global and local interactions between agents, we can successfully plan for thousands of simple robots in real-world scenarios. In particular, we demonstrate their application to large crowd simulations.

Gill, Gennette D. (2010)

“Analysis and Optimization for Pipelined Asynchronous Systems”
Under the direction of Montek Singh
Electronic copy available

Most microelectronic chips used today, in systems ranging from cell phones to desktop computers to supercomputers, operate in basically the same way: they synchronize the operation of their millions of internal components using a clock that is distributed globally. This global clocking is becoming a critical design challenge in the quest for building chips that offer increasingly greater functionality, higher speed, and better energy efficiency. As an alternative, asynchronous or “clockless” design obviates the need for global synchronization; instead, components operate concurrently and synchronize locally only when necessary. This dissertation focuses on one class of asynchronous circuits: application-specific dataflow systems (i.e., those that take in a stream of data items and produce a stream of processed results). High-speed stream processors are a natural match for many high-end applications, including 3D graphics rendering, image and video processing, digital filters and DSPs, cryptography, and networking processors. This dissertation aims to make the design, analysis, optimization, and testing of circuits in the chosen domain both fast and efficient. Although much of the groundwork has already been laid by years of past work, my work identifies and addresses four critical missing pieces: i) fast performance analysis for estimating the throughput of a fine-grained pipelined system; ii) automated and versatile design space exploration; iii) a full suite of circuit level modules that connect together to implement a wide variety of system behaviors; and iv) testing and design for testability techniques that identify and target the types of errors found only in high-speed pipelined asynchronous systems. I demonstrate these techniques on a number of examples, ranging from simple applications that allow for easy comparison to hand-designed alternatives to more complex systems, such as a JPEG encoder. I also demonstrate these techniques through the design and test of a fully asynchronous GCD demonstration chip.

Glassner, Andrew S. (1988)

“Algorithms for Efficient Image Synthesis”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR90-031
Electronic copy available

This dissertation embodies six individual papers, each directed towards the efficient synthesis of realistic, three-dimensional images and animations. The papers form four major categories: ray tracing, animation, texture mapping, and fast iterative rendering. The ray tracing papers present algorithms for efficiently rendering static and animated scenes. I show that it is possible to make use of coherence in both object space and time to quickly find the first intersected object on a ray’s path. The result is shorter rendering times with no loss of quality. The first animation paper considers the needs of a modern animation system and suggests a particular object-oriented architecture. The other animation paper presents an efficient and numerically stable technique for transforming an arbitrary modeling matrix into a fixed sequence of parametric transformations which yield the same matrix when composed. The result is that hierarchical, articulated models may be described by the human modeler or animator with any convenient sequence of transformations at each node, and the animation system will still be able to perform parametrically smooth motion interpolation. The fast rendering paper describes a system built to allow quick modification of object surface description and lighting. I use a space/time tradeoff to capitalize on the constant geometry in a scene undergoing adjustment. The result is a system that allows relatively fast, iterative modification of the appearance of an image. The texture mapping paper offers a refinement to the sum table technique. I show that the fixed, rectangular filter footprint used by sum tables can lead to oversampling artifacts. I offer a method which detects when oversampling is likely to occur, and another method for iteratively refining the texture estimate until it satisfies an error bound based on the oversampled area. Together, these six papers present a collection of algorithms designed to enhance synthetic images and animations and reduce the time required for their creation.

Goddard, Stephen (1998)

“On the Management of Latency in the Synthesis of Real-Time Signal Processing Systems from Processing Graphs”
Under the direction of Kevin Jeffay
Department of Computer Science Technical Report TR98-027
Electronic copy available

Complex digital signal processing systems are commonly developed using directed graphs called processing graphs. Processing graphs are large grain dataflow graphs in which nodes represent processing functions and graph edges depict the flow of data from one node to the next. When sufficient data arrives, a node executes its function from start to finish without synchronization with other nodes, and appends data to the edge connecting it to a consumer node. We combine software engineering techniques with real-time scheduling theory to solve the problem of transforming a processing graph into a predictable real-time system in which latency can be managed. For signal processing graphs, real-time execution means processing signal samples as they arrive without losing data. Latency is defined as the time between when a sample of sensor data is produced and when the graph outputs the processed signal. We study a processing graph method, called PGM, developed by the U.S. Navy for embedded signal processing applications. We present formulae for computing node execution rates, techniques for mapping nodes to tasks in the rate-based-execution (RBE) task model, and conditions to verify the schedulability of the resulting task set under a rate-based, earliest-deadline-first scheduling algorithm. Furthermore, we prove upper and lower bounds for the total latency any sample will encounter in the system. We show that there are two sources of latency in real-time systems created from processing graphs: inherent and imposed latency. Inherent latency is the latency defined by the dataflow attributes and topology of the processing graph. Imposed latency is the latency imposed by the scheduling and execution of nodes in the graph. We demonstrate our synthesis method, and the management of latency using three applications from the literature and industry: a synthetic aperture radar application, an INMARSAT mobile satellite receiver application, and an acoustic signal processing application from the ALFS anti-submarine warfare system. This research is the first to model the execution of processing graphs with the real-time RBE model, and appears to be the first to identify and quantify inherent latency in processing graphs.

Gong, Yunchao (2014)

“Large-Scale Image Retrieval Using Similarity Preserving Binary Codes”
Under the direction of Svetlana Lazebnik

Image retrieval is a fundamental problem in computer vision and has many applications. As dataset sizes grow, retrieving images from very large-scale Internet image collections becomes very challenging. The challenges come from storage, computation speed, and similarity representation. My thesis addresses learning compact, similarity-preserving binary codes, which represent each image by a short binary string, for fast retrieval in large image databases.

I will first present an approach called Iterative Quantization to convert high dimensional vectors to compact binary codes, which works by learning a rotation to minimize the quantization error of mapping data to the vertices of a binary Hamming cube. This approach achieves state-of-the-art accuracy for preserving neighbors in the original feature space, as well as semantic precision. Second, I will extend the approach to two different scenarios in large-scale recognition and retrieval problems. The first extension is aimed at high-dimensional histogram data, such as bag-of-words features or text documents. Such vectors are typically sparse, and nonnegative. I develop an algorithm that explores the special structure of such data by specifically mapping data to binary vertices in the positive orthant, which gives improved performance. The second extension is for Fisher Vectors, which are state-of-the-art descriptors. I develop a novel method for converting such descriptors to compact similarity-preserving binary codes that exploits their natural matrix structure to reduce their dimensionality using compact bilinear projections instead of a single large projection matrix. This method achieves retrieval and classification accuracy comparable to the original descriptors and to the state-of-the-art Product Quantization approach while having orders of magnitude faster code generation time and smaller memory footprint. Finally, I present two applications of using Internet images and tags/labels to learn semantic binary codes, and show improved retrieval accuracy on several large Internet image datasets.

By showing that binary coding algorithms can be effectively combined with semantic embeddings, I address the problem of how to learn good representations for large-scale image retrieval.
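
The core alternating update behind Iterative Quantization is compact enough to sketch. The following is a minimal NumPy illustration under stated assumptions (features already zero-centered and reduced, e.g. by PCA, to the code length); it is a sketch of the idea, not the dissertation's implementation:

```python
import numpy as np

def itq(X, n_iter=50, seed=0):
    """Minimal sketch of Iterative Quantization: learn a rotation R that
    minimizes the quantization error of mapping rows of X (assumed
    zero-centered and reduced to c dimensions) to the vertices of the
    binary hypercube {-1, +1}^c."""
    rng = np.random.default_rng(seed)
    c = X.shape[1]
    # Start from a random orthogonal rotation.
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))
    for _ in range(n_iter):
        V = X @ R
        B = np.sign(V)                      # fix R, snap to nearest hypercube vertex
        U, _, Vt = np.linalg.svd(B.T @ X)   # fix B, re-solve rotation (orthogonal Procrustes)
        R = (U @ Vt).T
    return np.sign(X @ R) > 0, R            # boolean codes and learned rotation
```

Each iteration alternates between fixing the rotation to update the binary codes and fixing the codes to re-solve for the best orthogonal rotation.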

Gottschalk, Stefan (2000)

“Collision Queries Using Oriented Bounding Boxes”
Under the direction of Ming C. Lin and Dinesh Manocha
Electronic copy available

Bounding volume hierarchies (BVHs) have been used widely in collision detection algorithms. The most commonly used bounding volume types are axis-aligned bounding boxes (AABBs) and spheres, which have simple representations, compact storage, and are easy to implement. This dissertation explores the use of oriented bounding boxes (OBBs), which may be aligned with the underlying geometry to fit more tightly. We believe that OBBs historically have been used less often because previously known methods for testing OBBs for overlap were relatively expensive, good methods for automatic construction of trees of OBBs were not known, and the benefits of using OBBs were not well understood. In this dissertation we present methods for building good trees of OBBs and demonstrate their use in efficient collision detection. In particular, we present a new OBB overlap test which is more efficient than previously known methods. We also examine some of the trade-offs of using OBBs by analyzing benchmark results comparing the performance of OBBs, AABBs and spheres, and we show that OBBs can significantly outperform the latter two bounding volumes for important classes of inputs. We also present two new tools, the bounding volume test tree (BVTT) and the contact pair matrix (CPM), for analyzing collision queries. Finally, we describe the design and implementation of a software system that permits the fair comparison of algorithmic variations on BVH-based collision detection.
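
For readers unfamiliar with box-box tests, a separating-axis check over 15 candidate axes (the 6 face normals plus 9 edge cross products) is the standard way to decide OBB overlap. The NumPy sketch below illustrates only that principle; it omits the optimizations that make the dissertation's test fast:

```python
import numpy as np

def obbs_overlap(c1, A1, e1, c2, A2, e2, eps=1e-9):
    """Separating-axis overlap test for two OBBs (illustrative sketch).
    Each box is given by its center c, a 3x3 matrix A whose columns are
    unit axis vectors, and half-extents e along those axes."""
    t = c2 - c1
    axes = [A1[:, i] for i in range(3)] + [A2[:, j] for j in range(3)]
    axes += [np.cross(A1[:, i], A2[:, j]) for i in range(3) for j in range(3)]
    for L in axes:
        n = np.linalg.norm(L)
        if n < eps:            # degenerate cross product (parallel edges)
            continue
        L = L / n
        r1 = sum(e1[i] * abs(np.dot(L, A1[:, i])) for i in range(3))
        r2 = sum(e2[j] * abs(np.dot(L, A2[:, j])) for j in range(3))
        if abs(np.dot(L, t)) > r1 + r2:
            return False       # L separates the boxes
    return True                # no separating axis found, so the boxes overlap
```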

Gotz, David (2005)

“Channel Set Adaptation: Scalable and Adaptive Streaming for Non-Linear Media”
Under the direction of Ketan Mayer-Patel
Electronic copy available

Streaming of linear media objects, such as audio and video, has become ubiquitous on today’s Internet. Large groups of users regularly tune in to a wide variety of online programming, including radio shows, sports events, and news coverage. However, non-linear media objects, such as large 3D computer graphics models and visualization databases, have proven more difficult to stream due to their interactive nature. In this dissertation, I present a complete framework for the efficient streaming of non-linear datasets to large user groups. The framework has several components. First, I present the Representation Graph, an abstract data representation for expressing the semantic and syntactic relationships between elements of information in a non-linear multimedia database. I then present a computational model for achieving multidimensional adaptation. The model is based on a spatially defined utility metric that allows applications to mathematically express trade-offs between different dimensions of adaptivity. The representation graph and adaptation model serve as the foundation for the Generic Adaptation Library (GAL). GAL defines a layered design for non-linear media applications and provides an implementation for the middle adaptation layer. The representation graph, adaptation model, and GAL can be combined to support adaptive non-linear streaming. I evaluate the performance of an experimental prototype based on these principles and show that they can effectively support the adaptive requirements of dynamic and interactive access to non-linear media. I also define Channel Set Adaptation (CSA), an architecture for scalable delivery of non-linear media. CSA maps concepts from the representation graph to a scalable subscription-based network model. CSA provides, in the ideal case, an infinitely scalable streaming solution for non-linear media applications. I include an evaluation of CSA’s performance on both multicast and broadcast networks. In addition, I develop a performance model based on results from the experimental evaluation. The performance model highlights several key properties of the underlying communication model that are most important to CSA performance.

Govindaraju, Naga (2004)

“Efficient Visibility-Based Algorithms for Interactive Walkthrough, Shadow Generation, and Collision Detection”
Under the direction of Dinesh Manocha
Electronic copy available

We present novel visibility-based algorithms for solving three problems: interactive display, shadow generation, and collision detection. We use the visibility capabilities available on graphics processing units (GPUs) in order to perform visibility computations, and we apply the results to complex 3-D environments. We present a real-time visibility culling algorithm that reduces the number of rendered primitives by culling portions of a complex 3-D environment that are not visible to the user. We introduce an occlusion switch to perform visibility computations. Each occlusion switch consists of two GPUs, and we use these GPUs either for computing an occlusion representation or for culling away primitives from a given viewpoint. Moreover, we switch the roles of each GPU across successive frames. The visible primitives are rendered on a separate GPU or are used for generating hard-edged umbral shadows from a moving light source. We use our visibility culling algorithm for computing the potential shadow receivers and shadow casters from the eye and light sources, respectively. We further reduce their size using a novel cross-culling algorithm. We present a novel visibility-based algorithm for reducing the number of pair-wise interference tests among multiple deformable and breaking objects in a large environment. Our collision detection algorithm computes a potentially colliding set (PCS) using image-space occlusion queries. Our algorithm involves no pre-computation and proceeds in multiple stages: PCS computation at an object level and PCS computation at a subobject level, followed by exact collision detection. We use a linear-time two-pass rendering algorithm for computing each PCS efficiently. Our collision-pruning algorithm can also compute self-collisions in general deformable models. Further, we overcome the sampling and precision problems in our pruning algorithm by fattening the triangles sufficiently in the PCS. We show that the Minkowski sum of each primitive with a sphere provides a conservative bound for performing reliable 2.5-D overlap tests using GPUs. In contrast to prior GPU-based collision detection algorithms, our algorithm guarantees that no collisions will be missed due to limited frame-buffer precision or quantization errors during rasterization. We have implemented our visibility-based algorithms on PCs with a commodity graphics processor such as the NVIDIA GeForce FX 5900. We highlight their performance on complex 3-D environments composed of millions of polygons. In practice, we are able to achieve interactive frame rates of 10-20 frames per second in these experiments.

Grant, Eric D. (1991)

“Constraint-Based Design by Cost Function Optimization”
Under the direction of Turner Whitted
Electronic copy available

Constraint-based design is the process of selecting among alternatives to best satisfy a set of potentially conflicting goals. A key problem in constraint-based design is finding globally optimal solutions to problems without limiting the complexity of constraints. In this work, constraints are encoded as cost functions that express how well the constraints are satisfied. A geometric modeling problem is defined by specifying a collection of constraints on the desired model. A composite cost function representing all constraints is formed by combining the component cost functions. The optimal solution to a constraint problem can be found by minimizing the value of the composite cost function. A standard probabilistic optimization technique, simulated annealing, is used to seek the global minimum value and the corresponding optimal solution. In practice, global optima cannot be guaranteed, but often near-globally optimal results are satisfactory. The cost function representation for design problems is not new; VLSI researchers have used annealing-based optimization methods to minimize chip area and wire length. The cost functions for general constraint-based modeling problems are not as well defined as the simple VLSI cost functions. A contribution of this research is a systematic method of encoding different classes of constraints as cost functions. The validity of this approach is demonstrated by applying the methodology to two problems: product design (specifically, opaque projector design), and site planning.
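
The search procedure the abstract describes is standard simulated annealing over a composite cost function. The sketch below shows only that generic loop; `cost` and `neighbor` are hypothetical problem-specific callables, not the dissertation's constraint encodings:

```python
import math
import random

def anneal(state, cost, neighbor, t0=1.0, cooling=0.995, steps=20000, seed=0):
    """Generic simulated-annealing loop: `cost` scores how well a candidate
    design satisfies the constraints, `neighbor` proposes a small change.
    Returns the best state found and its cost."""
    rng = random.Random(seed)
    best = current = state
    best_c = current_c = cost(state)
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        cand_c = cost(cand)
        # Always accept downhill moves; accept uphill moves with Boltzmann probability.
        if cand_c <= current_c or rng.random() < math.exp((current_c - cand_c) / max(t, 1e-12)):
            current, current_c = cand, cand_c
            if current_c < best_c:
                best, best_c = current, current_c
        t *= cooling   # geometric cooling schedule
    return best, best_c
```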

Gross, Richard R. (1985)

“Using Software Technology to Specify Abstract Interfaces in VLSI Design”
Under the direction of Peter Calingaert

Good techniques for VLSI design change management do not now exist. A principal reason is the absence of effective methods for the specification of abstract interfaces for VLSI designs. This dissertation presents a new approach to such specification, an approach that extends D.L. Parnas’s techniques for precise specification of abstract software design interfaces to the VLSI domain. The proposed approach provides important new features, including scalability to VLSI design levels, integration into the design life-cycle, and a uniform treatment of functional, electrical, and geometric design information. A technique is also introduced for attaching a value to the degree of specification adherence of a candidate module. To illustrate and evaluate the proposed approach, an example specification method using it is described and evaluated experimentally.

Guan, Li (2010)

“Multi-view Dynamic Scene Modeling”
Under the direction of Mark Pollefeys
Electronic copy available

Modeling dynamic scenes/events from multiple fixed-location vision sensors, such as video camcorders, infrared cameras, and Time-of-Flight sensors, is of broad interest in the computer vision community, with many applications including 3D TV, virtual reality, medical surgery, marker-less motion capture, video games, and security surveillance. However, most existing multi-view systems are set up in strictly controlled indoor environments, with fixed lighting conditions and simple background views. Many challenges prevent bringing the technology to outdoor natural environments, including varying sunlight, shadows, reflections, background motion, and visual occlusion. In this thesis, I address different aspects of overcoming the aforementioned difficulties, so as to reduce human preparation and manipulation and to make a robust system as automatic as possible. In particular, the main novel technical contributions of this thesis are as follows: a generic heterogeneous sensor fusion framework for robust 3D shape estimation; a way to automatically recover 3D shapes of static occluders from dynamic object silhouette cues, which explicitly models the static “visual occluding event” along the viewing rays; a system to model multiple dynamic objects’ shapes and track their identities simultaneously, which explicitly models the “inter-occluding event” between dynamic objects; and a scheme to recover an object’s dense 3D motion flow over time, without assuming any prior knowledge of the underlying structure of the dynamic object being modeled, which helps to enforce temporal consistency of natural motions and initializes more advanced shape learning and motion analysis. A unified automatic calibration algorithm for the heterogeneous network of conventional cameras/camcorders and new Time-of-Flight sensors is also proposed in the thesis.

Guan, Sheng-Uei (1989)

“A Model, Architecture, and Operating System Support for Shared Workspace Cooperation”
Under the direction of Hussein Abdel-Wahab
Department of Computer Science Technical Report TR89-029
Electronic copy available

As more and more special-purpose real-time user cooperation tools are being built, the impact of these new applications on operating systems is just emerging. Instead of directly implementing an operating system for such applications, we try to identify user requirements for such cooperation and the support that designers of such tools seek. With this investigation, the desirable requirements and support can then be achieved through different approaches, e.g., operating system kernels, on-line libraries, or user module libraries. Rather than tackle the problem in its full generality, we focus on real-time cooperation with shared workspaces, which is the core of most real-time cooperation. This dissertation describes a shared workspaces model. Subtleties of supporting this shared workspace have been studied. A Remote Shared Workspaces facility has been built as a research vehicle. Desirable features for shared workspace cooperation are investigated; the relevance of operating systems supporting them is also discussed. An architecture supporting dynamic group formation and activities is presented. It supports remote cooperation and also cooperation with existing single-user applications. Operating system mechanisms are also described to support multi-user tool development and sharing of user privileges in a session, namely: multi-user processes and shared capabilities-lists. A system-call level programmers’ interface has been specified. We also introduce jointly-owned objects, found in real life and the computer world. The use of multi-user tools makes the existence of jointly-owned objects a necessity: a participant who joins a multi-user tool written by others knows that the user agent executed in his name is not a Trojan horse if the multi-user tool is jointly owned by all the participants. The concept of “jointly-owned” is generalized to “conditionally jointly-owned”, which helps resolve conflicts among joint-owners. Graham and Denning’s protection model is extended to incorporate these conditionally jointly owned entities. Authority- and quorum-based objects are investigated as instances of conditionally jointly-owned objects.

Guo, Zhishan (2016)

“Real-Time Scheduling of Mixed-Critical Workloads upon Platforms with Uncertainties”
Under the direction of Sanjoy Baruah
Electronic copy available

In designing safety-critical real-time systems, there is an emerging trend in moving towards mixed-criticality (MC), where functionalities with different degrees of importance (i.e., criticality) are implemented upon a shared platform. Since 2007, there has been a large amount of research in MC scheduling, most of which considers the Vestal model. In this model, all uncertainties in the system are folded into the workload by assuming multiple worst-case execution time (WCET) estimates for each execution (of a piece of code).

However, uncertainty may arise from aspects other than WCET estimation, especially on widely used commercial-off-the-shelf (COTS) hardware that typically provides good average-case performance rather than worst-case guarantees. This dissertation addresses two questions fundamental to the modeling and analysis of such MC real-time systems: (i) Can the Vestal model describe all kinds of uncertainties with no significant analytical capacity loss? (ii) If not, can new mechanisms be developed that perform better than existing ones (in MC scheduling theory) under certain assumptions?

To answer these questions, we first investigate the Vestal model carefully. We propose a new algorithm (named LE-EDF) that dominates state-of-the-art schedulers for MC job scheduling. We also improve the understanding of certain existing algorithms by proving a better (and even optimal) speedup bound. Our experiments show that introducing a probabilistic WCET workload model into MC scheduling characterizes the uncertain behaviors better than the Vestal model in terms of schedulability ratio.

We then present a new MC system model to describe the uncertainties arising from the platform’s performance. We show that under this model, where uncertainties in execution speed are captured separately, better schedulability results can be achieved than with the Vestal model. We propose a linear programming (LP) based algorithm for scheduling MC job sets on uniprocessor platforms, and show its optimality (i.e., zero analytical capacity loss) in the sense that it dominates any existing MC scheduler. Under the fluid (processor-sharing) scheme, we further show that the optimality result is retained even when the work is extended to multiprocessor scheduling and MC task scheduling.

This thesis further addresses the two questions by studying cases where uncertainties arise from more than one aspect, integrating both dimensions of uncertainty (i.e., WCET estimation and system performance) within a single framework and designing scheduling algorithms with associated schedulability tests. The proposed LE-EDF algorithm is shown to apply directly to MC job scheduling. For MC task scheduling, we adapt an existing algorithm named EDF-VD and show that it has the same worst-case analytical capacity loss; i.e., the framework generalization is available “for free,” at least from the perspective of the speedup factor.

In many cases, experimental studies on randomly generated workloads are conducted to verify and quantify the theoretically proven domination relationships, for both uniprocessor and multiprocessor scenarios.
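
For context, the EDF-VD preprocessing step mentioned above is usually stated as a utilization-based deadline-scaling test. The sketch below follows the commonly published dual-criticality formulation from the MC-scheduling literature; it is not the dissertation's generalized algorithm, and the example task set is illustrative only:

```python
def edf_vd_scaling(tasks):
    """Dual-criticality EDF-VD preprocessing for implicit-deadline tasks.
    Each task is a dict: {'T': period, 'C_LO': ..., 'C_HI': ..., 'crit': 'LO'|'HI'}.
    Returns the deadline-scaling factor x for HI tasks, or None if the
    sufficient schedulability test fails."""
    u_lo_lo = sum(t['C_LO'] / t['T'] for t in tasks if t['crit'] == 'LO')
    u_hi_lo = sum(t['C_LO'] / t['T'] for t in tasks if t['crit'] == 'HI')
    u_hi_hi = sum(t['C_HI'] / t['T'] for t in tasks if t['crit'] == 'HI')
    if u_lo_lo >= 1.0:
        return None
    x = u_hi_lo / (1.0 - u_lo_lo)          # candidate deadline-scaling factor
    if x * u_lo_lo + u_hi_hi <= 1.0:       # sufficient schedulability test
        return x                           # HI tasks run with virtual deadline x * T
    return None

# Example (hypothetical task set): two tasks with utilizations 0.2 (LO) and
# 0.3/0.6 (HI at LO/HI level) pass the test with x = 0.375.
```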

Gupta, Gopal (1992)

“Parallel Execution of Logic Programs on Shared Memory Multiprocessors”
Under the direction of Bharadwaj Jayaraman

This dissertation addresses the problem of compiling parallelism present in logic languages for efficient execution on shared memory multiprocessors. We present an extended and-or tree and an extended WAM (Warren Abstract Machine) for efficiently supporting both and-parallel and or-parallel execution of logic programs on shared-memory multiprocessors. Our approach for exploiting both and- and or-parallelism is based on the binding-arrays method for or-parallelism and the RAP (Restricted And-Parallelism) method for and-parallelism, the two most successful methods for implementing or-parallelism and and-parallelism, respectively. Our combined and-or model avoids redundant computations when goals exhibit both and- and or-parallelism by representing the cross-product of the solutions from the and-or parallel goals rather than re-computing them. We extend the classical and-or tree with two new nodes: a ‘sequential’ node (for RAP’s sequential goals), and a ‘cross-product’ node (for the cross-product of solutions from and-or parallel goals). An extension of the WAM, called AO-WAM, is also presented which is used to compile logic programs for and-or parallel execution based on the extended and-or tree. The AO-WAM incorporates a number of novel features: (i) inclusion of a base array with each processor’s binding-array for constant-time access to variables in the presence of and-parallelism; (ii) inclusion of new stack frames and instructions to express solution sharing; (iii) novel optimizations which minimize the cost of binding-array updates in the presence of and-parallelism. Techniques are also presented for incorporating side-effects and extra-logical features in the and-or parallel model, so that “real world” logic programs can be run. The and-or model is further enhanced to incorporate stream-parallelism based on the Andorra Principle. Problems that can arise due to interaction of restricted and-parallelism with stream-parallelism are described and solutions presented. This research has been supported by grant DCR-8603609 from the National Science Foundation and partially by U.K. Science and Engineering Research Council grant GR/F 27420.

Guy, Stephen J. (2012)

“Geometric Collision Avoidance for Heterogeneous Crowd Simulation”
Under the direction of Ming Lin and Dinesh Manocha

Simulation of human crowds can create plausible human trajectories, predict likely flows of pedestrians, and has application in areas such as games, movies, safety planning, and virtual environments. This dissertation presents new crowd simulation methods based on geometric techniques. I will show how geometric optimization techniques can be used to efficiently compute collision-avoidance constraints, and use these constraints to generate human-like trajectories in simulated environments. This process of reacting to the nearby environment is known as local navigation and it forms the basis for many crowd simulation techniques, including those described in this dissertation.

Given the importance of local navigation computations, I devote much of this dissertation to the derivation, analysis, and implementation of new local navigation techniques. I discuss how to efficiently exploit parallelization features available on modern processors, and show how an efficient parallel implementation allows simulations of hundreds of thousands of agents in real time on many-core processors and tens of thousands of agents on multi-core CPUs. I analyze the macroscopic flows which arise from these geometric collision avoidance techniques and compare them to flows seen in real human crowds, both qualitatively (in terms of flow patterns) and quantitatively (in terms of flow rates).

Building on the basis of these strong local navigation models, I further develop many important extensions to the simulation framework. Firstly, I develop a model for global navigation which allows for more complex scenarios by accounting for long-term planning around large obstacles or emergent congestion. Secondly, I demonstrate methods for using data-driven approaches to improve crowd simulations. These include using real-world data to automatically tune parameters, and using perceptual user study data to introduce behavioral variation.

Finally, looking beyond geometric avoidance based crowd simulation methods, I discuss methods for objectively evaluating different crowd simulation strategies using statistical measures. Specifically, I focus on the problem of quantifying how closely a simulation approach matches real-world data. I propose a similarity metric that can be applied to a wide variety of simulation approaches and datasets.

Taken together, the methods presented in this dissertation enable simulations of large, complex human crowds with a level of realism and efficiency not previously possible.
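
As a rough illustration of geometric local navigation of this kind, the following sampling-based sketch scores candidate velocities by their deviation from a preferred velocity and by the predicted time to the earliest collision. It is a simplification in the spirit of velocity-obstacle methods, not the dissertation's exact algorithm; all vectors are 2D NumPy arrays and the sampling range is arbitrary:

```python
import numpy as np

def time_to_collision(p, v, p_other, v_other, r_sum):
    """Earliest t >= 0 at which two discs of combined radius r_sum collide,
    or inf if they never do."""
    w = p_other - p
    dv = v - v_other
    a = np.dot(dv, dv)
    b = -np.dot(w, dv)
    c = np.dot(w, w) - r_sum ** 2
    if c <= 0:                      # already overlapping
        return 0.0
    disc = b * b - a * c
    if a < 1e-12 or disc < 0:
        return float('inf')
    t = (-b - np.sqrt(disc)) / a
    return t if t >= 0 else float('inf')

def choose_velocity(p, v_pref, neighbors, radius, horizon=3.0, samples=120, seed=0):
    """Pick a velocity close to v_pref that avoids predicted collisions with
    neighbors, given as (position, velocity) pairs of equal-radius agents."""
    rng = np.random.default_rng(seed)
    cands = [v_pref] + list(rng.uniform(-1.5, 1.5, size=(samples, 2)))
    best, best_cost = v_pref, float('inf')
    for cand in cands:
        ttc = min((time_to_collision(p, cand, q, u, 2 * radius)
                   for q, u in neighbors), default=float('inf'))
        penalty = 0.0 if ttc > horizon else (horizon - ttc) / max(ttc, 1e-3)
        cost = np.linalg.norm(cand - v_pref) + penalty
        if cost < best_cost:
            best, best_cost = cand, cost
    return best
```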

Gyllstrom, Karl A. (2009)

“Enriching personal information management with document interaction histories”
Under the direction of David Stotts
Electronic copy available

Personal information management is increasingly challenging, as more and more of our personal and professional activities migrate to personal computers. Manual organization and search remain the two primary options available to users, and both bear significant limitations; the former requires too much effort on the part of the user, while the latter is dependent on users’ ability to recall discriminating information. I pursue an alternative approach, where users’ computer interaction with their workspace is recorded, algorithms draw inferences from this interaction, and these inferences are applied to improve information management and retrieval for users. This approach requires no effort from users and enables retrieval to be more personalized, natural, and intuitive. This approach is adopted in my design of the Passages system, which enhances information management by maintaining a detailed chronicle of all the text the user ever reads or edits, and making this chronicle available for rich temporal queries about the user’s information workspace. Passages enables queries like “which papers and web pages did I read when writing the `related work’ section of this paper?” and “which of the emails in this folder have I skimmed, but not yet read in detail?” As time and interaction history are important attributes in users’ recollection of their personal information, effectively supporting them creates useful possibilities for information retrieval. I present methods to collect and make sense of the large volume of text with which the user interacts. I show through user evaluation the accuracy of Passages in building interaction history, and illustrate its capacity to both improve existing retrieval systems and enable novel ways to characterize document activity across time.

H

Han, Qiong (2007)

“Proper Shape Representation of Single- and Multi-Figure Anatomical Objects”
Under the direction of Stephen Pizer
Electronic copy available

Extracting anatomic objects from medical images is useful in various medical applications. This extraction, called image segmentation, is often realized through deformable models. Among deformable model methods, medial deformable models have the unique advantage of representing not only the object boundary surfaces but also the object interior volume. This dissertation addresses several remaining challenges in an existing medially based deformable model method: 1. how to interpolate a proper continuous form from a discrete medial shape representation; 2. how to represent complex objects with multiple parts and do statistical analysis on them; 3. how to avoid local shape defects, such as folding or creasing, in shapes represented by the deformable models. The proposed methods in this dissertation address these challenges in more detail: 1. The interpolation method is based on the integration of a medial shape operator and thereby guarantees the legality of the interpolated shapes. 2. A medially based representation with hierarchy is proposed to represent complex objects with multiple parts by explicitly modeling the interrelations between object parts and the smooth transitions between each pair of connected parts. Based on such a representation, a hierarchical statistical analysis is proposed for such complex objects. 3. The method to fit a medial model to binary images uses explicit legality constraints derived from the medial shape operator. Using probability distributions learned from these fits yields better image segmentation results. These proposed methods have proven to either improve the existing methods or add new capability to the existing medially based deformable model method.

Han, Taisook (1990)

“A New Class of Recursive Routing Algorithms on Mesh-connected Computers”
Under the direction of Donald F. Stanat
Department of Computer Science Technical Report TR90-044
Electronic copy available

We describe a class of deterministic routing algorithms called “move and smooth” algorithms for one-to-one and one-to-many message routing problems on meshes. Initially, each processor contains at most one message, and each message has one or more designated destinations. No processor is the destination of more than one message. Move and smooth algorithms are recursive. Initially, the entire mesh is considered a single region. At each recursive stage,

  • Each region is partitioned into contiguous subregions;
  • A copy of each message is moved to each of the regions that contains one of its destinations (the move phase);
  • Messages within each region are re-distributed so that each processor contains at most one message (the smooth phase).

The recursion continues until each region contains a single row or column of processors, at which time each message has arrived at or can be moved directly to its destination. We examine two representative move and smooth algorithms in detail. On a square n by n mesh, one of the algorithms requires 5.5n parallel communication steps and five buffers per processor; the other requires 9n parallel communication steps and three buffers per processor. We show that under appropriate assumptions, these costs are not changed for one-to-many routing problems. The number of buffers is independent of the size of the mesh. The costs of our move and smooth algorithms are higher than those of some one-to-one routing algorithms, but move and smooth algorithms are suited for a wider variety of problems including one-to-many problems. These algorithms do not rely on sorting. We describe other move and smooth algorithms, and generalizations to higher dimensions.

Han, Xufeng (2016)

“Learning with More Data and Better Models for Visual Similarity and Differentiation”
Under the direction of Alex Berg

This thesis studies machine learning problems involved in visual recognition on a variety of computer vision tasks. It attacks the challenge of scaling up learning: efficiently handling more training data in object recognition, handling more noise in brain activation patterns, and learning more capable visual similarity models.

For learning similarity models, one challenge is to capture from data the subtle correlations that preserve the notion of similarity relevant to the task. Most previous work focused on improving feature learning and metric learning separately. Instead, we propose a unified deep-learning modeling framework that jointly optimizes the two through back-propagation. We model the feature mapping using a convolutional neural network and the metric function using a multi-layer fully-connected network. Enabled by large datasets and a sampler to handle the intrinsic imbalance between positive and negative samples, we are able to learn such models efficiently. We apply this approach to patch-based image matching and cross-domain clothing-item matching.

For analyzing activation patterns in images acquired using functional Magnetic Resonance Imaging (fMRI), a technology widely used in neuroscience to study the human brain, the challenges are the small number of examples and the high level of noise. The common ways of increasing the signal-to-noise ratio include adding more repetitions, averaging trials, and analyzing statistical maps computed from a general linear model. In collaboration with neuroscientists, we developed a machine learning approach that allows us to analyze individual trials directly. This approach uses multi-voxel patterns over regions of interest as the feature representation, and helps discover effects that previous analyses missed.

For multi-class object recognition, one challenge is learning a non-one-vs-all multi-class classifier with large numbers of categories, each with large numbers of examples. A common approach is data parallelization in a synchronized fashion: evenly and randomly distribute the data into splits, learn a full model on each split, and average the models. We reformulate the overall learning problem in a consensus optimization framework and propose a more principled synchronized approach to distributed training. Moreover, we develop an efficient algorithm for solving the sub-problem by reducing it to a standard problem with a warm start.
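
A minimal sketch of the joint feature-plus-metric idea described above, written here in PyTorch with an arbitrary small architecture; the dissertation's actual networks, data, and training setup differ:

```python
import torch
import torch.nn as nn

class MatchModel(nn.Module):
    """Illustrative model: a shared convolutional feature tower and a
    fully connected metric network, trained jointly end to end."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(          # shared feature tower (grayscale patches)
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 16, feat_dim), nn.ReLU(),
        )
        self.metric = nn.Sequential(            # learned similarity function
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 2),                  # match / non-match logits
        )

    def forward(self, patch_a, patch_b):
        fa, fb = self.features(patch_a), self.features(patch_b)
        return self.metric(torch.cat([fa, fb], dim=1))

# Training would minimize a cross-entropy loss over sampled positive and
# negative pairs, e.g. nn.CrossEntropyLoss()(model(a, b), labels).
```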

Harris, Mark Jason (2003)

“Real-Time Cloud Simulation and Rendering”
Under the direction of Anselmo Lastra
Electronic copy available

Clouds are a ubiquitous, awe-inspiring, and ever-changing feature of the outdoors. They are an integral factor in Earth’s weather systems, and a strong indicator of weather patterns and changes. Their effects and indications are important to flight and flight training. Clouds are an important component of the visual simulation of any outdoor scene, but the complexity of cloud formation, dynamics, and light interaction makes cloud simulation and rendering difficult in real time. In an interactive flight simulation, users would like to fly in and around realistic, volumetric clouds, and to see other aircraft convincingly pass within and behind them. Ideally, simulated clouds would grow and disperse as real clouds do, and move in response to wind and other forces. Simulated clouds should be realistically illuminated by direct sunlight, internal scattering, and reflections from the sky and the earth below. Previous real-time techniques have not provided users with such experiences. I present a physically-based, visually-realistic cloud simulation suitable for interactive applications such as flight simulators. Clouds in the system are modeled using partial differential equations describing fluid motion, thermodynamic processes, buoyant forces, and water phase transitions. I also simulate the interaction of clouds with light, including self-shadowing and multiple forward light scattering. I implement both simulations — dynamic and radiometric — entirely on programmable floating point graphics hardware. The speed and parallelism of graphics hardware enables simulation of cloud dynamics in real time. I exploit the relatively slow evolution of clouds in calm skies to enable interactive visualization of the simulation. The work required to simulate a single time step is automatically spread over many frames while the user views the results of the previous time step. I use dynamically-generated impostors to accelerate cloud rendering. These techniques enable incorporation of realistic, simulated clouds into real applications without sacrificing interactivity. Beyond clouds, I also present general techniques for using graphics hardware to simulate dynamic phenomena ranging from fluid dynamics to chemical reaction-diffusion. I demonstrate that these simulations can be executed faster on graphics hardware than on traditional CPUs.

Heath, Lenwood S. (1985)

“Algorithms for Embedding Graphs in Books”
Under the direction of Arnold Rosenberg (Duke University)

We investigate the problem of embedding graphs in books. A book is some number of half-planes (the pages of the book), which share a common line as boundary (the spine of the book). A book embedding of a graph embeds the vertices on the spine in some order and embeds each edge in some page so that in each page no two edges intersect. The pagenumber of a graph is the number of pages in a minimum-page embedding of the graph. The pagewidth of a book embedding is the maximum cutwidth of the embedding in any page. A practical application is in the realization of a fault-tolerant array of VLSI processors. Our results are efficient algorithms for embedding certain classes of planar graphs in books of small pagenumber or small pagewidth. The first result is a linear time algorithm that embeds any planar graph in a book of seven pages. This establishes the smallest upper bound known for the pagenumber of the class of planar graphs. The algorithm uses three main ideas. The first is to level the planar graph. The second is to extend a cycle at one level to the next level by doing micro-surgery. The third is to nest the embedding of successive levels to obtain finite pagenumber. The second result is a linear time algorithm that embeds any trivalent planar graph in a book of two pages. The algorithm edge-augments the graph to make it hamiltonian while keeping it planar. The third result is an O(n log n) time algorithm for embedding any outerplanar graph with small pagewidth. Our algorithm embeds any d-valent outerplanar graph in a two-page book with O(d log n) pagewidth. This result is optimal in pagewidth to within a constant factor. The significance for VLSI design is that any outerplanar graph can be implemented in small area in a fault-tolerant fashion.

Hensley, Justin Aaron (2007)

“Increasing Rendering Performance of Graphics Hardware”
Under the direction of Anselmo Lastra and Montek Singh
Electronic copy available

Advances in new technologies have made data collection easier and faster; GPU performance is increasing at an exponential rate and is often reported to exceed the rate predicted by Moore’s Law. This growth is driven by performance improvements that can be divided into the following three categories: algorithmic improvements, architectural improvements, and circuit-level improvements. In this dissertation I present several techniques that improve the rendering performance of graphics hardware along these three lines. At the algorithmic level, I present several techniques that are capable of improving the visual quality of images rendered on current commodity GPUs without requiring modifications to the underlying hardware or architecture. In particular, I describe a method to generate summed-area tables rapidly using graphics hardware, and several rendering techniques that take advantage of summed-area tables. At the architectural level, I discuss modifying current GPUs with conditional streaming capabilities. I describe a novel ray-tracing algorithm that takes advantage of conditional output streams to improve the memory bandwidth requirements when compared to previous techniques. At the circuit level, I present work that uses dynamic logic and the high-capacity asynchronous pipelining style to implement a novel z-comparator that is both energy-efficient and fast. The comparator gains its increased performance by taking advantage of average case performance, and introduces the “compute-on-demand” paradigm for arithmetic circuits. Finally, I describe an asynchronous Booth multiplier that uses a novel implementation of a counterflow architecture in a single pipeline. This counterflow architecture allows for shorter critical paths, and therefore higher operating speed.
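
The summed-area table underlying several of these rendering techniques is simple to state on the CPU; the dissertation's contribution lies in generating it quickly on graphics hardware. A minimal NumPy sketch of the table and the four-lookup box sum:

```python
import numpy as np

def summed_area_table(img):
    """sat[y, x] holds the sum of img over the rectangle [0..y, 0..x]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def box_sum(sat, y0, x0, y1, x1):
    """Sum of the original image over the inclusive rectangle
    (y0, x0)-(y1, x1), recovered from at most four table lookups."""
    total = sat[y1, x1]
    if y0 > 0:
        total -= sat[y0 - 1, x1]
    if x0 > 0:
        total -= sat[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += sat[y0 - 1, x0 - 1]
    return total
```

Because any axis-aligned box sum costs the same constant number of lookups, the table supports filtering with arbitrarily wide rectangular footprints at fixed cost.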

Hernandez-Campos, Felix (2006)

“Generation and Validation of Empirically-Derived TCP Application Workloads”
Under the direction of Kevin Jeffay
Electronic copy available

Networking researchers and practitioners commonly rely on simulations or network testbeds to evaluate the performance of network protocols and resource-management mechanisms in controlled environments. A key component of these experimental studies is the generation of synthetic network traffic. To ensure valid experimental results, traffic generators require accurate models of how network applications create the data workloads carried by real network links. Since TCP is the dominant transport protocol for applications using the Internet today, we need a way to model the combined workloads from the full mix of TCP applications in use at any point in time on any network link. Unfortunately, present workload models are limited to a small number of rather outdated and poorly maintained models of individual TCP applications such as the web. In this dissertation, I present a method for rapidly generating a model of the workload corresponding to the full mix of TCP applications using a given network link. Given a trace of TCP/IP packet headers from a network link, two classes of modeling information are extracted. First, the packet header trace is reverse compiled to extract patterns of application data-unit exchanges within each TCP connection. These network-independent patterns are used to develop a source-level model of TCP application behavior that captures the essential properties of how a mix of applications uses the network. The source-level model is constructed without any knowledge of which specific applications are present. It is validated empirically through controlled experiments in a large-scale network testbed. The second class of modeling information concerns network-dependent properties that influence the performance of TCP connections. In order to generate synthetic traffic that is a realistic representation of that carried by a real network link, properties of TCP implementations (e.g., window sizes) and properties of the end-to-end network path (e.g., latency) must be incorporated into the generation process. I show the importance of modeling both source-level and network-dependent properties and how both can be combined to generate synthetic TCP traffic that is statistically similar to the traffic on a measured link. This approach to traffic generation is validated through the construction and analysis of models for traffic on an Abilene (Internet-2) backbone link, the UNC campus egress link, and a link from a major European university. Given an easily acquired TCP/IP packet-header trace, the result of this research is a method for the automatic generation of a TCP workload model and network-dependent properties that are suitable for realistic synthetic traffic generation. This model and generation process is tunable in a controlled way across a variety of interesting and important network parameters.

Hetzel, William C. (1976)

“An Experimental Analysis of Program Verification Methods”
Under the direction of Peter Calingaert

A controlled experiment was designed and conducted in order to analyze program verification methods. Three basic verification methods were selected. One was reading, the second was specification testing and the third was selective testing. Three structured PL/I programs were specially prepared by reinserting a number of errors that had been uncovered during their development. Each of 39 subjects participated in three verification sessions. In each session subjects were given one of the programs and tried to find as many errors as they could using one of the methods. The basic data were studied using analysis of variance techniques. Reading was found to be inferior to the other two methods. Specification and selective testing did not differ in effectiveness. On the average, subjects found little more than half of the errors present, even on a severity-weighted basis. The experiment showed that an intensive period of independent verification under near optimal conditions is not likely to find all errors even in simple, well-written programs. Many other analyses of the data were also conducted. One set of analyses looked at the individual performance differences. Computing education and experience were found to be significantly associated with good performance. Self estimates and attitudes were also related. Additional analyses included an analysis of method effectiveness on individual errors, an analysis of the distribution of time between finding errors, a factor analysis and a regression prediction model.

Hillesland, Karl (2005)

“Image Streaming to Build Image-Based Models”
Under the direction of Anselmo Lastra
Electronic copy available

An important goal in computer graphics is to synthesize photo-realistic images of objects and environments. The realism of the images depends on the quality of the model used for synthesis. Image-based modeling is an automated modeling technique aimed at achieving this realism by constructing models from photographs of real-world objects or environments. A wide number of image-based modeling techniques have been proposed, each with its own approach to model generation. However, we can treat all image-based modeling as a form of function fitting, where the function is our representation for appearance, and the data to which we want to fit are the photographs. This dissertation addresses the use of nonlinear optimization to construct image-based models of an object’s appearance, including a new algorithm for efficient computation that is designed to take advantage of the high computational throughput of programmable graphics hardware in order to generate the models in a timely manner. Application to two diverse types of image-based models demonstrates the versatility and computational efficiency of the approach. The two types are Light-Field Mapping (Chen et al., 2002), which is a radiance model generated from data decomposition, and a per-texel Lafortune representation (McAllister et al., 2002), which is an analytic reflectance model. Although faster than CPUs in terms of raw computational power, GPUs lack the precision and accuracy of CPUs. This work also includes a closer look at the kind of numerical error that occurs in employing a numerical technique such as nonlinear optimization in this limited precision and accuracy context. Furthermore, since GPU floating-point operations are not fully documented, this work also includes a technique to measure the accuracy of GPU floating-point operations.

Hirota, Gentaro (2002)

“An Improved Finite Element Contact Model for Anatomical Simulations”
Under the direction of Henry Fuchs
Electronic copy available

This work focuses on the simulation of mechanical contact between nonlinearly elastic objects such as the components of the human body. The computation of the reaction forces that act on the contact surfaces (contact forces) is the key for designing a reliable contact handling algorithm. In traditional methods, contact forces are often defined as discontinuous functions of deformation, which leads to poor convergence characteristics. This problem becomes especially serious in areas with complicated self-contact such as skin folds. I introduce a novel penalty method for finite element simulation based on the concept of material depth, which is the distance between a particle inside an object and the object’s boundary. By linearly interpolating pre-computed material depths at node points, contact forces can be analytically integrated over contact surfaces without raising the computational cost. The continuity achieved by this formulation reduces oscillation and artificial acceleration resulting in a more reliable simulation algorithm. This algorithm is implemented as part of an implicit finite element program for static, quasistatic and dynamic analysis of nonlinear viscoelastic solids. I demonstrate its effectiveness in an animation showing realistic effects such as folding skin and sliding contacts of tissues involved in knee flexion. The finite element model of the leg and its internal structures was derived from the Visible Human dataset. With this method, it is now easier for engineers and scientists to simulate a wide variety of anatomical and tissue structures such as multiple adjacent organs and biomaterials made of intertwined protein fibers. This method also helps animators to use more anatomically accurate human and animal models to enhance the realism of the generated images.

Ho, Sean (2004)

“Profile Scale Spaces for Statistical Image Match in Bayesian Segmentation”
Under the direction of Guido Gerig
Electronic copy available

Object boundaries in images often exhibit a complex greylevel appearance, and modeling of that greylevel appearance is important in Bayesian segmentation. Traditional image match models such as gradient magnitude or static templates are insufficient to model complex and variable appearance at the object boundary, in the presence of image noise, jitter in correspondence, and variability in a population of objects. I present a new image match model for Bayesian segmentation that is statistical, multiscale, and uses a non-Euclidean object-intrinsic coordinate system. The segmentation framework is based on the spherical harmonics object representation and segmentation framework of Kelemen et al., which in turn uses the profile-based image match model of Active Shape Models. The traditional profile model does not take advantage of the expected high degree of correlation between adjacent profiles along the boundary. My new multiscale image match model uses a profile scale space, which blurs along the boundary but not across the boundary. This blurring is done not in Euclidean space but in an object-intrinsic coordinate system provided by the geometric representation of the object. Blurring is done on the sphere via a spherical harmonic decomposition; thus, spherical harmonics are used both in the geometric representation as well as the image profile representation. The profile scale space is sampled after the fashion of the Laplacian pyramid; the resulting tree of features is used to build a Markov Random Field probability distribution for Bayesian image match. Results are shown on a large dataset of 114 segmented caudates in T1-weighted magnetic resonance images (MRI). The image match model is evaluated on the basis of generalizability, specificity, and variance; it is compared against the traditional single-scale profile model. The image match model is also evaluated in the context of a full segmentation framework, when optimized together with a shape prior. I test whether automatic segmentations using my multiscale profile model come closer to the manual expert segmentations than automatic segmentations using the single-scale profile model do. Results are compared against intra-rater and inter-rater reliability of manual segmentations.

Hoffman, Daniel M. (1984)

“Trace Specification of Communications Protocols”
Under the direction of Richard T. Snodgrass

This dissertation describes a methodology for the formal specification of communications protocols. Specification is an important step in the successful development of computer software. Further, it is important that these specifications be precise. With such a specification, designers and implementors are likely to have the same expectations of the software; without a specification, costly misunderstandings often arise. Communications protocol software offers special specification problems. Typically, such software connects computers that are widely distributed geographically and differ in model, manufacturer, and operating system. Thus, for communications protocol software, misunderstandings are particularly likely, making precise specifications especially important. Formal specifications, written in a language with formally defined syntax and semantics, support precise specification and are also suitable for verification. Various techniques have been proposed for the formal specification of communications protocols, each with certain drawbacks. The specification method chosen for this dissertation is a modified version of traces, developed by Parnas and Bartussek. Traces were originally developed as a general technique for software specification. Until now, only trace specifications of small, simple modules had been attempted, and no trace specifications of communications protocols existed. We discuss an extension made to traces to handle communications protocol specification. We then describe the trace methodology comprised of heuristics which make the specification of complex communications protocols intellectually manageable. We describe our experience in using the methodology to write specifications of major portions of two commercial standards: the Advanced Data Communications Control Protocol (ADCCP) and the Internet Protocol (IP). Finally, we conclude that traces, using the extension and methodology described herein, are a feasible technique for the formal specification of communications protocols, and that traces explicitly address deficiencies in methods currently in use.

Hoffman, Doug L. (1996)

“Comparison of Protein Structures by Transformation into Dihedral Angle Sequences”
Under the direction of Raj K. Singh
Electronic copy available

Proteins are large complex organic molecules that are essential to the existence of life. Decades of study have revealed that proteins having different sequences of amino acids can possess very similar three-dimensional structures. To date, protein structure comparison methods have been accurate but costly in terms of computer time. This dissertation presents a new method for comparing protein structures using dihedral transformations. Atomic XYZ coordinates are transformed into a sequence of dihedral angles, which is then transformed into a sequence of dihedral sectors. Alignment of two sequences of dihedral sectors reveals similarities between the original protein structures. Experiments have shown that this method detects structural similarities between sequences with less than 20% amino acid sequence identity, finding structural similarities that would not have been detected using amino acid alignment techniques. Comparisons can be performed in seconds that had previously taken minutes or hours.
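
The first transformation step, from atomic coordinates to dihedral (torsion) angles, uses a standard geometric formula. A small NumPy sketch follows; the sector quantization shown is a hypothetical illustration, with a width that is not taken from the dissertation:

```python
import numpy as np

def dihedral(p1, p2, p3, p4):
    """Dihedral (torsion) angle in degrees defined by four consecutive
    atoms, e.g. backbone atoms along a protein chain."""
    b1, b2, b3 = p2 - p1, p3 - p2, p4 - p3
    n1 = np.cross(b1, b2)                    # normal of the first plane
    n2 = np.cross(b2, b3)                    # normal of the second plane
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

def to_sector(angle_deg, width=60):
    """Hypothetical quantization of an angle into one of 360/width sectors;
    the dissertation's actual sector definition may differ."""
    return int(((angle_deg + 180) % 360) // width)
```

Sliding this computation along the chain yields the angle sequence, which is then discretized into sectors and compared by sequence alignment.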

Holloway, Richard L. (1995)

“Registration Errors in Augmented Reality Systems”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR95-016
Electronic copy available

Augmented reality (AR) systems combine three-dimensional computer-generated imagery with the view of the real environment in order to make unseen objects visible or to present additional information. A critical problem is that the computer-generated objects do not currently remain correctly registered with the real environment–objects aligned from one viewpoint appear misaligned from another and appear to swim about as the viewer moves. This registration error is caused by a number of factors, such as system delay, optical distortion, and tracker measurement error, and is difficult to correct with existing technology. This dissertation presents a registration error model for AR systems and uses it to gain insight into the nature and severity of the registration error caused by the various error sources. My thesis is that a mathematical error model enables the system architect to determine

  • which error sources are the most significant,
  • the sensitivity of the net registration error to each error,
  • the nature of the distortions caused by each type of error,
  • the level of registration accuracy one can expect; the model also provides insight into how best to calibrate the system.

Analysis of a surgery planning application yielded the following main results:

  • Even for moderate head velocities, system delay causes more registration error than all other sources combined;
  • Using the eye’s center of rotation as the eyepoint in the computer graphics model reduces the error due to eye rotation to zero for points along the line of gaze. This should obviate the need for eye tracking;
  • Tracker error is a significant problem both in head tracking and in system calibration;
  • The World coordinate system should be omitted when possible;
  • Optical distortion is a significant error source, but correcting it computationally in the graphics pipeline often induces delay error larger than the distortion error itself;
  • Knowledge of the nature of the various types of error facilitates identification and correction of errors in the calibration process.

Although the model was developed for see-through head-mounted displays (STHMDs) for surgical planning, many of the results are applicable to other HMD systems as well.

Holman, Philip L. (2004)

“On the Implementation of Pfair-scheduled Multiprocessor Systems”
Under the direction of James H. Anderson
Electronic copy available

The goal of this dissertation is to extend the Pfair scheduling approach in order to enable its efficient implementation on a real-time multiprocessor. At present, Pfair scheduling is the only known means for optimally scheduling recurrent real-time tasks on multiprocessors. In addition, there has been growing practical interest in such approaches due to their fairness guarantees. Unfortunately, prior work in this area has considered only the scheduling of independent tasks, which do not communicate with each other or share resources. The work presented herein focuses both on adding support for these actions and also on developing techniques for reducing various forms of implementation overhead, including that produced by task migrations and context switches. The thesis of this dissertation is:

  • tasks can be efficiently synchronized in Pfair-scheduled systems and overhead due to common system events, such as context switches and migrations, can be reduced.

This thesis is established through research in three areas. First, the pre-existing Pfair scheduling theory is extended to support the scheduling of groups of tasks as a single entity. Second, mechanisms for supporting both lock-based and lock-free synchronization are presented. Lock-based synchronization coordinates access to shared resources through the use of semaphores. On the other hand, lock-free operations are optimistically attempted and then retried if the operation fails. Last, the deployment of Pfair scheduling on a symmetric multiprocessor is considered.
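
As background for the scheduling model used above, Pfair scheduling divides each task into unit-length subtasks whose release times and deadlines follow from the task's weight (utilization). The sketch below computes those standard Pfair windows; it illustrates general Pfair theory rather than any algorithm specific to this dissertation.

    from fractions import Fraction
    from math import ceil, floor

    def pfair_windows(execution, period, n_subtasks):
        """Release/deadline windows of the first n unit-length subtasks of a
        task with weight execution/period, per standard Pfair theory."""
        w = Fraction(execution, period)
        windows = []
        for i in range(1, n_subtasks + 1):
            release = floor((i - 1) / w)           # earliest slot for subtask i
            deadline = ceil(i / w)                 # slot by which it must complete
            windows.append((release, deadline))
        return windows

    # A task needing 3 time units in every 8: weight 3/8.
    print(pfair_windows(3, 8, 3))                  # -> [(0, 3), (2, 6), (5, 8)]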

Holt, James Matthew (2016)

“Using the Multi-String Burrows-Wheeler Transform for High-Throughput Sequence Analysis”
Under the direction of Leonard McMillan

The throughput of sequencing technologies has created a bottleneck where raw sequence files are stored in an un-indexed format on disk. Alignment to a reference genome is the most common pre-processing method for indexing this data, but alignment requires a priori knowledge of a reference sequence, and often loses a significant amount of sequencing data due to biases. Sequencing data can instead be stored in a lossless, compressed, indexed format using the multi-string Burrows-Wheeler Transform (BWT).

This dissertation introduces three algorithms that enable faster construction of the BWT for sequencing datasets. The first two algorithms are a merge algorithm for merging two or more BWTs into a single BWT and a merge-based divide-and-conquer algorithm that will construct a BWT from any sequencing dataset. The third algorithm is an induced sorting algorithm that constructs the BWT from any string collection and is well-suited for building BWTs of long-read sequencing datasets. These algorithms are evaluated based on their efficiency and utility in constructing BWTs of different types of sequencing data. This dissertation also introduces two applications of the BWT: long-read error correction and a set of biologically motivated sequence search tools. The long-read error correction is evaluated based on accuracy and efficiency of the correction.

Our analyses show that the BWT of almost all sequencing datasets can now be efficiently constructed. Once constructed, we show that the BWT offers significant utility in performing fast searches as well as fast and accurate long read corrections. Additionally, we highlight several use cases of the BWT-based web tools in answering biologically motivated problems.
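
As a point of reference for what these algorithms construct, the BWT of a string collection can be built naively by sorting all rotations of the sentinel-terminated strings and keeping the last column. The sketch below shows only that naive construction; it is vastly slower than the merge-based and induced-sorting algorithms described above and is included purely to illustrate the transform itself.

    def multi_string_bwt(strings):
        """Naive multi-string BWT: sort every rotation of every
        sentinel-terminated string and take the last column."""
        rotations = []
        for idx, s in enumerate(strings):
            t = s + "$"                            # one sentinel per string
            for i in range(len(t)):
                # keep the string index to break ties between identical rotations
                rotations.append((t[i:] + t[:i], idx))
        rotations.sort()
        return "".join(rot[-1] for rot, _ in rotations)

    print(multi_string_bwt(["ACGT", "ACGG"]))      # -> GT$$AAGCCG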

Hong, Junpyo (2018)

“Classification of Neuroanatomical Structures Based on Non-Euclidean Geometric Properties”
Under the direction of Stephen M. Pizer
Electronic copy available

Diagnosing neurodevelopmental disorders by studying morphological differences in neuroanatomical structures has long been an important goal. Effective shape classification requires 1) a choice of an object model that can provide rich geometric object properties (GOPs) relevant for a given classification task, and 2) a choice of a statistical classification method capable of learning on a curved manifold on which GOPs live. I lay out my methodological contributions to make these choices in the context of early diagnosis and detection of Autism Spectrum Disorder (ASD) in infants based on shapes of hippocampi and caudate nuclei.

My first contribution in modeling shapes is a method to obtain a set of skeletal models called s-reps from a set of 3D objects. First, the method iteratively deforms the object surface via Mean Curvature Flow (MCF) until the deformed surface is approximately ellipsoidal. Then, the analytical s-rep of the ellipsoid is deformed back into the object. The method’s effectiveness is demonstrated on a few neuroanatomical structures.

My second contribution in modeling shapes is an extension to the current s-rep to represent an object with a sharp tail narrowing to a singular point. I demonstrate that this extension allows appropriate surface modeling of a narrowing sharp tail region of the caudate nucleus.

In statistical methods I propose both a novel shape classification framework and a novel method to evaluate the resulting classifications. The classification method combines s-reps, Principal Nested Spheres (PNS) analysis to handle the non-Euclidean s-rep GOPs, and Distance Weighted Discrimination (DWD). The evaluation method combines repeated hold-outs with Area Under the ROC (AUC) measurements. Also, I extend the proposed shape classification method based on DWD applied to temporal shape differences of Euclideanized s-reps. The methods are applied to classifying autistic and non-autistic infants based on either hippocampal shapes or caudate shapes.

Hong, Yi (2016)

“Image and Shape Analysis for Spatiotemporal Data”
Under the direction of Marc Niethammer

In analyzing brain development or identifying disease it is important to understand anatomical age-related changes and shape differences. Data for these studies is frequently spatiotemporal and collected from normal and/or abnormal subjects. However, images and shapes over time often have complex structures and are best treated as elements of non-Euclidean spaces. This dissertation tackles problems of uncovering time-varying changes and statistical group differences in image or shape time-series.

There are three major contributions: 1) a framework of parametric regression models on manifolds to capture time-varying changes. These include a metamorphic geodesic regression approach for image time-series and standard geodesic regression, time-warped geodesic regression, and cubic spline regression on the Grassmann manifold; 2) a spatiotemporal statistical atlas approach, which augments a commonly-used atlas such as the median with measures of data variance via a weighted functional boxplot; 3) hypothesis testing for shape analysis to detect group differences between populations.

The proposed method for cross-sectional data uses shape ordering and hence does not require dense shape correspondences or strong distributional assumptions on the data. For longitudinal data, hypothesis testing is performed on shape trajectories which are estimated from individual subjects.

Applications of these methods include 1) capturing brain development and degeneration; 2) revealing growth patterns in pediatric upper airways and the scoring of airway abnormalities; 3) detecting group differences in longitudinal corpus callosum shapes of subjects with dementia versus normal controls.

Hsieh, Cheng-Hong (1989)

“A Connectionist Algorithm for Image Segmentation”
Under the direction of Stephen M. Pizer
Department of Computer Science Technical Report TR89-008
Electronic copy available

An edge-based segmentation algorithm based on knowledge of human vision was developed. The research followed Grossberg’s boundary contour system and developed a parallel distributive algorithm which consists of multiple processing stages – mainly anisotropic edge filtering, corner detection, and spatial coherence check. The two-dimensional input information is processed in parallel within each stage and pipelined among stages. Within each stage, local operations are performed at each pixel. The application of this algorithm to many test patterns shows that the algorithm gives good segmentation and behaves reasonably well against random noise. A multiscale mechanism in the algorithm can segment an object into contours at different levels of detail. The algorithm was compared with an approximation of Grossberg’s boundary contour system. Both algorithms gave reasonable performance for segmentation. The differences lie in the level of image dependency of the configuration parameters of the algorithm. Also, the way random noise affects the algorithm was compared with the way it affects human object detection. Data obtained from psychophysical experiments and from application of the algorithm show a similar trend.

Huan, Jun (Luke) (2006)

“Graph Based Pattern Discovery in Protein Structures”
Under the direction of Jan Prins and Wei Wang
Electronic copy available

The rapidly growing body of 3D protein structure data provides new opportunities to study the relation between protein structure and protein function. Local structure pattern of proteins has been the focus of recent efforts to link structural features found in proteins to protein function. In addition, structure patterns have demonstrated values in applications such as predicting protein-protein interaction, engineering proteins, and designing novel medicines. My thesis introduces graph-based representations of protein structure and new subgraph mining algorithms to identify recurring structure patterns common to a set of proteins. These techniques enable families of proteins exhibiting similar function to be analyzed for structural similarity. Previous approaches to protein local structure pattern discovery operate in a pairwise fashion and have prohibitive computational cost when scaled to families of proteins. The graph mining strategy is robust in the face of errors in the structure, and errors in the set of proteins thought to share a function. Two collaborations with domain experts at the UNC School of Pharmacy and the UNC Medical School demonstrate the utility of these techniques. The first is to predict the function of several newly characterized protein structures. The second is to identify conserved structural features in evolutionarily related proteins.

Hudson, Thomas C. (2004)

“Adapting a Collaborative, Force-Feedback, Graphical User Interface to Best-Effort Networks”
Under the direction of Russell M. Taylor II and Kevin Jeffay
Department of Computer Science Technical Report TR04-021
Electronic copy available

Latency is an unavoidable fact of distributed systems, and an unrelenting foe of interface usability. I present methods for lessening the impact of latency on distributed haptic, graphic, and collaborative interfaces. These three interfaces are present in the distributed nanoManipulator, a shared tool for remote operation of Atomic Force Microscopes. The work is carried out in the context of the Internet, where best-effort service means that network performance is not guaranteed and that applications must function under a wide range of conditions. The ability of a distributed control algorithm to tolerate latency is innately tied up with how data or operations in the algorithm are represented for transmission over the network. I introduce two new representations for haptics, the warped plane approximation and local environment sampling, with superior latency tolerance. I show how image-based rendering is a useful representation for transferring the output of a remote graphics engine across the network, yielding a tenfold reduction in mean response time over video-based approaches, and how optimistic concurrency control is a useful representation for transmitting the actions of a remote collaborator across the network, requiring 85% less concurrency control overhead than pessimistic approaches. All of these intermediate representations convey a meaning that is not just true at a single point in space and a single point in time, but that is valid across a volume of space or across a range of times. These higher-order representations reduce both the amount of blocking necessary and the rate at which closed feedback loops must run in an algorithm. I show how techniques used to adaptively deliver multimedia content — the User Datagram Protocol (UDP), Forward Error Correction (FEC), and Queue Monitoring (QM) — are applicable to the streams of data transmitted by force feedback tele-operators. UDP alone yields a 35% reduction in latency and a 75% reduction in jitter. The new algorithms and adaptations for the distributed nanoManipulator’s interfaces combined to make this collaborative tool feasible, leading to successful use by experimenters more than 2000 miles apart.

Hultquist, Jeffrey P. (1995)

“Interactive Numerical Flow Visualization Using Stream Surfaces”
Under the direction of Frederick P. Brooks, Jr.
Electronic copy available

Three-dimensional steady fluid flows are often numerically simulated over multiple overlapping curvilinear arrays of sample points. Such flows are often visualized using tangent curves or streamlines computed through the interpolated velocity field. A stream surface is the locus of an infinite number of streamlines rooted at all points along a short line segment or rake. Stream surfaces can depict the structure of a flow field more effectively than is possible with mere streamline curves, but careful placement of the rakes is needed to most effectively depict the important features of the flow. I have built visualization software which supports the interactive calculation and display of stream surfaces in flow fields represented on composite curvilinear grids. This software exploits several novel methods to improve the speed with which a particle may be advected through a vector field. This is combined with a new algorithm which constructs adaptively sampled polygonal models of stream surfaces. These new methods make stream surfaces a viable tool for interactive numerical flow visualization. Software based on these methods has been used by scientists at the NASA Ames Research Center and the Wright-Patterson Air Force Base. Initial use of this software has demonstrated the value of this approach.
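
For intuition, a single streamline is traced by repeatedly stepping a particle along the interpolated velocity field; a stream surface is then built from many such streamlines seeded along a rake and stitched into polygons. The sketch below shows only the simplest fixed-step Euler advection on an analytic 2D field, standing in for the grid interpolation and adaptive surface construction developed in the dissertation.

    import numpy as np

    def velocity(p):
        """Analytic test field (a circular flow); a real implementation would
        interpolate velocities from the curvilinear simulation grid."""
        x, y = p
        return np.array([-y, x])

    def advect_streamline(seed, step=0.01, n_steps=500):
        """Trace a streamline with fixed-step Euler integration."""
        points = [np.asarray(seed, dtype=float)]
        for _ in range(n_steps):
            p = points[-1]
            points.append(p + step * velocity(p))
        return np.array(points)

    curve = advect_streamline([1.0, 0.0])
    print(curve.shape)                             # (501, 2): one discretized streamline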

I

Ichnowski, Jeffrey (2019)

“Scaling Robot Motion Planning to Multi-Core Processors and the Cloud”
Under the direction of Ron Alterovitz
Electronic copy available

Imagine a world in which robots safely interoperate with humans, gracefully and efficiently accomplishing everyday tasks. The robot’s motions for these tasks, constrained by the design of the robot and task at hand, must avoid collisions with obstacles. Unfortunately, planning a constrained obstacle-free motion for a robot is computationally complex—often resulting in slow computation of inefficient motions. The methods in this dissertation speed up this motion plan computation with new algorithms and data structures that leverage readily available parallel processing, whether that processing power is on the robot or in the cloud, enabling robots to operate more safely, more gracefully, and more efficiently.

The contributions of this dissertation that enable faster motion planning, both on robot and in tandem with the cloud, are novel parallel lock-free algorithms, fast and concurrent nearest neighbor searching data structures, cache-aware operation, and split robot-cloud computation.  The parallel lock-free algorithms avoid the slowdown due to contention over shared data structures, resulting in empirical speedup proportional to the number of CPU cores working on the problem. The fast nearest neighbor searching data structure speeds up searching in metric spaces needed for rigid body motion planning (SO(3) and SE(3)).  The concurrent nearest neighbor data structure improves searching performance on metric spaces common to robot motion planning problems, while providing provably asymptotic wait-free concurrent operation. With the cache-aware contributions, the motion planning algorithm avoids slowdown due to long memory access times, allowing the algorithm to exhibit superlinear speedup.  The split robot-cloud computation enables robots with low-power CPUs to react to changing environments by having the robot compute reactive paths in real-time from a set of motion plan options generated in a computationally intensive cloud-based algorithm.
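
Nearest-neighbor searching over rigid-body configurations requires a rotation-aware distance. The snippet below illustrates one common choice, the angle between unit quaternions, together with a brute-force search; it is only an example of the kind of metric such data structures index, not the dissertation's own tree or concurrent structure.

    import numpy as np

    def so3_distance(q1, q2):
        """Angular distance between two rotations given as unit quaternions
        (the absolute value accounts for the q / -q double cover)."""
        d = abs(np.dot(q1, q2))
        return 2.0 * np.arccos(np.clip(d, -1.0, 1.0))

    def nearest_rotation(query, quaternions):
        """Brute-force nearest neighbor under the SO(3) metric; tree-based and
        concurrent structures accelerate this without changing the answer."""
        return int(np.argmin([so3_distance(query, q) for q in quaternions]))

    identity = np.array([1.0, 0.0, 0.0, 0.0])
    small_turn = np.array([np.cos(0.05), np.sin(0.05), 0.0, 0.0])   # ~0.1 rad about x
    large_turn = np.array([np.cos(0.80), np.sin(0.80), 0.0, 0.0])   # ~1.6 rad about x
    print(nearest_rotation(identity, [large_turn, small_turn]))     # -> 1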

We demonstrate the effectiveness of our contributions in both simulation and on physical robots of varying design and complexity, while scaling up to 72 concurrent hardware threads.  Scenarios include finding any solution to a complex motion planning problem, pre-computing motion plans that converge towards the optimal, and reactive interaction with dynamic environments.  Robots include 2D holonomic robots, 3D rigid-body robots, a self-driving 1/10 scale car, articulated robot arms with and without mobile bases, and a small humanoid robot. With the methods of this dissertation, a small humanoid robot with a low-power CPU is able to perform reactive whole-body motion planning.

Ilie, Dumitru Adrian (2010)

“On-Line Control of Active Camera Networks”
Under the direction of Greg Welch
Electronic copy available

Large networks of cameras have been increasingly employed to capture dynamic events for tasks such as surveillance and training. When using active (pan-tilt-zoom) cameras to capture events distributed throughout a large area, human control becomes impractical and unreliable. This has led to the development of automated approaches for on-line camera control. I introduce a new approach that consists of a stochastic performance metric and a constrained optimization method. The metric quantifies the uncertainty in the state of multiple points on each target. It uses state-space methods with stochastic models of the target dynamics and camera measurements. It can account for static and dynamic occlusions, accommodate requirements specific to the algorithm used to process the images, and incorporate other factors that can affect its results. The optimization explores the space of camera configurations over time under constraints associated with the cameras, the predicted target trajectories, and the image processing algorithm. While an exhaustive exploration of this parameter space is intractable, through careful complexity analysis and application domain observations I have identified appropriate alternatives for reducing the space. Specifically, I reduce the spatial dimension of the search by dividing the optimization problem into subproblems, and then optimizing each subproblem independently. I reduce the temporal dimension of the search by using empirically-based heuristics inside each subproblem. The result is a tractable optimization that explores an appropriate subspace of the parameters, while attempting to minimize the risk of excluding the global optimum. The approach can be applied to conventional surveillance tasks (e.g., tracking or face recognition), as well as tasks employing more complex computer vision methods (e.g., markerless motion capture or 3D reconstruction). I present the results of experimental simulations of two such scenarios, using controlled and natural (unconstrained) target motions, employing simulated and real target tracks, in realistic scenes, and with realistic camera networks.

Insko, Brent E. (2001)

“Passive Haptics Significantly Enhances Virtual Environments”
Under the direction of Frederick P. Brooks Jr.
Electronic copy available

One of the most disconcertingly unnatural properties of most virtual environments (VEs) is the ability of the user to pass through objects. I hypothesize that passive haptics, augmenting a high-fidelity visual virtual environment with low-fidelity physical objects, will markedly improve both sense of presence and spatial knowledge training transfer. The low-fidelity physical models can be constructed from cheap, easy-to-assemble materials such as styrofoam, plywood, and particle board. The first study investigated the effects of augmenting a visual-cliff environment with a slight physical ledge on participants’ sense of presence. I found that when participants experienced passive haptics in the VE, they exhibited significantly more behaviors associated with pit avoidance than when experiencing the non-augmented VE. Changes in heart rate and skin conductivity were significantly higher than when they experienced the VE without passive haptics. The second study investigated passive haptics’ effects on performance of a real-world navigation task after training in a virtual environment. Half of the participants trained on maze navigation in a VE augmented with a styrofoam physical model, while half trained in a non-augmented VE but were given visual and audio contact cues. The task was to gain as much information as possible about the layout of the environment. Participants knew before the VE session that their training would be tested by navigating an identical real maze environment while blindfolded. Significant differences in the time to complete the blindfolded navigation task and significant differences in the number of collisions with objects were found between the participants trained in an augmented VE and the participants trained in a non-augmented VE. 11 of 15 participants trained without passive haptics bumped into the next-to-last obstacle encountered in the testing session and turned the wrong direction to navigate around it; only 2 of 15 participants trained with passive haptics made the same navigation error. On the other hand, the assessment of the participants’ cognitive maps of the virtual environment did not find significant differences between groups as measured by sketch maps and object dimension estimation.

Interrante, Victoria L. (1996)

“Illustrating Transparency: Communicating the 3D Shape of Layered Transparent Surfaces via Texture”
Under the direction of Henry Fuchs and Stephen M. Pizer
Electronic copy available

There are many applications in which transparency can be a useful tool for displaying the outer surface of an object together with underlying structures. The driving application for this research is radiation therapy treatment planning, in which physicians need to understand the volume distribution of radiation dose in the context of patient anatomy. To effectively display data containing multiple overlapping surfaces, the surfaces must be rendered in such a way that they can simultaneously be seen and also seen through. In computer-generated images, as in real life, however, it is often difficult to adequately perceive the three-dimensional shape of a plain transparent surface and to judge its relative depth distance from underlying opaque objects. Inspired by the ability of gifted artists to define a figure with just a few strokes, I have explored methods for automatically generating a small, stable set of intuitively meaningful lines that intend to capture the essence of a surface’s shape. This dissertation describes my investigations into the use of opaque texture lines as an artistic device for enhancing the communication of the shape and depth of an external transparent surface while only minimally occluding underlying structure. I provide an overview of the role of 3D visualization in radiation treatment planning and a survey of shape and depth perception, focusing on aspects that may be most crucial for conveying shape and depth information in computer-generated images, and then motivate the use of two specific types of shape-conveying surface markings: valley/ridge lines, which may be useful for sketching the essential form of certain surfaces, and distributed short strokes, oriented in the direction of greatest normal curvature, which may meaningfully convey the local shape of general surface patches. An experimental paradigm is proposed for objectively measuring observers’ ability to simultaneously see and see through a transparent surface, and is used to demonstrate, in an experiment with five subjects, that consistent performance improvements can be achieved, on a task relevant to the needs of radiotherapy treatment planning and based on images generated from actual clinical data, when opaque texture lines are added to an otherwise plain transparent surface.

Isenburg, Martin (2004)

“Compression and Streaming of Polygon Meshes”
Under the direction of Jack Snoeyink
Electronic copy available

Polygon meshes provide a simple way to represent three-dimensional surfaces and are the de-facto standard for interactive visualization of geometric models. Storing large polygon meshes in standard indexed formats results in files of substantial size. Such formats allow listing vertices and polygons in any order so that not only the mesh is stored but also the particular ordering of its elements. Mesh compression rearranges vertices and polygons into an order that allows more compact coding of the incidence between vertices and predictive compression of their positions. Previous schemes were designed for triangle meshes and polygonal faces were triangulated prior to compression. I show that polygon models can be encoded more compactly by avoiding the initial triangulation step. I describe two compression schemes that achieve better compression by encoding meshes directly in their polygonal representation. I demonstrate that the same holds true for volume meshes by extending one scheme to hexahedral meshes. Nowadays scientists create polygonal meshes of incredible size. Ironically, compression schemes are not capable (at least not on common desktop PCs) of dealing with giga-byte size meshes that need compression the most. I describe how to compress such meshes on a standard PC using an out-of-core approach. The compressed mesh allows streaming decompression with minimal memory requirements while providing seamless connectivity along the advancing decompression boundaries. I show that this type of mesh access allows the design of I/O-efficient out-of-core mesh simplification algorithms. In contrast, the mesh access provided by today’s indexed formats complicates subsequent processing because of their I/O-inefficiency in de-referencing (in resolving all polygon to vertex references). These mesh formats were designed years ago and do not take into account that a mesh may not fit into main memory. When operating on large data sets that mostly reside on disk, the data access must be consistent with its layout. I extract the essence of our compressed format to design a general streaming format that provides concurrent access to coherently ordered elements while documenting their coherence. This eliminates the problem of I/O-inefficient de-referencing. Furthermore, it allows mesh processing tasks to be re-designed to work as streaming, possibly pipelined, modules on large meshes, such as on-the-fly compression of simplified mesh output.
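
The streaming idea can be illustrated with a toy re-ordering of an indexed mesh: each vertex is emitted just before its first use and marked finalized at its last use, so a downstream process only ever holds a small active set in memory. This is a simplified sketch of the general concept, not the dissertation's compressed or streaming file format.

    def stream_mesh(vertices, triangles):
        """Yield ('v', index, position), ('f', i, j, k) and ('finalize', index)
        records in a streaming order derived from an indexed mesh."""
        last_use = {}
        for t_idx, tri in enumerate(triangles):
            for v in tri:
                last_use[v] = t_idx                # remember each vertex's final reference
        emitted = set()
        for t_idx, tri in enumerate(triangles):
            for v in tri:
                if v not in emitted:               # introduce a vertex right before first use
                    emitted.add(v)
                    yield ("v", v, vertices[v])
            yield ("f", *tri)
            for v in tri:
                if last_use[v] == t_idx:           # no later triangle references it
                    yield ("finalize", v)

    verts = [(0, 0), (1, 0), (0, 1), (1, 1)]
    tris = [(0, 1, 2), (1, 3, 2)]
    for record in stream_mesh(verts, tris):
        print(record)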

J

Janikow, Cezary Z. (1991)

“Inductive Learning of Decision Rules from Attribute-Based Examples: A Knowledge-Intensive Genetic Algorithm Approach”
Under the direction of Kenneth De Jong and David A. Plaisted
Department of Computer Science Technical Report TR91-030
Electronic copy available

Genetic algorithms are stochastic adaptive systems whose search method models natural genetic inheritance and the Darwinian struggle for survival. Their importance results from the robustness and domain independence of such a search. Robustness is a desirable quality of any search method. In particular, this property has led to many successful genetic algorithm applications involving parameter optimization of unknown, possibly non-smooth and discontinuous functions. Domain independence of the search is also a praised characteristic since it allows for easy applications in different domains. However, it is a potential source of limitations of the method as well. In this dissertation, we present a modified genetic algorithm designed for the problem of supervised inductive learning in feature-based spaces which utilizes domain dependent task-specific knowledge. Supervised learning is one of the most popular problems studied in machine learning and, consequently, has attracted considerable attention of the genetic algorithm community. Thus far, these efforts have lacked the level of success achieved in parameter optimization. The approach developed here uses the same high level descriptive language that is used in rule-based supervised learning methods. This allows for an easy utilization of inference rules of the well known inductive learning methodology, which replace the traditional domain independent operators. Moreover, a closer relationship between the underlying task and the processing mechanisms provides a setting for an application of more powerful task-specific heuristics. Initial results indicate that genetic algorithms can be effectively used to process high level concepts and incorporate task-specific knowledge. In this particular case of supervised learning, this new method proves to be competitive to other symbolic systems. Moreover, it is potentially more robust as it provides a powerful framework that uses cooperation among competing solutions and does not assume any prior relationship among attributes.
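
For readers unfamiliar with the underlying machinery, the sketch below shows a canonical bit-string genetic algorithm (tournament selection, one-point crossover, bit-flip mutation) applied to a toy fitness function. It illustrates only the generic, domain-independent search that the dissertation starts from; the knowledge-intensive operators and rule representation described above are not shown.

    import random

    def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=50,
                          p_cross=0.9, p_mut=0.01):
        """Minimal canonical GA over bit strings."""
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            scored = [(fitness(ind), ind) for ind in pop]
            def select():                          # tournament of size two
                a, b = random.sample(scored, 2)
                return (a if a[0] >= b[0] else b)[1][:]
            next_pop = []
            while len(next_pop) < pop_size:
                p1, p2 = select(), select()
                if random.random() < p_cross:      # one-point crossover
                    cut = random.randrange(1, n_bits)
                    p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                for child in (p1, p2):
                    child = [b ^ 1 if random.random() < p_mut else b for b in child]
                    next_pop.append(child)
            pop = next_pop[:pop_size]
        return max(pop, key=fitness)

    # Toy task: maximize the number of ones in the string.
    best = genetic_algorithm(fitness=sum)
    print(sum(best), best)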

Jeong, Ja-Yeon (2009)

“Estimation of Probability Distribution on Multiple Anatomical Objects and Evaluation of Statistical Shape Models”
Under the direction of Stephen Pizer
Electronic copy available

The estimation of shape probability distributions of anatomic structures is a major research area in medical image analysis. The statistical shape descriptions estimated from training samples provide the mean shapes and the geometric shape variations of such structures. These are key components in many applications. This dissertation presents two approaches to the estimation of a shape probability distribution of a multi-object complex. Both approaches were applied to objects in the male pelvis, and showed improvement in the estimated shape distributions of the objects. The first approach is to estimate shape variation of each object in the complex in terms of two components: the object’s variation independent of the effect of its neighboring objects; and the neighbors’ effect on the object. The neighbors’ effect on the target object is interpreted using the idea on which linear mixed models are based. The second approach is to estimate a conditional shape probability distribution of a target object given its neighboring objects. The estimation of the conditional probability is based on principal component regression. This dissertation also presents a measure to evaluate the estimated shape probability distribution regarding its predictive power, that is, the ability of a statistical shape model to describe unseen members of the population. This aspect of statistical shape models is of key importance to any application that uses the shape models. The measure can be applied to PCA-based shape models and can be interpreted as the proportion of the variation of new data explained by the retained principal directions estimated from training data. This measure was applied to shape models of synthetic warped ellipsoids and right hippocampi. According to two surface distance measures and a volume overlap measure it was empirically verified that the predictive measure reflects what happens in the ambient space where the model lies.
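
The predictive measure described above is, in spirit, the fraction of held-out variation captured by principal directions estimated from training data. The sketch below computes such a ratio with NumPy; it is a generic illustration of that idea on synthetic data, not the dissertation's exact formulation for shape models.

    import numpy as np

    def explained_variance_ratio(train, test, n_components):
        """Fraction of held-out variation captured by the top principal
        directions estimated from the training data (rows are samples)."""
        mean = train.mean(axis=0)
        _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
        basis = vt[:n_components]                  # retained principal directions
        centered = test - mean
        projected = centered @ basis.T @ basis     # component lying in the subspace
        return np.sum(projected ** 2) / np.sum(centered ** 2)

    rng = np.random.default_rng(0)
    scale = np.linspace(3.0, 0.1, 10)              # a few dominant directions
    train = rng.normal(size=(40, 10)) * scale
    test = rng.normal(size=(20, 10)) * scale
    print(explained_variance_ratio(train, test, n_components=3))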

Jerald, Jason (2009)

“Scene-Motion Thresholds and Latency Thresholds for Head-Mounted Displays”
Under the direction of Dr. Frederick P. Brooks, Jr.
Electronic copy available

A fundamental task of an immersive virtual-environment (IVE) system is to present images that change appropriately as the user’s head moves. Current IVE systems, especially those using head-mounted displays (HMDs), often produce unstable scenes, resulting in simulator sickness, degraded task performance, degraded visual acuity, and breaks in presence. In HMDs, instability resulting from latency is greater than all other causes combined [Holloway 1997]. The primary way users perceive end-to-end HMD latency is by improper motion of world-fixed scenes. Whereas scene motion due to latency is well defined mathematically, less is understood about when users start to perceive latency. I built a simulated HMD system such that no scene motion occurs due to latency. I intentionally and artificially insert scene motion into the virtual environment in order to determine how much scene motion and/or latency can occur without subjects noticing. I measure perceptual thresholds of latency and scene motion under different conditions across five experiments. By studying the relationship between scene-motion thresholds and latency thresholds, I develop a mathematical model of latency thresholds as an inverse function of peak head-yaw acceleration. Measured latency thresholds correlate with the inverse form of this model better than with a line. This work enables scientists and engineers to measure latency thresholds as a function of head motion under other particular conditions readily, by using an off-the-shelf projector system. Latency requirements can thus be estimated before designing any IVE.
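
The inverse model mentioned above can be illustrated by fitting threshold = c / peak_acceleration to threshold measurements by least squares. The numbers below are invented purely for illustration; only the form of the model follows the abstract.

    import numpy as np

    # Hypothetical data: peak head-yaw acceleration (deg/s^2) and the latency
    # threshold (ms) measured at that acceleration.
    accel = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
    threshold_ms = np.array([80.0, 42.0, 19.0, 11.0, 5.0])

    # Least-squares fit of threshold ~ c / accel, i.e. a line through the
    # origin in the variable x = 1 / accel.
    x = 1.0 / accel
    c = np.sum(x * threshold_ms) / np.sum(x * x)
    print(round(c, 1), np.round(c / accel, 1))     # fitted constant and predictions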

Ji, Dinghuang (2017)

“Data-Driven 3D Reconstruction and View Synthesis of Dynamic Scene Elements”
Under the direction of Jan-Michael Frahm
Electronic copy available

Our world is filled with living beings and other dynamic elements. It is important to record dynamic things and events for the sake of education, archeology, and cultural heritage. From ancient to modern times, people have recorded dynamic scene elements in different ways, from sequences of cave paintings to frames of motion pictures. This thesis focuses on two key computer vision techniques by which dynamic element representation moves beyond video capture: 3D reconstruction and view synthesis. Although previous methods on these two aspects have been adopted to model and represent static scene elements, dynamic scene elements present unique and difficult challenges for the tasks.

This thesis focuses on three types of dynamic scene elements, namely 1) dynamic texture with static shape, 2) dynamic shapes with static texture, and 3) dynamic illumination of static scenes. Two research aspects will be explored to represent and visualize them: dynamic 3D reconstruction and dynamic view synthesis. Dynamic 3D reconstruction aims to recover the 3D geometry of dynamic objects and, by modeling the objects’ movements, bring 3D reconstructions to life. Dynamic view synthesis, on the other hand, summarizes or predicts the dynamic appearance change of dynamic objects — for example, the daytime-to-nighttime illumination of a building or the future movements of a rigid body.

We first target the problem of reconstructing dynamic textures of objects that have (approximately) fixed 3D shape but time-varying appearance. Examples of such objects include waterfalls, fountains, and electronic billboards.  Since the appearance of dynamic-textured objects can be random and complicated, estimating the 3D geometry of these objects from 2D images/video requires novel tools beyond the appearance-based point correspondence methods of traditional 3D computer vision.  To perform this 3D reconstruction, we introduce a method that simultaneously 1) segments dynamically textured scene objects in the input images and 2) reconstructs the 3D geometry of the entire scene, assuming a static 3D shape for the dynamically textured objects.

Compared to dynamic textures, the appearance change of dynamic shapes is due to physically defined motions like rigid body movements. In these cases, assumptions can be made about the object’s motion constraints in order to identify corresponding points on the object at different timepoints. For example, two points on a rigid object have constant distance between them in the 3D space, no matter how the object moves. Based on this assumption of local rigidity, we propose a robust method to correctly identify point correspondences of two images viewing the same moving object from different viewpoints and at different times. Dense 3D geometry could be obtained from the computed point correspondences. We apply this method to unsynchronized video streams, and observe that the number of inlier correspondences found by this method can be used as an indicator for frame alignment among the different streams. To model dynamic scene appearance caused by illumination changes, we propose a framework to find a sequence of images that have similar geometric composition as a single reference image and also show a smooth transition in illumination throughout the day. These images could be registered to visualize patterns of illumination change from a single viewpoint.
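
The local-rigidity assumption can be illustrated with a simple pairwise-distance consistency check: for candidate correspondences on the same rigid object, the 3D distance between any two matched points must be preserved. The sketch below applies that check to synthetic points under an invented rotation and translation; it is an illustration of the constraint, not the dissertation's full correspondence method.

    import numpy as np

    def rigidity_consistent_pairs(pts_a, pts_b, tol=1e-3):
        """Given candidate correspondences pts_a[i] <-> pts_b[i] (Nx3 arrays),
        return index pairs whose inter-point distance is preserved, as a
        rigid-body motion requires."""
        consistent = []
        n = len(pts_a)
        for i in range(n):
            for j in range(i + 1, n):
                da = np.linalg.norm(pts_a[i] - pts_a[j])
                db = np.linalg.norm(pts_b[i] - pts_b[j])
                if abs(da - db) < tol:
                    consistent.append((i, j))
        return consistent

    rng = np.random.default_rng(1)
    a = rng.normal(size=(5, 3))
    theta = np.radians(30.0)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    b = a @ rot.T + np.array([0.5, -0.2, 1.0])     # rigidly moved copy of a
    b[4] += 0.3                                    # corrupt one correspondence
    print(rigidity_consistent_pairs(a, b))         # pairs involving index 4 are typically rejected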

The final topic of this thesis involves predicting the movements of dynamic shapes in the image domain. Towards this end, we propose deep neural network architectures to predict future views of dynamic motions, such as rigid body movements and flowers blooming. Instead of predicting image pixels from the network, my methods predict pixel offsets and iteratively synthesize future views.

Jin, Ning (2012)

“Discriminative Subgraph Pattern Mining and Its Applications”
Under the direction of Wei Wang

My dissertation concentrates on two problems in mining discriminative subgraphs: how to efficiently identify subgraph patterns that discriminate two sets of graphs and how to improve discrimination power of subgraph patterns by allowing flexibility. To achieve high efficiency, I adapted evolutionary computation to subgraph mining and proposed to learn how to prune search space from search history. To allow flexibility, I proposed to loosely assemble small rigid graphs for structural flexibility and I proposed a label relaxation technique for label flexibility. I evaluated how applications of discriminative subgraphs can benefit from more efficient and effective mining algorithms. Experimental results showed that the proposed algorithms outperform other algorithms in terms of speed. In addition, using discriminative subgraph patterns found by the proposed algorithms leads to competitive or higher classification accuracy than other methods. Allowing structural flexibility enables users to identify subgraph patterns with even higher discrimination power.


Johnson, Frankford M. (1969)

“An Experiment in the Teaching of Programming Language/One Using Computer Assisted Instruction” (Education, UNC-Chapel Hill)
Under the direction of Frederick P. Brooks, Jr.

Three sections of Computer and Information Science 10, designated A, B, and C, were used in an experiment to investigate the comparative effectiveness of teaching Programming Language/One using computer assisted instruction techniques and “traditional” classroom lecture-discussion techniques. Students were randomly assigned to experimental and control sections. Each student in the experimental section received one hour of instruction per week using an IBM 1050 multimedia terminal connected by telephone to an IBM System/360 Model 75 computer and one hour of classroom instruction for testing, giving uniform instructions, and answering general questions concerning the course. The control sections received “traditional” classroom instruction. The terminal consisted of a 1050 typewriter, a random access tape recorder, and a random access slide projector each separately controlled by the computer. The experiment lasted approximately four weeks. A locally constructed criterion test was used to evaluate the effectiveness of the teaching methods and an opinion questionnaire was used immediately after the criterion test to obtain student reaction and evaluation of instruction received and student reaction to computer assisted instruction. An analysis of variance of the criterion test scores indicated no significant difference among the sections and no significant correlation was found between any combination of test score, total time, and number of questions presented. The opinion questionnaire indicated a usually positive reaction to computer assisted instruction. It seemed reasonable to conclude that under the conditions of this experiment:

  1. Computer assisted instruction was as effective as “traditional” classroom instruction when evaluated by the use of teacher made tests.
  2. Student reaction to computer assisted instruction was usually positive.
  3. Most students enrolled in a course in programming preferred a mixture of computer assisted instruction and “traditional” classroom work to either element alone.

Johnson, Tyler M. (2009)

“A Cooperative Approach to Continuous Calibration in Multi-Projector Displays”
Under the direction of Henry Fuchs
Electronic copy available

Projection-based displays are often used to create large, immersive environments for virtual reality (VR), simulation, and training. These displays have become increasingly widespread due to the decreasing cost of projectors and advances in calibration and rendering that allow the images of multiple projectors to be registered together accurately on complex display surfaces, while simultaneously compensating for the surface shape – a process that requires knowledge of the pose of each projector as well as a geometric representation of the display surface. A common limitation of many techniques used to calibrate multi-projector displays is that they cannot be applied continuously, as the display is being used, to ensure projected imagery remains precisely registered on a potentially complex display surface. This lack of any continuous correction or refinement means that if a projector is moved slightly out of alignment, either suddenly or slowly over time, use of the display must be interrupted, possibly for many minutes at a time, while system components are recalibrated. This dissertation proposes a novel framework for continuous calibration where intelligent projector units (projectors augmented with cameras and dedicated computers) interact cooperatively as peers to continuously estimate display parameters during system operation. Using a Kalman filter to fuse local camera image measurements with remote measurements obtained by other projector units, we show that it is possible for the projector units to cooperatively estimate the poses of all projectors in a multi-projector display, as well as information about the display surface, in a continuous fashion. This decentralized approach to continuous calibration has the potential to enable scalable and fault-tolerant display solutions that can be configured and maintained by untrained users.
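
At the heart of such an approach is a measurement-update step that fuses a local observation with one received from a peer. The sketch below is a generic one-dimensional Kalman update applied twice, once per measurement; it illustrates the fusion idea only and is not the dissertation's full estimator of projector poses and display-surface parameters.

    def kalman_update(mean, var, measurement, meas_var):
        """Standard scalar Kalman measurement update."""
        gain = var / (var + meas_var)
        new_mean = mean + gain * (measurement - mean)
        new_var = (1.0 - gain) * var
        return new_mean, new_var

    # Prior estimate of one display parameter (hypothetical numbers).
    mean, var = 0.0, 1.0
    # Fuse a local camera measurement, then a remote unit's measurement.
    mean, var = kalman_update(mean, var, measurement=0.30, meas_var=0.20)
    mean, var = kalman_update(mean, var, measurement=0.25, meas_var=0.50)
    print(round(mean, 3), round(var, 3))           # -> 0.25 0.125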

Jolley Jr., Truman M. (1972)

“The Use of the Walsh Transform in Scintigram Enhancement”
Under the direction of Stephen M. Pizer

Radioisotope scintigrams are pictures used in diagnostic medicine to obtain information about the distribution of a radioactive tracer substance in a patient’s body. The basic goal of the research was to speed up computer processing of scintigrams. The tack taken was to speed a basic tool of scintigram processing methods, two-dimensional discrete convolution. The computationally simple Walsh transform was used with a linear transformation called a silter in the Walsh transform or sequency domain. It is known that a nondiagonal silter matrix can be used to effect any desired filter matrix independent of the data being processed. The silter matrices corresponding to a wide range of filter shapes were found to possess a symmetric patterned sparsity if very small elements were treated as zero. The silters corresponding to low order Metz filters required only one-half as many multiplications as the standard implementation of convolution in the frequency domain. The numerical and visual difference in the nondiagonal silter and the filter results is very small. Diagonal silter matrices were investigated also, but were found to effect filters which vary too sensitively with the data being processed. Two useful concepts were developed for evaluating scintigram processing methods: 1) measuring the effectiveness of a method by its retrieval power when processing simulated scintigrams by using measures of differences between pictures, and 2) investigating two-dimensional numerical methods by processing one-dimensional analogs in order to reduce the computational cost of the experiment. An aliasing theorem for the sequency domain describes explicitly the consequences of sampling a function possessing a Walsh Series expansion at rate n = 2**m where n is less than twice the sequency of the highest sequency component in the data.
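
The computational appeal of the Walsh transform is that it needs only additions and subtractions. The sketch below is the standard fast Walsh-Hadamard butterfly (in Hadamard ordering), shown only to make that point concrete; it is not the silter machinery of the dissertation.

    def walsh_hadamard(x):
        """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
        x = list(x)
        h = 1
        while h < len(x):
            for start in range(0, len(x), 2 * h):
                for i in range(start, start + h):
                    a, b = x[i], x[i + h]
                    x[i], x[i + h] = a + b, a - b  # butterfly: sums and differences only
            h *= 2
        return x

    print(walsh_hadamard([1, 0, 1, 0, 0, 1, 1, 0]))    # -> [4, 2, 0, -2, 0, 2, 0, 2]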

Jones, Edward L. (1984)

“Procedure-Level Program Modeling for Virtual-Memory Performance Improvement”
Under the direction of Peter Calingaert
Electronic copy available

The page-fault overhead incurred by a program executing on a demand-paged virtual-memory computer system is a function of the program’s module-to-module reference pattern and the program’s layout—the assignment of program modules to pages in the virtual name space. Defining the layout using program restructuring methods can greatly reduce this overhead, but the restructuring process is itself expensive when an execution trace of the program is required. This research aims to reduce the computer and programmer time required to perform program restructuring. Only externally relocatable code modules (i.e., subroutines or procedures) are treated. A generative procedure-level program model (PAM) is defined and used to synthesize program executions that, when used in program restructuring instead of actual execution traces, are shown to produce acceptable layouts. Moreover, the PAM program modeling system can be fully automated.

Junuzovic, Sasa (2010)

“Towards Self-Optimizing Frameworks for Collaborative Systems”
Under the direction of Prasun Dewan
Electronic copy available

Two important performance metrics in collaborative systems are local and remote response times. For certain classes of applications, it is possible to meet response time requirements better than existing systems through a new system without requiring hardware, network, or user-interface changes. This self-optimizing system improves response times by automatically making runtime adjustments to three aspects of a collaborative application. One of these aspects is the collaboration architecture. Previous work has shown that dynamically switching architectures at runtime can improve response times; however, no previous work performs the switch automatically. The thesis shows that (a) another important performance parameter is whether multicast or unicast is used to transmit commands, and (b) response times can be noticeably better with multicast than with unicast when transmission costs are high. Traditional architectures, however, support only unicast – a computer that processes input commands must also transmit commands to all other computers. To support multicast, a new bi-architecture model of collaborative systems is introduced in which two separate architectures govern the processing and transmission tasks that each computer must perform. The thesis also shows that another important performance aspect is the order in which a computer performs these tasks. These tasks can be scheduled sequentially or concurrently on a single core, or in parallel on multiple cores. As the thesis shows, existing single-core policies trade off noticeable improvements in local (remote) response times for noticeable degradations in remote (local) response times. A new lazy policy for scheduling these tasks on a single core is introduced that trades off an unnoticeable degradation in the performance of some users for a much larger noticeable improvement in the performance of others. The thesis also shows that on multi-core devices, the tasks should always be scheduled on separate cores. The self-optimizing system adjusts the processing architecture, communication architecture, and scheduling policy based on response time predictions given by a new analytical model. Both the analytical model and the self-optimizing system are validated through simulations and experiments in practical scenarios.

K

Kabul, Ilknur Kaynar (2011)

“Patient-specific Anatomical Illustration via Model-guided Texture Synthesis”
Under the direction of Stephen M. Pizer

Medical illustrations can make powerful use of textures to attractively, effectively, and understandably visualize the appearance of the surface or cut surface of anatomic structures. They can do this by implying the anatomic structure’s physical composition and clarifying its identity and 3-D shape. Current visualization methods are only capable of conveying detailed information about the orientation, internal structure, and other local properties of the anatomical objects for a typical individual, not for a particular patient. Although one can derive the shape of the individual patient’s object from CT or MRI, it is important to apply these illustrative techniques to those particular shapes. In this research patient-specific anatomical illustrations are created by model-guided texture synthesis (MGTS). Given 2D exemplar textures and model-based guidance information as input, MGTS uses exemplar-based texture synthesis techniques to create model-specific surface and solid textures. It consists of three main components. The first component includes a novel texture metamorphosis approach for creating interpolated exemplar textures given two exemplar textures. This component uses an energy optimization scheme derived from optimal control principles that utilizes intensity and structure information in obtaining the transformation. The second component consists of creating the model-based guidance information, such as directions and layers, for that specific model. This component uses coordinates implied by discrete medial 3D anatomical models (“m-reps”). The last component accomplishes exemplar-based texture synthesis of progressively changing textures on and inside the 3D models. It considers the exemplar textures from the first component and guidance information from the second component in synthesizing high-quality, high-resolution solid and surface textures. The methods in the MGTS framework are shown to be effective in creating patient-specific illustrations with a variety of textures for different anatomical models, such as muscles and bones.

Kalarickal, George J. (1998)

“Theory of Cortical Plasticity in Vision”
Under the direction of Jonathan A. Marshall
Electronic copy available

A theory of postnatal activity-dependent neural plasticity based on synaptic weight modification is presented. Synaptic weight modifications are governed by simple variants of a Hebbian rule for excitatory pathways and an anti-Hebbian rule for inhibitory pathways. The dissertation focuses on modeling the following cortical phenomena: long-term potentiation and depression (LTP and LTD); dynamic receptive field changes during artificial scotoma conditioning in adult animals; adult cortical plasticity induced by bilateral retinal lesions, intracortical microstimulation (ICMS), and repetitive peripheral stimulation; changes in ocular dominance during “classical” rearing conditioning; and the effect of neuropharmacological manipulations on plasticity. Novel experiments are proposed to test the predictions of the proposed models, and the models are compared with other models of cortical properties. The models presented in the dissertation provide insights into the neural basis of perceptual learning. In perceptual learning, persistent changes in cortical neuronal receptive fields are produced by conditioning procedures that manipulate the activation of cortical neurons by repeated activation of localized cortical regions. Thus, the analysis of synaptic plasticity rules for receptive field changes produced by conditioning procedures that activate small groups of neurons can also elucidate the neural basis of perceptual learning. Previous experimental and theoretical work on cortical plasticity focused mainly on afferent excitatory synaptic plasticity. The novel and unifying theme in this work is self-organization and the use of the lateral inhibitory synaptic plasticity rule. Many cortical properties, e.g., orientation selectivity, motion selectivity, spatial frequency selectivity, etc. are produced or strongly influenced by inhibitory interactions. Thus, changes in these properties could be produced by lateral inhibitory synaptic plasticity.
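
A minimal sketch of the kind of learning rules involved, with invented learning rates: co-activity strengthens an excitatory weight (Hebbian) and also strengthens a lateral inhibitory weight (anti-Hebbian on the inhibitory pathway), which tends to decorrelate the responses of the two units. These are generic textbook forms, not the dissertation's specific rule variants.

    def update_weights(w_exc, w_inh, pre, post, lr_e=0.01, lr_i=0.02):
        """One learning step for two pathways onto the same unit."""
        w_exc = w_exc + lr_e * pre * post          # Hebbian: correlated firing excites more
        w_inh = w_inh + lr_i * pre * post          # anti-Hebbian: correlated firing inhibits more
        return w_exc, w_inh

    def net_input(x_exc, x_inh, w_exc, w_inh):
        """Net drive to the unit: excitation minus lateral inhibition, rectified."""
        return max(0.0, w_exc * x_exc - w_inh * x_inh)

    w_e, w_i = 0.5, 0.1
    for _ in range(100):                           # repeated co-activation
        w_e, w_i = update_weights(w_e, w_i, pre=1.0, post=0.8)
    print(round(w_e, 2), round(w_i, 2))            # -> 1.3 1.7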

Kao, Chia-Yu (Katy) (2018)

“A Deep Learning Architecture for Histology Image Classification”
Under the direction of Leonard McMillan
Electronic copy available

Over the past decade, deep learning, a machine learning technique, has gained prominence in computer vision because of its outstanding performance in extracting semantics from images. However, in contrast to natural images, medical histology images have particular structures and contain a wide variety of textures. Analyzing histology images often involves the classification of tissue into states, where most of the tissues have similar texture and structure, but differ between states. These significant differences in histology images make transfer learning from natural images to histology images difficult and limit their use for histology image analysis.

This dissertation introduces two novel deep learning architectures that address these limitations in histology images. Both provide intermediate hints to aid the deep learning models. The first deep learning architecture is constructed based on stacked autoencoders with an additional layer, called a hyperlayer. The hyperlayer is an intermediate hint created to capture features of an image at different scales. The second architecture is a two-tiered Convolutional Neural Network (CNN), with an intermediate representation, called a pixel/region labeling. The pixel/region labels capture low-level details and textures within objects, providing a more complete semantic description that is used as input to a subsequent image classifier. The dissertation presents pixel/region labeling in both supervised and unsupervised settings and demonstrates that the proposed models generalize to other problems.

The experiments show that by adding the hyperlayer, the architecture substantially outperforms fine-tuned CNN models trained without an intermediate target. In addition, the experiments suggest that the advantages of the labeling classifier are threefold. First, it generalizes to other related vision tasks. Second, image classification does not require extremely accurate pixel labeling. The architecture is robust and not susceptible to noise. Lastly, the labeling model captures low-level texture information and converts it into valuable hints.

Katz, Robert (2002)

“Form Metrics for Interactive Rendering via Figural Models of Perception”
Under the direction of Stephen M. Pizer
Electronic copy available

This work presents a method for quantifying important form cues in 2D polygonal models. These cues are derived from multi-scale medial analysis, and they also draw upon several new form analysis methods developed in this work. Among these new methods are a means of using the Blum Medial Axis Transform for stable computing, a method to decompose objects into a set of parts that includes the ambiguity we find in human perception of objects, and the simultaneous application of both internal and external medial representations of objects. These techniques are combined to create a local saliency measure, a global saliency measure and a measure of object complexity. This work also demonstrates a new approach to simplifying complex polygonal models in order to accelerate interactive rendering. I propose that simplifying a model’s form, based on how it is perceived and processed by the human visual system, offers the potential for more effective simplifications. In this research, I suggest a means of understanding object simplification in these perceptual terms by creating a perceptually based scheduling of simplification operations as well as perceptual measures of the degree of simplification and the visual similarity between a simplified object and its original. A new simplification scheme is based on these measures, and then this perceptual scheme is compared via examples to a geometric simplification method.

Kehs, David R. (1978)

“A Routing Network for a Machine to Execute Reduction Languages”
Under the direction of Gyula A. Mago

A network of cells arranged to form a binary tree with connections between horizontally adjacent cells is investigated. Properties of shortest paths between leaf cells in such a network are established. Algorithms are developed to broadcast copies of a data item from one of the leaf cells to other leaf cells of the network. Using different types of guiding information stored in the cells, these algorithms are modified to avoid broadcasting copies to cells which do not need them. Analysis shows that, at best, these algorithms require Ω(n) steps to accomplish such patterns as reversal, transposition, or translation of the contents of n leaf cells. A theoretical bound of Θ(n / log n) steps is established for the problem of reversing the contents of n adjacent leaf cells. This result is generalized to obtain upper bounds for reversal of the contents of non-adjacent leaf cells and for arbitrary movement patterns which do not require multiple copies of data items.

Keyser, John (2000)

“Exact Boundary Evaluation for Curved Solids”
Under the direction of Dinesh Manocha
Electronic copy available

In this dissertation, I describe an algorithm for exact boundary evaluation for curved solids. I prove the thesis that accurate boundary evaluation for low-degree curved solids can be performed efficiently using exact computation. Specifically, I propose exact representations for surfaces, patches, curves, and points. All of the common and standard CSG primitives can be modeled exactly by these representations. I describe kernel operations that provide efficient and exact basic operations on the representations. The kernel operations include new algorithms for curve-curve intersection, curve topology, point generation, and point location. The representations and kernel operations form the basis for the exact boundary evaluation algorithm. I describe each step of the boundary evaluation algorithm in detail. I propose speedups that increase the efficiency of the boundary evaluation algorithm while maintaining exactness. Speedups fall into several categories, including methods based on lazy evaluation, quick rejection, simplified computation, and incorporation of hardware-supported floating-point methods. I also describe the types of degeneracies that can arise in boundary evaluation and perform a preliminary analysis of methods for eliminating degeneracies. The representations, kernel operations, boundary evaluation algorithm, and speedups have been implemented in an exact boundary evaluation system, ESOLID, that exactly converts CSG models specified by the Army Research Lab’s BRL-CAD system to B-rep models. ESOLID has been applied to a model of the Bradley Fighting Vehicle (provided courtesy of the Army Research Lab). I present results and analysis of ESOLID’s performance on data from that model. ESOLID evaluates the exact boundary of Bradley data while running no more than one to two orders of magnitude slower than a similar but inexact boundary evaluation system. ESOLID also correctly evaluates boundaries of objects on which this other system fails.
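The flavor of exact computation underlying such a system can be illustrated in a much simpler setting: with rational coordinates, a geometric predicate can be decided with no floating-point error. The sketch below is a generic segment-intersection test using Python's fractions module; it is an illustration of exact arithmetic in general, not ESOLID's algebraic machinery for curved geometry.

    from fractions import Fraction

    # Exact orientation predicate: sign of the cross product (b-a) x (c-a),
    # computed over rationals so no rounding can flip the sign.
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def segments_intersect(p1, p2, q1, q2):
        # proper crossings only (collinear/touching cases omitted for brevity)
        d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
        d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
        return (d1 * d2 < 0) and (d3 * d4 < 0)

    F = Fraction
    print(segments_intersect((F(0), F(0)), (F(1), F(1)),
                             (F(0), F(1)), (F(1), F(0))))   # True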

Kiapour, Mohammad Hadi (2015)

“Large Scale Visual Recognition of Clothing, People, and Styles”
Under the direction of Tamara L. Berg

Clothing recognition is a societally and commercially important yet extremely challenging problem due to large variations in clothing appearance, layering, style, body shape and pose. In this work, we propose novel computational vision approaches that learn to represent and recognize clothing items in images.

First, we present an effective method for parsing clothing in fashion photographs, where we label the regions of an image with their clothing categories. We then extend our approach to tackle the clothing parsing problem using a data-driven methodology: for a query image, we find similar styles from a large database of tagged fashion images and use these examples to recognize clothing items in the query. Along with our novel large fashion datasets, we also present intriguing initial results on using clothing estimates to improve human pose identification.

Second, we examine questions related to fashion styles and identifying the clothing elements associated with each style. We first design an online competitive style rating game called Hipster Wars to crowdsource reliable human judgments of clothing styles. We use this game to collect a new dataset of clothing outfits with associated style ratings for different clothing styles. Next, we build visual style descriptors and train models that are able to classify clothing styles and identify the clothing elements that are most discriminative for each style.

Finally, we define a new task, Exact Street to Shop, where our goal is to match a query real-world example of a garment item to the same exact garment in an online shop. This is an extremely challenging task due to the extreme visual discrepancy between street photos, which are taken of people wearing clothing in everyday uncontrolled settings, and online shop photos, which are captured by professionals in highly controlled settings. We introduce a novel large dataset for this application, collected from the web, and present a deep learning based similarity network that can compare clothing items across visual domains.

Kilpatrick, Paul J. (1976)

“The Use of a Kinesthetic Supplement in an Interactive Graphics System”
Under the direction of Frederick P. Brooks, Jr.

Project GROPE studies how force cues can be used to augment visual cues in an interactive computer graphics system. The GROPE-II system uses the master station of a servo-manipulator as a six-degree-of-freedom input-output device. The user views a picture of a virtual table, several virtual blocks, and a virtual slave manipulator on the CRT screen. The master station is used to manipulate the virtual blocks. Forces acting on the virtual slave are reflected at the master handgrip. An informal experiment was conducted to determine how useful the force cues are in comparison to several visual cues. The conclusion was that those cues are more useful than some widely-used visual cues, and that they improve the operational effectiveness of the visual system. This system also served as the starting point for an analysis of the human factors of more general kinesthetic display systems. The characteristics of a proposed kinesthetic display are delineated.

Kim, Hyo Jin (2018)

“Learning Adaptive Representations for Image Retrieval”
Under the direction of Jan-Michael Frahm
Electronic copy available

Content-based image retrieval is a core problem in computer vision. It has a wide range of applications, such as object and place recognition, digital library search, organizing image collections, and 3D reconstruction. However, robust and accurate image retrieval from a large-scale image collection still remains an open problem. For particular instance retrieval, challenges come not only from photometric and geometric changes between the query and the database images, but also from severe visual overlap with irrelevant images. On the other hand, large intra-class variation and inter-class similarity represent a major obstacle in semantic image retrieval.

This dissertation explores learning image representations that adaptively focus on specific image content to tackle these challenges. For this purpose, three kinds of factors that contribute to discriminating relevant and irrelevant image content are exploited: (1) mid-level features, (2) within-image contexts, and (3) types of query image (across-image context). Novel models for learning adaptive image representations based on each factor are introduced. Moreover, by training the model, the underlying task-relevant factors are automatically discovered from the data in a self-supervised manner. These include (1) a data-driven notion of good local mid-level features, (2) task-relevant image contexts with rich semantic information, and (3) the hierarchy of images. Experimental evaluations illustrate the superiority of the proposed methods in the applications of place recognition, scene categorization, particular object retrieval, and object recognition.

Kim, Namhoon (2019)

“Combining Hardware Management with Mixed-Criticality Provisioning in Multicore Real-Time Systems”
Under the direction of Jim Anderson
Electronic copy available

Safety-critical applications in cyber-physical domains such as avionics and automotive systems require strict timing constraints because loss of life or severe financial repercussions may occur if they fail to produce correct outputs at the right moment. We call such systems "real-time systems." When designing a real-time system, a multicore platform would be desirable to use because such platforms have advantages in size, weight, and power constraints, especially in embedded systems. However, the multicore revolution is having limited impact in safety-critical application domains. A key reason is the "one-out-of-m" problem: when validating real-time constraints on an m-core platform, excessive analysis pessimism can effectively negate the processing capacity of the additional m-1 cores so that only "one core’s worth" of capacity is available. The root of this problem is that shared hardware resources are not predictably managed. Two approaches have been investigated previously to address this problem: mixed-criticality analysis, which provisions less-critical software components less pessimistically, and hardware-management techniques, which make the underlying platform itself more predictable.

The goal of the research presented in this dissertation is to combine both approaches to reduce the capacity loss caused by contention for shared hardware resources in multicore platforms. Towards that goal, fundamentally new criticality-cognizant hardware-management tradeoffs must be explored. Such tradeoffs are investigated in the context of a new variant of a mixed-criticality framework, called MC2, that supports configurable criticality-based hardware management. This framework allows specific DRAM banks and areas of the last-level cache to be allocated to certain groups of tasks to provide criticality-aware isolation. MC2 is further extended to support the sharing of memory locations, which is required to realize the ability to support real-world workloads.

We evaluate the impact of combining mixed-criticality provisioning and hardware-management techniques with both micro-benchmark experiments and schedulability studies. In our micro-benchmark experiments, we evaluate each hardware-management technique and consider tradeoffs that arise when applying them together. The effectiveness of the overall framework in resolving such tradeoffs is investigated via large-scale overhead-aware schedulability studies. Our results demonstrate that mixed-criticality analysis and hardware-management techniques can be much more effective when applied together instead of alone.

Kim, Seon Joo (2008)

“Radiometric Calibration Methods from Image Sequences”
Under the direction of Marc Pollefeys
Electronic copy available

In many computer vision systems, an image of a scene is assumed to directly reflect the scene radiance. However, this is not the case for most cameras because the radiometric response function, which maps scene radiance to image brightness, is nonlinear. In addition, the exposure settings of the camera are adjusted (often in the auto-exposure mode) according to the dynamic range of the scene, changing the appearance of the scene in the images. The vignetting effect, which refers to the gradual fading-out of an image at points near its periphery, also contributes to changing the scene appearance in images. In this dissertation, I present several algorithms to compute the radiometric properties of a camera, which enable us to find the relationship between the image brightness and the scene radiance. First, I introduce an algorithm to compute the vignetting function, the response function, and the exposure values that fully explain the radiometric image formation process from a set of images of a scene taken with different and unknown exposure values. One of the key features of the proposed method is that the movement of the camera is not limited when taking the pictures, whereas most existing methods limit the motion of the camera. Then I present a joint feature tracking and radiometric calibration scheme that performs integrated radiometric calibration, in contrast to previous radiometric calibration techniques, which require correspondences as an input; this leads to a chicken-and-egg problem because precise tracking requires accurate radiometric calibration. By combining both into an integrated approach, we solve this chicken-and-egg problem. Finally, I propose a radiometric calibration method suited for a set of images of an outdoor scene taken at a regular interval over a period of time. This type of data is challenging because the illumination changes from image to image, causing the exposure of the camera to change, so the conventional radiometric calibration framework cannot be used for this type of data. The proposed methods are applied to radiometrically align images for seamless mosaics and 3D model textures, to create high dynamic range mosaics, and to build an adaptive stereo system.
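A minimal sketch of the radiometric model discussed above: image brightness is treated as a nonlinear response applied to exposure-scaled, vignetted scene radiance, and calibration amounts to inverting that chain. The gamma response and quadratic vignetting fall-off used here are placeholder assumptions, not the functions estimated in the dissertation.

    import numpy as np

    # Toy radiometric pipeline: brightness = f(exposure * vignette(r) * radiance).
    def inverse_response(brightness, gamma=2.2):
        # assume a simple gamma curve as the response function f
        return np.clip(brightness, 0.0, 1.0) ** gamma

    def vignetting(h, w, falloff=0.3):
        ys, xs = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r = np.hypot(ys - cy, xs - cx) / np.hypot(cy, cx)  # normalized radius
        return 1.0 - falloff * r**2                        # brighter in the center

    def estimate_radiance(image, exposure):
        h, w = image.shape
        irradiance = inverse_response(image)               # linearize brightness
        return irradiance / (vignetting(h, w) * exposure)  # undo vignette, exposure

    img = np.random.rand(64, 64)
    radiance = estimate_radiance(img, exposure=1 / 60.0)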

Kim, Sujeong (2015)

“Velocity-Space Reasoning for Interactive Simulation of Dynamic Crowd Behaviors”
Under the direction of Dinesh Manocha

The problem of simulating a large number of independent entities, interacting with each other and moving through a shared space, has received considerable attention in computer graphics, biomechanics, psychology, robotics, architectural design, and pedestrian dynamics. One of the major challenges is to simulate the dynamic nature, variety, and subtle aspects of real-world crowd motions. Furthermore, many applications require the capabilities to simulate these movements and behaviors at interactive rates.

In this thesis, we present interactive methods for computing trajectory-level behaviors that capture various aspects of human crowds. At a microscopic level, we address the problem of modeling the local interactions. First, we simulate dynamic patterns of crowd behaviors using Attribution theory and General Adaptation Syndrome theory from psychology. Our model accounts for permanent, stable disposition and the dynamic nature of human behaviors that change in response to the situation. Second, we model physics-based interactions in dense crowds by combining velocity-based collision avoidance algorithms with external forces. Our approach is capable of modeling both physical forces and interactions between agents and obstacles, while also allowing the agents to anticipate and avoid upcoming collisions during local navigation.

We also address the problem at the macroscopic level by modeling high-level aspects of human crowd behaviors. We present an automated scheme for learning and predicting individual behaviors from real-world crowd trajectories. Our approach is based on Bayesian learning algorithms combined with a velocity-based local collision avoidance model. We further extend our method to learn time-varying trajectory behavior patterns from pedestrian trajectories. These behavior patterns can be combined with local navigation algorithms to generate crowd behaviors that are similar to those observed in real-world videos. We highlight their performance for pedestrian navigation, architectural design, and generating dynamic behaviors for virtual environments.
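For illustration only, the sketch below shows the general shape of velocity-space reasoning: candidate velocities near an agent's preferred velocity are scored by time to collision against neighboring agents. The sampling scheme, cost weights, and parameter names are assumptions, not the models developed in this work.

    import numpy as np

    def time_to_collision(p, v, q, u, radius):
        # time until two discs of combined `radius` collide, or inf if they never do
        w, dv = p - q, v - u
        a, b, c = dv @ dv, 2 * (w @ dv), w @ w - radius**2
        disc = b * b - 4 * a * c
        if a < 1e-9 or disc < 0:
            return np.inf
        t = (-b - np.sqrt(disc)) / (2 * a)
        return t if t > 0 else np.inf

    def choose_velocity(p, v_pref, neighbors, radius=0.5, horizon=3.0, samples=100):
        best, best_cost = v_pref, np.inf
        rng = np.random.default_rng(0)
        for _ in range(samples):
            cand = v_pref + rng.uniform(-1, 1, size=2)      # perturb preferred velocity
            ttc = min((time_to_collision(p, cand, q, u, 2 * radius)
                       for q, u in neighbors), default=np.inf)
            penalty = 0.0 if ttc > horizon else (horizon - ttc)
            cost = np.linalg.norm(cand - v_pref) + 2.0 * penalty
            if cost < best_cost:
                best, best_cost = cand, cost
        return best

    v = choose_velocity(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                        neighbors=[(np.array([2.0, 0.0]), np.array([-1.0, 0.0]))])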

Kim, Theodore W. (2006)

“Physically-Based Simulation of Ice Formation”
Under the direction of Ming C. Lin
Electronic copy available

The geometric and optical complexity of ice has been a constant source of wonder and inspiration for scientists and artists. It is a defining seasonal characteristic, so modeling it convincingly is a crucial component of any synthetic winter scene. Like wind and fire, it is also considered elemental, so it has found considerable use as a dramatic tool in visual effects. However, its complex appearance makes it difficult for an artist to model by hand, so physically-based simulation methods are necessary. In this dissertation, I present several methods for visually simulating ice formation. A general description of ice formation has been known for over a hundred years and is referred to as the Stefan Problem. There is no known general solution to the Stefan Problem, but several numerical methods have successfully simulated many of its features. I will focus on three such methods in this dissertation: phase field methods, diffusion limited aggregation, and level set methods. Many different variants of the Stefan problem exist, and each presents unique challenges. Phase field methods excel at simulating the Stefan problem with surface tension anisotropy. Surface tension gives snowflakes their characteristic six arms, so phase field methods provide a way of simulating medium scale detail such as frost and snowflakes. However, phase field methods track the ice as an implicit surface, so it tends to smear away small-scale detail. In order to restore this detail, I present a hybrid method that combines phase fields with diffusion limited aggregation (DLA). DLA is a fractal growth algorithm that simulates the quasi-steady state, zero surface tension Stefan problem, and does not suffer from smearing problems. I demonstrate that combining these two algorithms can produce visual features that neither method could capture alone. Finally, I present a method of simulating icicle formation. Icicle formation corresponds to the thin-film, quasi-steady state Stefan problem, and neither phase fields nor DLA are directly applicable. I instead use level set methods, an alternate implicit front tracking strategy. I derive the necessary velocity equations for level set simulation, and also propose an efficient method of simulating ripple formation across the surface of the icicles.
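As a small illustration of the diffusion-limited-aggregation component mentioned above, the following grid-based sketch freezes random walkers when they touch a growing aggregate. It is a textbook toy for DLA itself, not the hybrid phase-field/DLA method of the dissertation, and the grid size and particle count are arbitrary.

    import numpy as np

    def grow_dla(size=51, particles=150, seed=0):
        rng = np.random.default_rng(seed)
        grid = np.zeros((size, size), dtype=bool)
        grid[size // 2, size // 2] = True              # frozen seed at the center
        steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        for _ in range(particles):
            y, x = rng.integers(0, size, size=2)       # release a random walker
            while True:
                dy, dx = steps[rng.integers(4)]
                y = int(np.clip(y + dy, 0, size - 1))
                x = int(np.clip(x + dx, 0, size - 1))
                # freeze the walker as soon as it touches the aggregate
                if grid[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].any():
                    grid[y, x] = True
                    break
        return grid

    ice = grow_dla()   # boolean image of a branching, frost-like cluster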

Kim, Yong-Jik (2003)

“Time Complexity Bounds for Shared-memory Mutual Exclusion”
Under the direction of James H. Anderson
Electronic copy available

Mutual exclusion algorithms are used to resolve conflicting accesses to shared resources by concurrent processes. The problem of designing such an algorithm is widely regarded as one of the "classic" problems in concurrent programming. Recent work on scalable shared-memory mutual exclusion algorithms has shown that the most crucial factor in determining an algorithm’s performance is the amount of traffic it generates on the processors-to-memory interconnect [23, 38, 61, 84]. In light of this, the RMR (remote-memory-reference) time complexity measure was proposed [84]. Under this measure, an algorithm’s time complexity is defined to be the worst-case number of remote memory references required by one process in order to enter and then exit its critical section. In the study of shared-memory mutual exclusion algorithms, the following fundamental question arises: for a given system model, what is the most efficient mutual exclusion algorithm that can be designed under the RMR measure? This question is important because its answer enables us to compare the cost of synchronization in different systems with greater accuracy. With some primitives, constant-time algorithms are known, so the answer to this question is trivial. However, with other primitives (e.g., under read/write atomicity), there are still gaps between the best known algorithms and lower bounds. In this dissertation, we address this question. The main thesis to be supported by this dissertation can be summarized as follows. The mutual exclusion problem exhibits different time-complexity bounds as a function of both the number of processes (N) and the number of simultaneously active processes (contention, k), depending on the available synchronization primitives and the underlying system model. Moreover, these time-complexity bounds are nontrivial, in that constant-time algorithms are impossible in many cases. In support of this thesis, we present a time-complexity lower bound of Ω(log N / log log N) for systems using atomic reads, writes, and comparison primitives, such as compare-and-swap. This bound is within a factor of Θ(log log N) of being optimal. Given that constant-time algorithms based on fetch-and-φ primitives exist, this lower bound points to an unexpected weakness of compare-and-swap, which is widely regarded as being the most useful of all primitives to provide in hardware. We also present an adaptive algorithm with Θ(min(k, log N)) RMR time complexity under read/write atomicity, where k is contention. In addition, we present another lower bound that precludes the possibility of an o(k) algorithm for such systems, even if comparison primitives are allowed. Regarding nonatomic systems, we present a Θ(log N) nonatomic algorithm, and show that adaptive mutual exclusion is impossible in such systems by proving that any nonatomic algorithm must have a single-process execution that accesses Ω(log N / log log N) distinct variables. Finally, we present a generic fetch-and-φ-based local-spin mutual exclusion algorithm with Θ(log_r N) RMR time complexity. This algorithm is "generic" in the sense that it can be implemented using any fetch-and-φ primitive of rank r, where 2 ≤ r < N. The rank of a fetch-and-φ primitive expresses the extent to which processes may "order themselves" using that primitive. For primitives that meet a certain additional condition, we present a Θ(log N / log log N) algorithm, which is time-optimal for certain primitives of constant rank.

Kohli, Luv (2013)

“Redirected Touching”
Under the direction of Frederick P. Brooks, Jr.
Electronic copy available

In immersive virtual environments, virtual objects cannot be touched. One solution is to use passive haptics–physical props to which virtual objects are registered. The result is compelling; when a user reaches out with a virtual hand to touch a virtual object, her real hand touches and feels a real object. However, for every virtual object to be touched, there must be an analogous physical prop. In the limit, an entire real-world infrastructure would need to be built and changed whenever a virtual scene is changed. Virtual objects and passive haptics have historically been mapped one-to-one. I demonstrate that the mapping need not be one-to-one. One can make a single passive real object provide useful haptic feedback for many virtual objects by exploiting human perception. I developed and investigated three categories of such techniques:

  1. Move the virtual world to align different virtual objects in turn with the same real object
  2. Move a virtual object into alignment with a real object
  3. Map real hand motion to different virtual hand motion, e.g., when the real hand traces a real object, the virtual hand traces a differently shaped virtual object.

The first two techniques were investigated for feasibility, and the third was explored more deeply. The first technique (Redirected Passive Haptics) enables users to touch multiple instances of a virtual object, with haptic feedback provided by a single real object. The second technique (The Haptic Hand) attaches a larger-than-hand virtual user interface to the non-dominant hand, mapping the currently relevant part of the interface onto the palm. The third technique (Redirected Touching) warps virtual space to map many differently shaped virtual objects onto a single real object, introducing a discrepancy between real and virtual hand motions. Two studies investigated the technique’s effect on task performance and its potential for use in aircraft cockpit procedures training. Users adapt rather quickly to real-virtual discrepancy, and after adaptation, users perform no worse with discrepant virtual objects than with one-to-one virtual objects. Redirected Touching shows promise for training and entertainment applications.
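A toy sketch of the space-warping idea behind such techniques: the virtual hand is pulled toward a displaced virtual target as the real hand approaches the real prop, so that contact happens simultaneously in both worlds. The linear blend and influence radius below are illustrative assumptions, not the warp used in the studies.

    import numpy as np

    def warp_hand(real_hand, real_target, virtual_target, influence_radius=0.5):
        """Blend in the real-to-virtual target offset as the hand nears the target."""
        offset = virtual_target - real_target
        d = np.linalg.norm(real_hand - real_target)
        w = max(0.0, 1.0 - d / influence_radius)   # 1 at the target, 0 far away
        return real_hand + w * offset

    # When the real hand reaches the real prop, the virtual hand reaches the
    # (differently placed) virtual object exactly.
    print(warp_hand(np.zeros(3), real_target=np.zeros(3),
                    virtual_target=np.array([0.1, 0.0, 0.0])))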

Koltun, Philip L. (1982)

“Evaluation of a Teaching Approach for Introductory Computer Programming”
Under the direction of Donald F. Stanat
Electronic copy available

An objective evaluation is presented of the applicability of Dijkstra’s ideas on program development methodology to the teaching of introductory programming students. The methodology emphasizes development of assertion-based correctness arguments hand-in-hand with the programs themselves and uses a special language to support that approach. Measures of program correctness and programming errors after the first two-thirds of the course indicate that with a batch implementation of the language, the methodology provides a significant advantage over a conventional teaching approach which emphasizes program testing and tracing and uses Pascal on a batch machine. However, that advantage was not maintained when the experimental subjects switched over to Pascal in the latter third of the experiment. A second set of comparisons demonstrated a significant and impressive advantage with respect to program correctness, programming errors, and time expenditure for students being taught with the conventional approach and using Pascal on a microcomputer over students being taught with the conventional approach and using Pascal on the batch machine. Furthermore, the microcomputer effect was noticeably beneficial for students of marginal ability.

Konstantinow, George, Jr. (1983)

“Automated Region of Interest Selection and Decontamination of Time Activity Curves From First Pass Radionuclide Angiocardiographic Data” (Biomedical Engineering and Mathematics, UNC-Chapel Hill)
Under the direction of Stephen M. Pizer

Crosstalk is contamination of radionuclide angiocardiographic (RNA) data inherent in the projection of tracer activity within cardiac structures onto the detector surface of a gamma camera. A new computer method is presented for decontaminating crosstalk by processing RNA time activity curves to describe characteristic activity over time in the isolated chambers of the central circulation. This description is based upon a catenary model for simulating blood flow; modeling tracer activity allows quantification of crosstalk in the field of view and subsequent decontamination of time activity curves from individual picture elements (pixels). The resulting RNA images and time activity curves are relatively free from crosstalk contamination and from them more accurate and reliable parameters of cardiac function can be evaluated. This dissertation reviews RNA data analysis and the problem of crosstalk, presents the theoretical foundation of the decontamination algorithm and considerations for its implementation, and gives the results of testing this automated procedure using clinical RNA studies. It is shown that the method produces functions which accurately reflect characteristic tracer activity in the isolated chambers of the central circulation and that these functions can be used for automated region of interest selection from the RNA data by estimating the spatial distribution of tracer in each underlying chamber alone. Also reviewed are the results of validation tests which showed that values of cardiac function parameters evaluated from decontaminated data in actual patient studies compared favorably with those values derived by accepted clinical processing methods. The closing discussion presents prospects for further enhancements to the computational method and for future research in RNA data processing.

Kosa, Martha J. (1994)

“Consistency Guarantees for Concurrent Shared Objects: Upper and Lower Bounds”
Under the direction of Jennifer Welch
Department of Computer Science Technical Report TR93-067
Electronic copy available

This dissertation explores the costs of providing consistency guarantees for concurrent shared objects in distributed computer systems. Concurrent shared objects are useful for interprocess communication. We study two forms of concurrent shared objects: physical shared objects and virtual shared objects. First we consider physical shared objects called (read/write) registers. Registers are useful for basic interprocess communication and are classified according to their consistency guarantees (safe, regular, atomic) and their capacity. A one-write algorithm is an implementation of a multivalued register from binary registers where each write of the simulated multivalued register modifies at most one binary register. A one-write algorithm optimizes writes, speeding up applications where writes outnumber reads. We present the first one-write algorithm for implementing a k-valued regular register from binary regular registers. The same algorithm using binary atomic registers is the first one-write algorithm implementing a k-valued atomic register. The algorithm requires C(k, 2) binary registers. Two improved lower bounds on the number of registers required by one-write regular implementations are given. The first lower bound holds for a restricted class and implies that our algorithm is optimal for this class. The second lower bound, 2k-1-[log k], holds for a more general class. The lower bounds also hold for two corresponding classes of one-write atomic implementations. Next we consider virtual shared objects from abstract data types, which are desirable from a software engineering viewpoint. We show that the cost (worst-case time complexity) for operations from abstract data types depends on the amount of synchrony among the processes sharing the objects, the consistency guarantee (sequential consistency, linearizability, hybrid consistency), and algebraic properties of the operations. Sequential consistency and linearizability are equally costly in perfectly synchronized systems under certain assumptions. Hybrid consistency is not necessarily cheaper than sequential consistency or linearizability. In perfectly synchronized systems, operations satisfying certain algebraic properties cost Ω(d) (d is the network message delay). To contrast, in systems with only approximately synchronized clocks, for certain classes of abstract data types, sequential consistency is cheaper than linearizability. Our results generalize known results for specific object types (read/write objects, queues, and stacks).

Koster, V. Alexis (1977)

“Execution Time and Storage Requirements of Reduction Language Programs on a Reduction Machine”
Under the direction of Gyula A. Mago

Reduction languages are applicative programming languages whose syntax and semantics are simple and rigorously defined. Their main features are the use of functional operations (primitive operation, regular composition, meta composition), the complete absence of imperative features, and the possibility of unbounded parallelism. A methodology is developed to analyze the execution time and the storage requirements of reduction language programs on a new type of computer called a reduction machine; it is applied to matrix multiplication programs and a tree traversing program. Variations in the degree of parallelism of programs allow one to investigate their space/time trade-off. Parallel programs execute very efficiently, but they usually require a large amount of storage. Reduction language operators that minimize data movement are introduced. A program that uses some of these operators multiplies n*n matrices in time O(n^2) and with space-time product C(n^4). A tree traversing program operates in time O(n) on trees of height n.

Kotliar, Michael S. (1989)

“The Right Stuff–Techniques for High Speed CMOS Circuit Synthesis”
Under the direction of Kye S. Hedlund
Department of Computer Science Technical Report TR89-046
Electronic copy available

This dissertation presents a method for improving the speed of combinational CMOS circuits. The original circuit is replaced by a faster circuit with equivalent function by restructuring the gates and their connections in the circuit. This restructuring has more flexibility and greater potential for reducing delays than other methods such as transistor sizing. However, this increased flexibility comes at a cost of increased problem complexity, so to solve this problem in an efficient manner, we use greedy heuristics that have been augmented with knowledge of the problem. The restructuring is accomplished by a set of transformations that are applied to the circuit in both a local and global manner. One of the transformations described, path resynthesis, is a new technique and its operation is described. The amount of speed-up possible using restructuring is illustrated using a set of representative circuits from a standard benchmark set. Since the reduction in delay is not without cost, the cost associated with the increased speed, namely increased circuit area, is also studied using the set of representative circuits.

Krajcevski, Pavel (2016)

“Improved Encoding for Compressed Textures”
Under the direction of Dinesh Manocha

For the past few decades, graphics hardware has supported mapping a two-dimensional image, or texture, onto a three-dimensional surface to add detail during rendering. The complexity of modern applications using interactive graphics hardware has created an explosion in the amount of data needed to represent these images. In order to alleviate the amount of memory required to store and transmit textures, graphics hardware manufacturers have introduced hardware decompression units into the texturing pipeline. Textures may now be stored compressed in memory and decoded at run-time in order to access the pixel data. In order to encode images to be used with these hardware features, many compression algorithms are run offline as a preprocessing step, oftentimes the most time-consuming step in the asset preparation pipeline.

This research presents several techniques to quickly serve compressed texture data. With the goal of interactive compression rates while maintaining compression quality, three algorithms are presented in the class of endpoint compression formats. The first uses intensity dilation to estimate compression parameters for low-frequency signal-modulated compressed textures and offers up to a 3X improvement in compression speed. The second, FasTC, shows that by estimating the final compression parameters, partition-based formats can choose an approximate partitioning and offer orders of magnitude faster encoding speed. The third, SegTC, shows additional improvement over selecting a partitioning by using a global segmentation to find the boundaries between image features. This segmentation offers an additional 2X improvement over FasTC while maintaining similar compressed quality.

Also presented is a case study in using texture compression to benefit two-dimensional concave path rendering. Compressing pixel coverage textures used for compositing yields both an increase in rendering speed and a decrease in storage overhead. Additionally, an algorithm is presented that uses a single layer of indirection to adaptively select the block size compressed for each texture, giving a 2X increase in compression ratio for textures of mixed detail. Finally, a texture storage representation that is decoded at runtime on the GPU is presented. The decoded texture is still compressed for graphics hardware but uses 2X fewer bytes for storage and network bandwidth.
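To give a concrete sense of the endpoint-compression idea discussed above, the toy encoder below compresses a 4x4 grayscale block to two endpoints plus 2-bit per-pixel indices into a palette interpolated between them. It is a didactic stand-in, not FasTC, SegTC, or any actual hardware format.

    import numpy as np

    def encode_block(block):
        lo, hi = float(block.min()), float(block.max())
        palette = np.linspace(lo, hi, 4)                         # endpoints + 2 interpolants
        indices = np.abs(block[..., None] - palette).argmin(-1)  # 2 bits per pixel
        return lo, hi, indices

    def decode_block(lo, hi, indices):
        palette = np.linspace(lo, hi, 4)
        return palette[indices]

    block = np.random.randint(0, 256, (4, 4)).astype(float)
    lo, hi, idx = encode_block(block)
    recon = decode_block(lo, hi, idx)   # error bounded by the palette spacing

Real formats spend most of their encoding time choosing better endpoints and partitions than this min/max heuristic, which is exactly the search the dissertation's techniques accelerate.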

Krishnan, Shankar (1997)

“Efficient and Accurate Boundary Evaluation Algorithms for Boolean Combinations of Sculptured Solids”
Under the direction of Dinesh Manocha
Electronic copy available

This dissertation presents techniques to effectively compute Boolean combinations of solids whose boundary is described using a collection of parametric spline surfaces. It also describes a surface intersection algorithm based on a lower dimensional formulation of the intersection curve that evaluates all its components and guarantees its correct topology. It presents algorithms and data structures to support the thesis that the lower dimensional formulation of the intersection curve is an effective representation to perform Boolean operations on sculptured solids. The thesis also analyzes different sources of problems associated with computing the intersection curve of two high degree parametric surfaces, and presents techniques to solve them. More specifically, the intersection algorithm utilizes a combination of algebraic and numeric techniques to evaluate the curve, thereby providing a solution with greater accuracy and efficiency. Given two parametric surfaces, the intersection curve can be formulated as the simultaneous solution of three equations in four unknowns. This is an algebraic curve in R^4. This curve is then projected into the domain of one of the surfaces using resultants. The intersection curve in the plane is represented as the singular set of a bivariate matrix polynomial. We present algorithms to perform loop and singularity detection, use curve tracing methods to find all the components of the intersection curve and guarantee its correct topology. The matrix representation, combined with numerical matrix computations like singular value decomposition, eigenvalue methods, and inverse iteration, is used to evaluate the intersection curve. We will describe a system BOOLE, a portable implementation of our algorithms, that generates the B-reps of solids given as a CSG expression in the form of trimmed Bezier patches. Given two solids, the system first computes the intersection curve between the two solids using our surface intersection algorithm. Using the topological information of each solid, we compute various components within each solid generated by the intersection curve and their connectivity. The component classification step is performed by using ray-shooting. Depending on the Boolean operation performed, appropriate components are put together to obtain the final solid. The system has been successfully used to generate B-reps for a number of large industrial models including a notional submarine storage and handling room (courtesy – Electric Boat Inc.) and Bradley fighting vehicle (courtesy – Army Research Labs). Each of these models is composed of over 8000 Boolean operations and is represented using over 50,000 trimmed Bezier patches. Our exact representation of the intersection curve and use of stable numerical algorithms facilitate an accurate boundary evaluation at every Boolean set operation and generation of topologically consistent solids.

Kum, Hye-Chung (Monica) (2004)

“Approximate Mining of Consensus Sequential Patterns”
Under the direction of Wei Wang and Dean Duncan
Electronic copy available

Sequential pattern mining finds interesting patterns in ordered lists of sets. The purpose is to find recurring patterns in a collection of sequences in which the elements are sets of items. Mining sequential patterns in large databases of such data has become an important data mining task with broad applications. For example, sequences of monthly welfare services provided over time can be mined for policy analysis. Sequential pattern mining is commonly defined as finding the complete set of frequent subsequences. However, such a conventional paradigm may suffer from inherent difficulties in mining databases with long sequences and noise. It may generate a huge number of short trivial patterns but fail to find the underlying interesting patterns. Motivated by these observations, I examine an entirely different paradigm for analyzing sequential data. In this dissertation:

• I propose a novel paradigm, multiple alignment sequential pattern mining, to mine databases with long sequences. A paradigm, which defines patterns operationally, should ultimately lead to useful and understandable patterns. By lining up similar sequences and detecting the general trend, the multiple alignment paradigm effectively finds consensus patterns that are approximately shared by many sequences.
• I develop an efficient and effective method, ApproxMAP (for APPROXimate Multiple Alignment Pattern mining), which clusters the database into similar sequences and then mines the consensus patterns in each cluster directly through multiple alignment.
• I design a general evaluation method that can empirically assess the quality of results produced by sequential pattern mining methods.
• I conduct an extensive performance study. The trend is clear and consistent. Together the consensus patterns form a succinct but comprehensive and accurate summary of the sequential data. Furthermore, ApproxMAP is robust to its input parameters, robust to noise and outliers in the data, scalable with respect to the size of the database, and outperforms the conventional paradigm.

I conclude by reporting on a successful case study. I illustrate some interesting patterns, including a previously unknown practice pattern, found in the NC welfare services database. The results show that the alignment paradigm can find interesting and understandable patterns that are highly promising in many applications.
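As a hint of the machinery involved, the sketch below is a dynamic-programming alignment between two sequences of itemsets with a Jaccard-style replacement cost, the kind of pairwise building block that sequence clustering and multiple alignment rest on. The cost model and names are illustrative assumptions, not ApproxMAP itself.

    # Pairwise alignment cost between two sequences of itemsets.
    def itemset_dist(a, b):
        a, b = set(a), set(b)
        return 1.0 - len(a & b) / len(a | b) if (a or b) else 0.0

    def align_cost(seq1, seq2, indel=1.0):
        n, m = len(seq1), len(seq2)
        dp = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * indel
        for j in range(1, m + 1):
            dp[0][j] = j * indel
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                dp[i][j] = min(dp[i-1][j-1] + itemset_dist(seq1[i-1], seq2[j-1]),
                               dp[i-1][j] + indel,      # delete an itemset
                               dp[i][j-1] + indel)      # insert an itemset
        return dp[n][m]

    # e.g., distance between two hypothetical monthly service-sequence records
    print(align_cost([{"A"}, {"B", "C"}], [{"A"}, {"B"}, {"D"}]))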

Kum, Sang-Uok (2008)

“Encoding of Multiple Depth Streams”
Under the direction of Ketan Mayer-Patel
Electronic copy available

With advances in hardware, interest in systems for capturing, representing, and transmitting dynamic real-world scenes has been growing. Examples of such systems include 3D immersive systems, tele-immersion video conferencing systems, 3D TV, and medical consultation and surgical training. These systems use multiple cameras for dynamic scene acquisition. One approach for scene acquisition is to use multiple video streams captured from varying viewpoints in order to form a dynamic light field and then to use image based rendering (IBR) techniques for generating virtual views of the scene. Another approach is to use the imagery from the multiple cameras to derive a 3D representation of the environment that is then transmitted with color. The dynamic light field approach handles complex scenes where 3D reconstruction would be difficult. Additionally, the captured video streams can be transmitted with little further processing, while deriving 3D information requires extensive processing before transmission. However, more cameras are needed for dynamic light fields since light fields need to be acquired more densely to be as effective. One common 3D representation for dynamic environments is multiple depth streams – streams of images with color and depth per pixel. A large number of depth streams are needed to properly represent a dynamic scene. Without compression, multiple depth streams, even sparsely sampled, require significant storage and transmission bandwidth. Fortunately, the strong spatial coherence between streams and temporal coherence between frames allows for an effective encoding of multiple depth streams. In this dissertation, I present an effective encoding algorithm for multiple depth streams that uses approximate per pixel depth information to predict color for each pixel.

Kumar, Subodh (1996)

“Interactive Rendering of Parametric Spline Surfaces”
Under the direction of Dinesh Manocha
Department of Computer Science Technical Report TR96-039
Electronic copy available

This dissertation presents techniques for fast rendering of parametric spline surfaces. It presents algorithms and data structures needed to support the thesis that real-time display of surfaces (represented parametrically) is indeed possible on current graphics systems that are optimized to display triangles. It analyzes the sources of bottleneck in surface rendering and derives techniques to display tens of thousands of Bezier surfaces, 10-20 times a second, by efficiently utilizing the graphics hardware. In a more general framework, this work demonstrates the effectiveness of using higher-order surfaces, as opposed to polygons. Analytic representation of surfaces retains information that is often lost in a static translation to polygons. We meaningfully use this analytic information to obtain better images than those generated from purely polygonal models. On the other hand, since current graphics systems are optimized for displaying triangles, we perform on-line triangulation to still be able to make effective use of the hardware capabilities of current systems. For specificity, we concentrate on trimmed Non-Uniform Rational B-Splines (NURBS) surfaces. The trimming curves that define the boundary of the surfaces may be NURBS or piecewise linear. The surfaces are off-line transformed to Bernstein basis and stored as trimmed Bezier patches. At rendering time, each Bezier patch is tessellated into triangles appropriate for the current rendering of the scene from the user’s view point. The overall algorithm first culls away the patches facing away from the user, or lying outside the view-frustum. The next step determines the sampling density required for each on-screen patch and its trimming curves, for a smooth display with Gouraud or Phong shading. Next, the sample points on the surface, and their normals, are evaluated and efficiently triangulated into strips. Finally the triangle strips are sent to the graphics pipeline. The algorithm exploits temporal coherence by performing only incremental triangulation of patches at each frame. The triangles used in a frame are cached in memory. For the next frame to be displayed, only a small update of detail is performed to maintain a smooth image. This dissertation presents efficient techniques to prevent tessellation cracks across surface boundaries. It also discusses issues in parallel implementation of the tessellation algorithm on multi-processor systems. This dissertation describes SPEED, a portable implementation of our algorithms, and reports its performance on some well-known graphics systems. While our algorithm is primarily designed for static models, many of the techniques are applicable to dynamic scenes as well. The performance of our prototype Bezier patch rendering system has been quite encouraging. We are able to render about five thousand Bezier patches, more than ten times a second, on a Silicon Graphics Onyx with four 200MHz R4400 CPUs and a Reality Engine II graphics pipeline. This is more than an order of magnitude improvement over the state of the art (e.g. SGI GL library). The major factors of this improvement include efficient visibility computation, tighter bound computation, efficient trimming, incremental triangulation and parallelism. Our results demonstrate that we can indeed generate high-fidelity images of large engineering models on current graphics systems, and obtain interactive rendering performance at the same time.
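The per-sample work at the heart of patch tessellation can be illustrated with de Casteljau evaluation of a bicubic Bezier patch, as in the sketch below. This is a generic textbook evaluation routine, not the SPEED system's implementation, and the control-point layout is an assumption for exposition.

    import numpy as np

    def de_casteljau(points, t):
        # repeated linear interpolation of a 1D Bezier control polygon
        pts = np.array(points, dtype=float)
        while len(pts) > 1:
            pts = (1 - t) * pts[:-1] + t * pts[1:]
        return pts[0]

    def eval_patch(ctrl, u, v):
        """ctrl: 4x4x3 array of control points; returns a point on the surface."""
        rows = np.array([de_casteljau(row, u) for row in ctrl])  # curve per row
        return de_casteljau(rows, v)                             # then across rows

    ctrl = np.random.rand(4, 4, 3)
    p = eval_patch(ctrl, 0.25, 0.75)

A tessellator evaluates many such (u, v) samples per patch, at a density chosen from the current view, and stitches the resulting points into triangle strips.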

Kuntz, Alan (2019)

“Integrating Optimization and Sampling for Robot Motion Planning with Applications in Healthcare”
Under the direction of Ron Alterovitz
Electronic copy available

Robots deployed in human-centric environments, such as a person’s home in a home-assistance setting or inside a person’s body in a surgical setting, have the potential to have a large, positive impact on human quality of life. However, for robots to operate in such environments they must be able to move efficiently while avoiding collisions with obstacles such as objects in the person’s home or sensitive anatomical structures in the person’s body. Robot motion planning aims to compute safe and efficient motions for robots that avoid obstacles, but home-assistance and surgical robots come with unique challenges that can make this difficult. For instance, many state-of-the-art surgical robots have computationally expensive kinematic models, i.e., it can be computationally expensive to predict their shape as they move. Some of these robots have hybrid dynamics, i.e., they consist of multiple stages that behave differently. Additionally, it can be difficult to plan motions for robots while leveraging real-world sensor data, such as point clouds.

In this dissertation, we demonstrate and empirically evaluate methods for overcoming these challenges to compute high-quality and safe motions for robots in home-assistance and surgical settings. First, we present a motion planning method for a continuum, parallel surgical manipulator that accounts for its computationally expensive kinematics. We then leverage this motion planner to optimize its kinematic design chosen prior to a surgical procedure. Next, we present a motion planning method for a 3-stage lung tumor biopsy robot that accounts for its hybrid dynamics and evaluate the robot and planner in simulation and in inflated porcine lung tissue. Next, we present a motion planning method for a home-assistance robot that leverages real-world, point-cloud obstacle representations. We then expand this method to work with a type of continuum surgical manipulator, a concentric tube robot, with point-cloud anatomical representations. Finally, we present a data-driven machine learning method for more accurately estimating the shape of concentric tube robots. By effectively addressing challenges associated with home assistance and surgical robots operating in human-centric environments, we take steps toward enabling robots to have a positive impact on human quality of life.
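For readers unfamiliar with sampling-based planning, the sketch below is a minimal RRT for a 2D point robot among circular obstacles. It only illustrates the family of planners this work builds on; the dissertation's planners handle far richer kinematics, hybrid dynamics, and point-cloud obstacle representations, and the bounds and parameters here are arbitrary assumptions.

    import random, math

    def collision_free(p, obstacles):
        return all(math.dist(p, c) > r for c, r in obstacles)

    def rrt(start, goal, obstacles, step=0.2, iters=2000, goal_tol=0.3):
        nodes = {start: None}                      # node -> parent
        for _ in range(iters):
            sample = goal if random.random() < 0.1 else \
                     (random.uniform(0, 10), random.uniform(0, 10))
            nearest = min(nodes, key=lambda n: math.dist(n, sample))
            d = math.dist(nearest, sample)
            if d == 0:
                continue
            new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
                   nearest[1] + step * (sample[1] - nearest[1]) / d)
            if collision_free(new, obstacles):
                nodes[new] = nearest
                if math.dist(new, goal) < goal_tol:
                    path = [new]                   # walk parents back to the start
                    while nodes[path[-1]] is not None:
                        path.append(nodes[path[-1]])
                    return path[::-1]
        return None

    path = rrt((1.0, 1.0), (9.0, 9.0), obstacles=[((5.0, 5.0), 1.5)])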

L

Ladd, Brian C. (2000)

“Lingua Graphica: A Language for Concise Expression of Graph-Based Algorithms”
Under the direction of John B. Smith

Graph-based data structures are at the heart of many computer programs; identifiable graphs, nodes, and links can be found in automated organization charts, airline scheduling programs, and even software engineering tools such as make and rcs. They are also found in many persistent, graph-based data stores from object-oriented databases to hypermedia storage systems. Traditional, procedural languages lack support for graph-based data structures, making the programmer’s job harder, changing the focus from the graph-based algorithm to the details of implementing an internal graph structure. A new language with strong support for graph-based data structures and algorithms is needed to address these shortcomings. Lingua Graphica, the language of graphs, is a specialized language for expressing graph-based algorithms and applying the algorithms to different graph-based data stores. It permits programmers to focus on the graph-based data structures at the center of their programs by providing a simple, powerful abstraction of graph-based data types: graphs, nodes, and links. Using a rule-based specification and an execution model with backtracking, Lingua Graphica permits programmers to specify “what” graph-based data objects are of interest rather than requiring them to specify “how” to find the objects. In addition to the new language, this research presents an interpreter for Lingua Graphica that runs atop the Web and the UNC Distributed Graph Store (DGS). Implementing a selected group of applications in the language shows the applicability of the new language to graph-based algorithms; these programs run atop both graph-based data stores without modification. The simplicity and performance of the system is demonstrated by comparing Lingua Graphica and traditional language implementations of graph-based algorithms interacting with the Web and DGS.

Larsen, E. Scott (2008)

“Modified Belief Propagation for Reconstruction of Office Environments”
Under the direction of Henry Fuchs
Electronic copy available

Belief Propagation (BP) is an algorithm that has found broad application in many areas of computer science. The range of these areas includes Error Correcting Codes, Kalman filters, particle filters, and – most relevantly – stereo computer vision. Many of the currently best algorithms for stereo vision benchmarks, e.g., the Middlebury dataset, use Belief Propagation. This dissertation describes improvements to the core algorithm to improve its applicability and usefulness for computer vision applications. A Belief Propagation solution to a computer vision problem is commonly based on specification of a Markov Random Field that it optimizes. Both Markov Random Fields and Belief Propagation have at their core some definition of nodes and "neighborhoods" for each node. Each node has a subset of the other nodes defined to be its neighborhood. In common usages for stereo computer vision, the neighborhoods are defined as a pixel’s immediate four spatial neighbors. For any given node, this neighborhood definition may or may not be correct for the specific scene. In a setting with video cameras, I expand the neighborhood definition to include corresponding nodes in temporal neighborhoods in addition to spatial neighborhoods. This amplifies the problem of erroneous neighborhood assignments. Part of this dissertation addresses the erroneous neighborhood assignment problem. Often, no single algorithm is always the best. The Markov Random Field formulation appears amenable to integration of other algorithms: I explore that potential here by integrating priors from independent algorithms. This dissertation makes core improvements to BP such that it is more robust to erroneous neighborhood assignments, is more robust in regions with inputs that are near-uniform, and can be biased in a sensitive manner towards higher level priors. These core improvements are demonstrated by the presented results: application to office environments, real-world datasets, and benchmark datasets.
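The core operation referenced above is the per-edge message update. The sketch below implements one standard min-sum message for a grid MRF with a truncated-linear smoothness cost; the cost choices are illustrative textbook defaults, not the modified updates proposed in the dissertation.

    import numpy as np

    def send_message(data_cost, incoming, smooth_weight=1.0, truncation=2.0):
        """data_cost: (L,) unary cost at the sending pixel.
        incoming: list of (L,) messages from the sender's other neighbors."""
        h = data_cost + sum(incoming)                  # aggregate local evidence
        labels = np.arange(len(h))
        # message[l_recv] = min over l_send of h[l_send] + V(l_send, l_recv)
        pairwise = smooth_weight * np.minimum(
            np.abs(labels[:, None] - labels[None, :]), truncation)
        msg = (h[:, None] + pairwise).min(axis=0)
        return msg - msg.min()                         # normalize for stability

    msg = send_message(np.random.rand(16), [np.zeros(16)] * 3)

Iterating such updates over all edges, then summing incoming messages with the data cost at each pixel, yields the disparity beliefs used for stereo labeling.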

Lauterbach, Christian (2010)

“Interactive Ray Tracing of Massive and Deformable Models”
Under the direction of Dinesh Manocha
Electronic copy available

Ray tracing is a fundamental algorithm used for many applications such as computer graphics, geometric simulation, collision detection and line-of-sight computation. Even though the performance of ray tracing algorithms scales with the model complexity, the high memory requirements and the use of static hierarchical structures pose problems with massive models and dynamic data-sets. We present several approaches to address these problems based on new acceleration structures and traversal algorithms. We introduce a compact representation for storing the model and hierarchy while ray tracing triangle meshes that can reduce the memory footprint by up to 80%, while maintaining high performance. As a result, we can ray trace massive models with hundreds of millions of triangles on workstations with a few gigabytes of memory. We also show how to use bounding volume hierarchies for ray tracing complex models with interactive performance. In order to handle dynamic scenes, we use refitting algorithms and also present highly-parallel GPU-based algorithms to reconstruct the hierarchies. In practice, our method can construct hierarchies for models with hundreds of thousands of triangles at interactive speeds. Finally, we demonstrate several applications that are enabled by these algorithms. Using deformable BVH and fast data parallel techniques, we introduce a geometric sound propagation algorithm that can run on complex deformable scenes interactively and orders of magnitude faster than comparable previous approaches. In addition, we also use these hierarchical algorithms for fast collision detection between deformable models and GPU rendering of shadows on massive models by employing our compact representations for hybrid ray tracing and rasterization.
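As an illustration of the refitting step mentioned above, the sketch below recomputes bounding boxes bottom-up after vertices move, leaving the tree topology unchanged. The node layout and binary-tree assumption are for exposition only, not the GPU data structures used in this work.

    import numpy as np

    class Node:
        def __init__(self, left=None, right=None, prim=None):
            self.left, self.right, self.prim = left, right, prim
            self.lo = self.hi = None          # axis-aligned bounding box corners

    def refit(node, vertices, triangles):
        if node.prim is not None:             # leaf: rebound one moved triangle
            tri = vertices[triangles[node.prim]]
            node.lo, node.hi = tri.min(axis=0), tri.max(axis=0)
        else:                                 # interior: merge children's boxes
            refit(node.left, vertices, triangles)
            refit(node.right, vertices, triangles)
            node.lo = np.minimum(node.left.lo, node.right.lo)
            node.hi = np.maximum(node.left.hi, node.right.hi)

    vertices = np.random.rand(4, 3)                    # deformed vertex positions
    triangles = np.array([[0, 1, 2], [1, 2, 3]])
    root = Node(left=Node(prim=0), right=Node(prim=1))
    refit(root, vertices, triangles)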

Le, Nguyen Tuong Long (2005)

“Investigating the Effects of Active Queue Management on the Performance of TCP Applications”
Under the direction of Kevin Jeffay
Electronic copy available

Congestion occurs in the Internet when queues at routers fill to capacity and arriving packets are dropped ("lost"). Today, congestion is controlled by an adaptive mechanism built into TCP that regulates the transmission rate of a TCP connection. This mechanism dictates that each connection should detect instances of packet loss, interpret such instances as an indication of congestion, and respond to loss by reducing its transmission rate. After the rate has been reduced, the connection probes for additional bandwidth by slowly increasing its transmission rate. This adaptive behavior, applied independently on each end system, has been one of the keys to the operational success of the Internet. Nevertheless, as the Internet has grown, networking researchers and the Internet Engineering Task Force (IETF) have expressed concern about the scalability of pure end systems’ congestion control. For example, pure end systems’ congestion control mechanism only detects and reacts to a congestion event after a router queue has overflowed. In response to these concerns, active queue management (AQM) has been proposed as a router-based mechanism for early detection of congestion inside the network. AQM algorithms execute on network routers and detect incipient congestion by monitoring some function of the instantaneous or average queue size in the router. When an AQM algorithm detects congestion on a link, the router signals end systems and provides an "early warning" of congestion. This signaling is performed either explicitly, by setting a specific bit in the header of a packet, or implicitly by dropping some number of arriving packets. Many AQM algorithms have been proposed in recent years but none of them have been thoroughly investigated under comparable (or realistic) conditions in a real network. Moreover, existing performance studies have concentrated on network-centric measures of performance and have not considered application-centric performance measures such as response time. In this dissertation, I investigated the effects of a large collection of existing AQM algorithms on the performance of TCP applications under realistic conditions in a real network. At a high level, the primary results are that many existing AQM algorithms do not perform as well as expected when they are used with packet dropping. Moreover, a detailed investigation of the classical random early detection, or RED algorithm, has uncovered a number of design flaws in the algorithm. I have proposed and investigated a number of modifications to RED and other algorithms and have shown that my variants significantly outperform existing algorithms. Overall, this dissertation shows promising results for AQM. When combined with packet marking, AQM algorithms significantly improve network and application performance over conventional drop-tail queues. Moreover, AQM enables network operators to run their networks near saturation levels with only modest increases in average response times. If packet marking is not possible, the dissertation also shows how a form of differential treatment of flows that I invented can achieve a similar positive performance improvement. Further, I also developed a new AQM algorithm that can balance between loss rate and queuing delay to improve the overall system performance.
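For context, the sketch below is a textbook-style RED queue: an exponentially weighted moving average of the queue length drives an early-drop probability between two thresholds. The parameters and simplifications (no idle-time adjustment or drop-count correction, drop rather than mark) are illustrative defaults, not the dissertation's evaluated variants.

    import random

    class RedQueue:
        def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
            self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, weight
            self.avg = 0.0
            self.queue = []

        def enqueue(self, packet):
            # EWMA of the instantaneous queue length
            self.avg = (1 - self.w) * self.avg + self.w * len(self.queue)
            if self.avg < self.min_th:
                drop = False
            elif self.avg >= self.max_th:
                drop = True                                 # forced drop (or mark)
            else:
                p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
                drop = random.random() < p                  # probabilistic early drop
            if not drop:
                self.queue.append(packet)
            return not drop                                 # True if accepted

        def dequeue(self):
            return self.queue.pop(0) if self.queue else None

    q = RedQueue()
    accepted = sum(q.enqueue(i) for i in range(100))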

Leaver-Fay, Andrew (2006)

“Capturing Atomic Interactions with a Graphical Framework in Computational Protein Design”
Under the direction of Jack Snoeyink
Electronic copy available

A protein’s amino acid sequence determines both its chemical and its physical structures, and together these two structures determine its function. Protein designers seek new amino acid sequences with chemical and physical structures capable of performing some function. The vast size of sequence space frustrates efforts to find useful sequences. Protein designers model proteins on computers and search through amino acid sequence space computationally. They represent the three-dimensional structures for the sequences they examine, specifying the location of each atom, and evaluate the stability of these structures. Good structures are tightly packed but are free of collisions. Designers seek a sequence with a stable structure that meets the geometric and chemical requirements to function as desired; they frame their search as an optimization problem. In this dissertation, I present a graphical model of the central optimization problem in protein design, the side-chain-placement problem. This model allows the formulation of a dynamic programming solution, thus connecting side-chain placement with the class of NP-complete problems for which certain instances admit polynomial-time solutions. Moreover, the graphical model suggests a natural data structure for storing the energies used in design. With this data structure, I have created an extensible framework for the representation of energies during side-chain-placement optimization and have incorporated this framework into the Rosetta molecular modeling program. I present one extension that incorporates a new degree of structural variability into the optimization process. I present another extension that includes a non-pairwise decomposable energy function, the first of its kind in protein design, laying the groundwork to capture aspects of protein stability that could not previously be incorporated into the optimization of side-chain placement.
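
When the residue interaction graph happens to be a tree, the optimization suggested by the graphical model reduces to a standard min-sum dynamic program: each subtree reports, for every rotamer of its root residue, the lowest energy it can achieve. The sketch below is a generic illustration of that idea with made-up energy tables; it is not Rosetta's interaction-graph code.

```python
# Min-sum dynamic programming over a tree-shaped residue interaction graph
# (generic illustration with made-up energies; not Rosetta's interaction graph).
# one_body[v][r]              : energy of assigning rotamer r to residue v
# two_body[(u, v)][(ru, rv)]  : interaction energy of rotamer ru at u with rv at v

def best_subtree_energy(v, children, one_body, two_body):
    """For each rotamer of residue v, the lowest energy achievable in v's subtree."""
    best = dict(enumerate(one_body[v]))
    for c in children.get(v, []):
        child_best = best_subtree_energy(c, children, one_body, two_body)
        for rv in best:
            best[rv] += min(child_best[rc] + two_body[(v, c)][(rv, rc)]
                            for rc in child_best)
    return best

def min_total_energy(root, children, one_body, two_body):
    return min(best_subtree_energy(root, children, one_body, two_body).values())

# Toy example: three residues in a chain 0-1-2, two rotamers each.
one_body = {0: [1.0, 2.0], 1: [0.5, 0.5], 2: [3.0, 0.0]}
pair = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}
two_body = {(0, 1): pair, (1, 2): pair}
children = {0: [1], 1: [2]}
print(min_total_energy(0, children, one_body, two_body))   # prints 2.5
```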

Lee, Huai-Ping (2012)

“Simulation-Based Joint Estimation of Body Deformation and Elasticity Parameters for Medical Image Analysis”
Under the direction of Ming C. Lin
Electronic copy available

Elasticity parameter estimation is essential for generating accurate and controlled simulation results for computer animation and medical image analysis. However, finding the optimal parameters for a particular simulation often requires iterations of simulation, assessment, and adjustment and can become a tedious process. Elasticity values are especially important in medical image analysis, since cancerous tissues tend to be stiffer. Elastography is a popular type of method for finding stiffness values by reconstructing a dense displacement field from medical images taken during the application of forces or vibrations. These methods, however, are limited by the imaging modality and the force exertion or vibration actuation mechanisms, which can be complicated for deep-seated organs. In this thesis, I present a novel method for reconstructing elasticity parameters without requiring a dense displacement field or a force exertion device. The method makes use of natural deformations within the patient and relies on surface information from segmented images taken on different days. The elasticity value of the target organ and boundary forces acting on surrounding organs are optimized with an iterative optimizer, within which the deformation is always generated by a physically-based simulator. Experimental results on real patient data are presented to show the positive correlation between recovered elasticity values and clinical prostate cancer stages. Furthermore, to resolve the performance issue arising from the high dimensionality of boundary forces, I propose to use a reduced finite element model to improve the convergence of the optimizer. To find the set of bases to represent the dimensions for forces, a statistical training based on real patient data is performed. I demonstrate the trade-off between accuracy and performance by using different numbers of bases in the optimization using synthetic data. A speedup of more than an order of magnitude is observed without sacrificing too much accuracy in recovered elasticity.

Lee, Joohwi (2015)

Group-Wise Image Registration and Its Application to Rodent Brain Analysis

Under the direction of Martin Styner

Electronic copy available

The use of structural magnetic resonance imaging (sMRI) in animal models such as rodent models is of increasing interest to medical research and pharmaceutical science communities. As advanced MR imaging and corresponding image analysis technology have been developed and used in human neuroimaging studies, it is common to apply these human image analysis techniques to analyze rodent brain images. However, due to differences in brain shape, organization, and appearance between species, the mere adaptation of image analysis techniques developed for the human brain can be suboptimal.

This dissertation proposes a group-wise image registration method that incorporates the minimum description length principle into its objective function. By using freely moving particles and the group-wise similarity metric, the proposed method registers a group of subjects without a designated template and thus avoids potential biases from the use of a single template. Furthermore, a multi-atlas segmentation method is developed from the group-wise registration method to improve the segmentation accuracy for low contrast images, common in animal model MRIs. The proposed segmentation method also takes the anatomical component of the rodent brain into account by combining a hierarchical active shape model based on multilevel component analysis and a label fusion method.  The performance of the proposed registration and segmentation methods is objectively compared with other methods using a set of evaluation metrics.

The application of the proposed methods is demonstrated within a novel cortical thickness analysis pipeline for the rodent brain. The pipeline consists of automatic image and shape processing and analysis components that measure the cortical thickness accurately using the definition of Laplace PDE-based thickness. The pipeline can be more accurate than a previous pipeline due to an explicit surface point correspondence step and a streamline-based thickness measurement step that achieves sub-voxel accuracy.

Lee, Shie-Jue (1990)

“CLIN: An Automated Reasoning System Using Clause Linking”
Under the direction of David A. Plaisted
Department of Computer Science Technical Report TR90-029

The efficiency of many theorem provers suffers from a phenomenon called duplication of instances of clauses. This duplication occurs in almost all theorem proving methods. Clause linking is a novel technique to avoid such duplication. An automated reasoning system called CLIN is an implementation of clause linking. A variety of strategies and refinements are implemented. We show the effectiveness of the system by comparing it with other major provers. The system is particularly effective for logic puzzles, near-propositional problems, set theory problems, and temporal logic theorems. We have also obtained proofs of some moderately hard mathematical theorems using this system. A top-level supervisor is provided to serve as a friendly interface between CLIN and the user. This interface accepts the user’s problems and directs the system to find proofs without intervention from the user. The supervisor works by trying a sequence of strategies in a specified order, somewhat like an expert system. It frees the user from the necessity of understanding the system or the merits of different strategies in order to use it. CLIN can work as a problem solver. It is able to find solutions for a given problem, such as constraint satisfaction problems of AI or logic puzzles, in a fully declarative way. CLIN presents a new paradigm of proving propositional temporal logic theorems. The proofs obtained are human-oriented and understandable. CLIN also provides potential applications to database design and axiom debugging.

Leler, William J. (1987)

“Specification and Generation of Constraint Satisfaction Systems”
Under the direction of Bharadwaj Jayaraman
Department of Computer Science Technical Report TR87-006
Electronic copy available

Constraint languages are declarative languages that have been used in various applications such as simulation, modeling, and graphics. Unfortunately, despite their benefits, existing constraint languages tend to be application-specific, have limited extensibility, and are difficult to implement. This dissertation presents a general-purpose computer language that makes it much easier to describe and implement constraint satisfaction systems. This language, called Bertrand, supports a rule-based programming methodology and also includes a form of abstract datatype. It is implemented using a new inference mechanism called augmented term rewriting, which is an extension of standard term rewriting. Using rules, a Bertrand programmer can describe new constraint satisfaction mechanisms, including equation solvers. Rules can also be used to define new types of objects and new constraints on these objects. Augmented term rewriting uses these rules to find a model that satisfies a set of user-specified constraints. The simple semantics of augmented term rewriting makes possible fast pattern matching, compilation, constant propagation, and concurrency. Consequently, Bertrand programs can be executed efficiently using an interpreter or compiled to run on a conventional or parallel processor. This dissertation surveys existing constraint satisfaction techniques and languages, and shows how they can be implemented in Bertrand. It also gives a precise operational semantics for augmented term rewriting. Techniques for efficient execution, including interpretation and compilation, are presented. Finally, examples are given using Bertrand to solve problems in algebra, such as word problems and computer aided engineering, and problems in graphics, such as computer aided design, illustration, and mapping.

Leontyev, Hennadiy (2010)

“Compositional Analysis Techniques For Multiprocessor Soft Real-Time Scheduling”
Under the direction of James Anderson
Electronic copy available

The design of systems in which timing constraints must be met (real-time systems) is being affected by three trends in hardware and software development. First, in the past few years, multiprocessor and multicore platforms have become standard in desktop and server systems and continue to expand in the domain of embedded systems. Second, real-time concepts are being applied in the design of general-purpose operating systems (like Linux) and attempts are being made to tailor these systems to support tasks with timing constraints. Third, in many embedded systems, it is now more economical to use a single multiprocessor instead of several uniprocessor elements; this motivates the need to share the increasing processing capacity of multiprocessor platforms among several applications supplied by different vendors and each having different timing constraints, in a manner that ensures that these constraints are met. These trends suggest the need for mechanisms that enable real-time tasks to be bundled into multiple components and integrated in larger settings. There is a substantial body of prior work on the multiprocessor schedulability analysis of real-time systems modeled as periodic and sporadic task systems. Unfortunately, these standard task models can be pessimistic if long chains of dependent tasks are being analyzed. In work that introduces less pessimistic and more sophisticated workload models, only partitioned scheduling is assumed so that each task is statically assigned to some processor. This results in pessimism in the amount of needed processing resources. We extend prior work on multiprocessor soft real-time scheduling and construct new analysis tools that can be used to design component-based soft real-time systems. These tools allow multiprocessor real-time systems to be designed and analyzed for which standard workload and platform models are inapplicable and for which state-of-the-art uniprocessor and multiprocessor analysis techniques give results that are too pessimistic.

Levoy, Marc S. (1989)

“Display of Surfaces From Volume Data”
Under the direction of Henry Fuchs
Department of Computer Science Technical Report TR89-022
Electronic copy available

Volume rendering is a technique for visualizing sampled scalar fields of three spatial dimensions without fitting geometric primitives to the data. A color and a partial transparency are computed for each data sample, and images are formed by blending together contributions made by samples projecting to the same pixel on the picture plane. Quantization and aliasing artifacts are reduced by avoiding thresholding during data classification and by carefully resampling the data during projection. This thesis presents an image-order volume rendering algorithm, demonstrates that it generates images of quality comparable to existing object-order algorithms, and offers several improvements. In particular, methods are presented for displaying isovalue contour surfaces and region boundary surfaces, for rendering mixtures of analytically defined geometry and sampled fields, and for adding shadows and textures. Three techniques for reducing rendering cost are also presented: hierarchical spatial enumeration, adaptive termination of ray tracing, and adaptive image sampling. Case studies from two applications are given: medical imaging and molecular graphics.
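
The blending of per-sample colors and opacities along a viewing ray follows the standard front-to-back compositing recurrence, sketched below for a single ray; classification and shading of the samples are assumed to have been done elsewhere, and the early-exit test corresponds to the adaptive termination of ray tracing mentioned above.

```python
# Front-to-back compositing of classified samples along one viewing ray
# (standard recurrence; classification and shading of samples assumed done elsewhere).
def composite_ray(samples, termination_alpha=0.99):
    """samples: (color, alpha) pairs ordered front to back; color is (r, g, b), alpha in [0, 1]."""
    acc_color = [0.0, 0.0, 0.0]
    acc_alpha = 0.0
    for color, alpha in samples:
        weight = (1.0 - acc_alpha) * alpha      # light still reaching this sample
        for k in range(3):
            acc_color[k] += weight * color[k]
        acc_alpha += weight
        if acc_alpha >= termination_alpha:      # adaptive termination of ray tracing
            break
    return tuple(acc_color), acc_alpha

# A mostly opaque red sample in front hides most of the blue sample behind it.
print(composite_ray([((1, 0, 0), 0.8), ((0, 0, 1), 1.0)]))
```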

Levy, Joshua H. (2008)

“Refinement of Object-Based Segmentation”
Under the direction of Stephen M. Pizer
Electronic copy available

Automated object-based segmentation methods calculate the shape and pose of anatomical structures of interest. These methods require modeling both the geometry and object-relative image intensity patterns of target structures. Many object-based segmentation methods minimize a non-convex function and risk failure due to convergence to a local minimum. This dissertation presents three refinements to existing object-based segmentation methods. The first refinement mitigates the risk of local minima by initializing the segmentation closely to the correct answer. The initialization searches pose- and shape-spaces for the object that best matches user specified points on three designated image slices. Thus-initialized m-rep based segmentations of the bladder from CT are frequently better than segmentations reported elsewhere. The second refinement is a statistical test on object-relative intensity patterns that allows estimation of the local credibility of segmentation. This test effectively identifies regions with local segmentation errors in m-rep based segmentations of the bladder and prostate from CT. The third refinement is a method for shape interpolation that is based on changes in the position and orientation of samples and that tends to be more shape-preserving than a competing linear method. This interpolation can be used with dynamic structures. It can also be used to transform an atlas image to a target image and thus to transfer the segmentations of other nearby anatomical regions from the atlas to the target image.

Li, Haohan (2013)

“Scheduling Mixed-Criticality Real-Time Systems”
Under the direction of Sanjoy Baruah

This dissertation addresses the following question in the design of scheduling policies and resource allocation mechanisms for contemporary embedded systems that are implemented on integrated computing platforms: in a multitasking system where it is hard to estimate a task’s worst-case execution time, how do we assign task priorities so that 1) the safety-critical tasks are assured to complete within a specified length of time, and 2) the non-critical tasks are also guaranteed to complete within a predictable length of time if no task actually executes for its worst-case execution time? This dissertation tries to answer this question based on the mixed-criticality real-time system model, which defines multiple worst-case execution scenarios and demands a scheduling policy that provides provable timing guarantees to each level of critical tasks with respect to each type of scenario. Two scheduling algorithms are proposed to serve this model. The OCBP algorithm is aimed at discrete one-shot tasks with an arbitrary number of criticality levels. The EDF-VD algorithm is aimed at recurrent tasks with two criticality levels (safety-critical and non-critical). Both algorithms are proved to optimally minimize the percentage of computational resource waste within two criticality levels. More in-depth investigations into the relationship among the computational resource requirements of different criticality levels are also provided for both algorithms.
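
For the dual-criticality case, the flavor of EDF-VD can be conveyed with the commonly published utilization test and virtual-deadline assignment for implicit-deadline sporadic tasks. The function below is included purely for illustration under those assumptions; the dissertation should be consulted for the exact analysis and optimality arguments.

```python
# Illustrative EDF-VD utilization test and virtual-deadline assignment for
# dual-criticality implicit-deadline sporadic tasks (commonly published form;
# see the dissertation for the precise analysis and proofs).
def edf_vd(tasks):
    """tasks: list of dicts with keys 'T' (period), 'C_LO', 'C_HI', 'crit' ('LO'|'HI').
    Returns a scaling factor x for HI-task virtual deadlines, or None if the test fails."""
    u_lo_lo = sum(t['C_LO'] / t['T'] for t in tasks if t['crit'] == 'LO')
    u_hi_lo = sum(t['C_LO'] / t['T'] for t in tasks if t['crit'] == 'HI')
    u_hi_hi = sum(t['C_HI'] / t['T'] for t in tasks if t['crit'] == 'HI')
    if u_lo_lo >= 1.0:
        return None
    x = u_hi_lo / (1.0 - u_lo_lo)          # candidate deadline-scaling factor
    if x * u_lo_lo + u_hi_hi <= 1.0:       # sufficient condition for correctness
        for t in tasks:
            if t['crit'] == 'HI':
                t['virtual_deadline'] = x * t['T']   # used while in LO-criticality mode
        return x
    return None

tasks = [{'T': 10, 'C_LO': 2, 'C_HI': 2, 'crit': 'LO'},
         {'T': 10, 'C_LO': 2, 'C_HI': 5, 'crit': 'HI'}]
print(edf_vd(tasks))   # scaling factor if the test passes, else None
```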

Li, Peng (2014)

“Replication and Placement for Security in Distributed Systems”
Under the direction of Mike Reiter

In this thesis we show how the security of replicated objects in distributed systems, in terms of either the objects’ confidentiality or availability, can be improved through the placement of objects’ replicas so as to carefully manage the nodes on which objects’ replicas overlap.

In the first part of this thesis we present StopWatch, a system that defends against timing-based side-channel attacks that arise from coresidency of victims and attackers in infrastructure-as-a-service clouds and threaten the confidentiality of victims’ data. StopWatch triplicates each cloud-resident guest virtual machine (VM) and places replicas so that the three replicas of a guest VM are coresident with nonoverlapping sets of (replicas of) other VMs. StopWatch uses the timing of I/O events at a VM’s replicas collectively to determine the timings observed by each one or by an external observer, so that observable timing behaviors are similarly likely in the absence of any other individual, coresident VM. We detail the design and implementation of StopWatch in Xen, evaluate the factors that influence its performance, demonstrate its advantages relative to alternative defenses against timing side-channels with commodity hardware, and address the problem of placing VM replicas in a cloud under the constraints of StopWatch so as to still enable adequate cloud utilization.

We then explore the problem of placing object replicas on nodes in a distributed system to maximize system availability (the number of objects that remain available) when node failures occur. In our model, failing (the nodes hosting) a given threshold of replicas is sufficient to disable each object, and the adversary selects which nodes to fail to minimize the number of objects that remain available. We specifically explore placement strategies based on combinatorial structures called t-packings, and propose an efficient algorithm for computing combinations of t-packings that maximize availability in this model. We compare the availability offered by our approach to that offered by random replica placement, owing to the popularity of the latter approach in previous work. After quantifying the availability offered by random replica placement in our model, we show that our combinatorial strategy yields object replica placements with better availability than random replica placement for many realistic parameter values.

Li, Weizi (2019)

“Simulation and Learning for Urban Mobility: City-Scale Traffic and Autonomous Driving”

Under the direction of Ming C. Lin

Electronic copy available

Traffic congestion has become one of the most critical issues worldwide. The extra costs due to traffic gridlock are approximately 160 billion dollars in the United States, more than 13 billion pounds in the United Kingdom, and over one trillion dollars across the globe at the moment. As more metropolitan areas will experience increasingly severe traffic conditions, the ability to analyze, understand, and improve traffic dynamics is thus critical. This dissertation is an effort towards this goal. In particular, I propose techniques that combine simulation and learning to tackle the problem of traffic from two aspects: city-scale traffic and autonomous driving.   

Traffic is formed of individual vehicles propagating through space and time. Its dynamics, however, can only be studied in an aggregate form. In this dissertation, I address the problem of efficient and accurate estimation and reconstruction of city-scale traffic, as the reconstructed traffic can be used to analyze congestion causes, identify network bottlenecks, and experiment with novel policies. This problem has proven to be challenging for two reasons: first, traffic conditions, which depend on individual drivers, are intrinsically stochastic; second, the availability and quality of traffic data vary to a great extent. In order to estimate and reconstruct traffic, data are necessary. Traditional traffic monitoring systems, which exist mostly on highways and major roads, are insufficient for recovering city-scale traffic. GPS data, on the other hand, provide much broader coverage of a city and thus are a more promising source for the task. Nevertheless, such data are limited by their spatial-temporal sparsity in practice. I have developed a framework to statically estimate and dynamically reconstruct traffic over a large-scale road network by addressing the sparsity issue within GPS traces.

Recent advancements in automation and their implications for socio-economic benefits have prompted widespread research on autonomous vehicles. Autonomous driving is a complex task considering the dynamics of an environment and the lack of accurate definitions of various driving behaviors. Thus, obtaining a robust control policy for driving remains challenging, as it requires an effective policy architecture, an efficient learning mechanism, and training data from a variety of scenarios, including rare situations (e.g., accidents). I have developed a framework, named ADAPS, for producing robust control policies for autonomous driving. ADAPS consists of a memory-enabled hierarchical control policy and two simulation platforms that are used to generate and analyze accidents while automatically producing labeled training data. Additionally, ADAPS offers a more efficient online learning mechanism, which can reduce the number of iterations required to learn a policy.

Lifshitz, Lawrence M. (1987)

“Image Segmentation via Multiresolution Extrema Following”
Under the direction of Stephen M. Pizer
Department of Computer Science Technical Report TR87-012
Electronic copy available

The aim of this research has been to create a computer algorithm to segment greyscale images into regions of interest (objects). These regions can provide the basis for scene analysis (including shape parameter calculation) or surface-based shaded graphics display. The algorithm creates a tree structure for image description by defining a linking relationship between pixels in successively blurred versions of the initial image. This description characterizes the image in terms of nested light and dark regions. The algorithm can theoretically work in any number of dimensions; the implementation works in one, two, or three dimensions. Starting from a mathematical model describing the technique, this research has shown that:

  • by explicitly addressing the problems peculiar to the discreteness of computer representations the segmentation described by the mathematical model can be successfully approximated.
  • although the image segmentation performed sometimes contradicts semantic and visual information about the image (e.g., part of the kidney is associated with the liver instead of the rest of the kidney), simple interactive post-processing can often improve the segmentation results to an extent sufficient to segment the region desired.
  • the theoretical nesting properties of regions, originally thought to hold in all circumstances, do not necessarily apply to all pixels in a region.

The interactive post-processing developed selects regions from the descriptive tree for display in several ways: pointing to a branch of the image description tree (which is shown on a vector display), specifying by sliders the range of scale and/or intensity of all regions which should be displayed, and pointing (on the original image) to any pixel in the desired region. The algorithm has been applied to about 15 CT images of the abdomen. While performance is frequently good, work needs to be done to improve performance and to identify and extend the range of images the algorithm will segment successfully.

Lin, Wei-Jyh (1991)

“Boundary Estimation in Ultrasound Images”
Under the direction of Stephen M. Pizer
Department of Computer Science Technical Report TR91-037
Electronic copy available

A Bayesian approach is proposed for characterizing boundaries in ultrasound images. This approach determines the boundary probabilities by estimating the posterior means of a statistical model for boundary believability and normal direction using a Gibbs prior. There are two essential components in the Bayesian formulation: The likelihood function and the prior. The likelihood function is based on the outputs of specially designed filters. These filters take into account ultrasound image noise properties and produce reliable edge measurements. Traditional Gibbs priors can only capture the smoothness properties of local shape. A method was designed to include global shape properties in the Gibbs prior, an important step in producing robust boundary estimation from noisy images. This method uses a pyramid algorithm and multiscale edge filters for global boundary analysis. The results of the global analysis are incorporated into the Gibbs prior using a data augmentation scheme, thus efficiently handling long range interactions. This approach was implemented and applied to a variety of ultrasound images with promising results. The results show that the global attributes help close gaps and suppress the spurious edge response of speckle spots. This approach can be extended to three dimensions to produce surface believability and surface normals, which could be the input to a three-dimensional rendering algorithm. Each component of the solution algorithm has been carefully designed to allow a parallel implementation, an important design criterion for the algorithm to be useful in real-time three-dimensional ultrasound image display.

Lincoln, Peter (2017)

Low Latency Displays for Augmented Reality

Under the direction of Henry Fuchs & Greg Welch

Electronic copy available

The primary, long-standing goal for Augmented Reality (AR) is bringing the real and virtual together into a common space. Maintaining the illusion that these two worlds coexist, however, requires keeping the pose registration among related objects spatially and temporally consistent (i.e., objects at the same location should stay connected despite changes in user or object pose). The greatest source of registration error in this regard is latency, the delay between when a pose changes and when the display changes in response, which breaks temporal consistency. Furthermore, the real world varies greatly in brightness, ranging from bright sunlight to deep shadows. A compelling AR system must therefore also keep its virtual objects’ appearance spatially and temporally consistent with the real world.

     This dissertation presents new methods for low latency displays, primarily in the context of Optical See-Through AR Head-Mounted Displays, which have the least tolerance for latency in displays, focusing on temporal consistency in registration, High-Dynamic Range (HDR) color support, and spatial and temporal consistency in brightness:

  1. For registration temporal consistency, the primary insight is breaking the conventional display paradigm: computers render imagery, frame by frame, and then transmit it to the display for emission. Instead, the display must also contribute towards rendering by performing a post-rendering, post-transmission warp of the computer-supplied imagery in the display hardware. By correcting for latency in the processing and transmission of the imagery using the latest tracking information in the display, much of the system latency can be short-circuited. Furthermore, the low latency display must support ultra-high frequency (multiple kHz) updates in order to take full advantage of the imagery updates.
  2. For HDR color support in low latency displays, the primary insight is the development of new display modulation techniques. Modern ultra-high frequency displays generally emit binary output, which requires modulation to produce analog values. Conventional modulation would break the low latency guarantees, and modulating bright LED illuminators quickly enough to support HDR at ultra-high display frequencies exceeds their capabilities; thus one must directly synthesize the necessary variation in brightness.
  3. For brightness spatial and temporal consistency, the primary insight is integrating HDR light sensors into the display hardware: the same processes which correct the pose and generate the HDR output can also modify it in response to the spatially sensed brightness of the real world.

     This dissertation presents implementations, results (both visual and performance), and future steps towards improving the methods above.

Lipscomb, James S. (1981)

“Three-Dimensional Cues for a Molecular Computer Graphics System”
Under the direction of Frederick P. Brooks, Jr.
Electronic copy available

The perception of depth in three-dimensional objects presented on a two-dimensional display can be enhanced by many techniques. This thesis develops programs for implementing a number of such techniques flexibly and efficiently, including rotation, intensity modulation, and stereo. These techniques are applied to the GRIP-75 molecular computer graphics system with which crystallographers fit proteins or nucleic acids to their electron density maps. A fragment of a molecule is superimposed on a contoured electron density map and presented to the user for manual manipulation. My thesis is that motion decomposition pervades surprisingly many aspects of our computer-graphics-guided manual manipulation, that smooth rotation appears to be the single best depth cue currently available, that a variety of multiple image effects, due to disparity between refresh and update rates, can plague moving pictures, that the good 3-D perception provided by stereo can hinder manipulation, and that achieving orthogonality among functions is surprisingly important and difficult. The programs run on a satellite graphics system consisting of a DEC PDP-11/45 computer driving a Vector General display system, and a high-speed selector channel which provides communication with a time-shared host computer, an IBM System/360 Model 75. The programs developed are written in PL/I and reside mainly in the satellite.

Liu, Alan Ve-Ming (1998)

“3D/2D Registration and Reconstruction in Image-Guided Surgery”
Under the direction of Stephen M. Pizer

Percutaneous surgical guidance by 2D fluoroscopic images is impeded by the incomplete information available from projected views. 3D image guided surgery can provide better comprehension of 3D anatomy and present objects not apparent by fluoroscopy. However, current solutions for 3D surgical guidance typically rely on a small number of external markers or landmarks. This is convenient but can lead to navigational errors at the target site. This dissertation explores the feasibility of using the digital images of anatomical structures as navigational references for surgical guidance. A register-and-reconstruct paradigm is proposed. Fluoroscopic images are registered with preoperative patient CT. Given registered bi-plane images, the instrument pose can be reconstructed. My 3D/2D registration algorithm uses the medial curves of tubular structures and has the advantage of mathematical and implementation simplicity. Theoretical analysis and experiments show this algorithm to be robust and accurate. A method for synthesizing tubular structures extends the algorithm, permitting arbitrary anatomical structures to be used. The synthesis is a novel application of the parallax effect on digitally reconstructed radiographs. A separate algorithm was developed for stereotactic instrument pose recovery from projection images. The method is based on the geometric properties of bi-plane imaging. An error model was developed to analyze performance characteristics. Both theoretical analysis and experiments using the Visible Human dataset and skull phantoms indicate that the register-and-reconstruct paradigm is accurate to at least 1.5mm, well within the requirements of the driving problem of percutaneous rhizotomy.

Liu, Cong (2013)

“Efficient Design, Analysis, and Implementation of Complex Multiprocessor Real-Time Systems”
Under the direction of Jim Anderson

The advent of multicore technologies is a fundamental development that is impacting software design processes across a wide range of application domains, including an important category of such applications, namely, those that have real-time constraints. This development has led to much recent work on multicore-oriented resource management frameworks for real-time applications. Unfortunately, most of this work focuses on simple task models where complex but practical runtime behaviors among tasks do not arise. In practice, however, many factors such as programming methodologies, interactions with external devices, and resource sharing often result in complex runtime behaviors that can negatively impact timing correctness. The goal of this dissertation is to support such more realistic and complex applications in multicore-based real-time systems. The thesis of this dissertation is: Capacity loss (i.e., over provisioning) can be significantly reduced on multiprocessors while providing soft and hard real-time guarantees for real-time applications that exhibit complex runtime behaviors such as self-suspensions, graph-based precedence constraints, non-preemptive sections, and parallel execution segments by designing new real-time scheduling algorithms and developing new schedulability tests. The above thesis is established by developing new multiprocessor scheduling algorithms and schedulability tests that are sufficient to provide real-time guarantees for task systems containing each of the above mentioned complex runtime behaviors individually and in combination. First, we present several efficient multiprocessor schedulability tests for both soft and hard real-time sporadic self-suspending task systems. For the past 20 years, the unsolved problem of supporting real-time systems with suspensions has impeded research progress on many related research topics such as analyzing and implementing I/O-intensive applications in multiprocessor systems. The impact of this work is demonstrated by the fact that it provides a first set of practically efficient solutions that can fundamentally solve this problem. To better handle graph-based precedence constraints, we propose a scheduling algorithm that can achieve no capacity loss for scheduling general task graphs on a multiprocessor while providing soft real-time correctness guarantees. We also extend this result to support task graphs in a distributed system containing multiple multiprocessor-based clusters. The impact of this work is demonstrated by the fact that we closed a problem that stood open for 12 years. By achieving no capacity loss, our research results can provide a set of analytically correct and practically efficient methodologies to designers of real-time systems that execute task graphs, such as signal processing and multimedia application systems. Furthermore, besides handling runtime behaviors such as self-suspensions and graph-based precedence constraints independently, we also investigate how to support sophisticated real-time task systems containing mixed types of complex runtime behaviors. Specifically, we derive a sufficient schedulability test for soft real-time sporadic task systems in which non-preemptive sections, self-suspensions, and graph-based precedence constraints co-exist. We present a method for transforming such a sophisticated task system into a simpler sporadic task system with only self-suspensions. 
The transformation allows maximum response-time bounds derived for the transformed sporadic self-suspending task system to be applied to the original task system. Finally, we present schedulability analysis for sporadic parallel task systems. The proposed analysis shows that such systems can be efficiently supported on multiprocessors with bounded response times. In particular, on a two-processor platform, no capacity loss results for any parallel task system. Despite this special case, on a platform with more than two processors, capacity loss is fundamental. By observing that capacity loss can be reduced by restructuring tasks to reduce intra-task parallelism, we propose optimization techniques that can be applied to determine such a restructuring.

Liu, Guodong (2007)

“A Data-driven, Piecewise Linear Approach to Modeling Human Motions”
Under the direction of Leonard McMillan
Electronic copy available

Motion capture, or mocap, is a prevalent technique for capturing and analyzing human articulations. Nowadays, mocap data are becoming one of the primary sources of realistic human motions for computer animation as well as education, training, sports medicine, video games, and special effects in movies. As more and more applications rely on high-quality mocap data and huge amounts of mocap data become available, there are imperative needs for more effective and robust motion capture techniques, better ways of organizing motion databases, as well as more efficient methods to compress motion sequences. I propose a data-driven, segment-based, piecewise linear modeling approach to exploit the redundancy and local linearity exhibited by human motions and describe human motions with a small number of parameters. This approach models human motions with a collection of low-dimensional local linear models. I first segment motion sequences into subsequences, i.e. motion segments, of simple behaviors. Motion segments of similar behaviors are then grouped together and modeled with a unique local linear model. I demonstrate this approach’s utility in four challenging driving problems: estimating human motions from a reduced marker set; missing marker estimation; motion retrieval; and motion compression.

Liu, Jinze (2006)

“New Approaches for Clustering High Dimensional Data.”
Under the direction of Wei Wang
Electronic copy available

Clustering is one of the most effective methods for analyzing datasets that contain a large number of objects with numerous attributes. Clustering seeks to identify groups, or clusters, of similar objects. In low dimensional space, the similarity between objects is often evaluated by summing the difference across all of their attributes. High dimensional data, however, may contain irrelevant attributes which mask the existence of clusters. The discovery of groups of objects that are highly similar within some subsets of relevant attributes becomes an important but challenging task. My thesis focuses on various models and algorithms for this task. We first present a flexible clustering model, namely OP-Cluster (Order Preserving Cluster). Under this model, two objects are similar on a subset of attributes if the values of these two objects induce the same relative ordering of these attributes. The OP-Clustering algorithm has been demonstrated to be useful for identifying co-regulated genes in gene expression data. We also propose a semi-supervised approach to discover biologically meaningful OP-Clusters by incorporating existing gene function classifications into the clustering process. This semi-supervised algorithm yields only OP-Clusters that are significantly enriched by genes from specific functional categories. Real datasets are often noisy. We propose a noise-tolerant clustering algorithm for mining frequently occurring itemsets, called approximate frequent itemsets (AFI). Both the theoretical and experimental results demonstrate that our AFI mining algorithm has higher recoverability of real clusters than any other existing itemset mining approach. Pair-wise dissimilarities are often derived from original data to reduce the complexity of high dimensional data. Traditional clustering algorithms taking pair-wise dissimilarities as input often generate disjoint clusters. It is well known that the classification model represented by disjoint clusters is inconsistent with many real classifications, such as gene function classifications. We develop a Poclustering algorithm, which generates overlapping clusters from pair-wise dissimilarities. We prove that by allowing overlapping clusters, Poclustering fully preserves the information of any dissimilarity matrix, while traditional partitioning algorithms may cause significant information loss.
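
The order-preserving similarity underlying OP-Cluster translates directly into a small check: two objects agree on a subset of attributes exactly when ranking those attributes by value yields the same ordering for both objects. The sketch below illustrates only the definition, not the mining algorithm; the gene and condition names are made up.

```python
# Check the order-preserving similarity of two objects on a chosen attribute subset:
# both objects must induce the same ranking of those attributes by value.
# Illustration of the OP-Cluster definition, not the mining algorithm.
def same_ordering(obj_a, obj_b, attributes):
    """obj_a, obj_b: dicts mapping attribute name -> numeric value."""
    order_a = sorted(attributes, key=lambda attr: obj_a[attr])
    order_b = sorted(attributes, key=lambda attr: obj_b[attr])
    return order_a == order_b

gene1 = {'cond1': 0.2, 'cond2': 1.5, 'cond3': 0.9}
gene2 = {'cond1': 3.0, 'cond2': 8.1, 'cond3': 5.5}
print(same_ordering(gene1, gene2, ['cond1', 'cond2', 'cond3']))  # True: both rank cond1 < cond3 < cond2
```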

Liu, Wei (2016)

Localizing Objects Fast and Accurately

Under the direction of Alex Berg

Electronic copy available

A fundamental problem in computer vision is knowing what is in an image and where it is. We develop models that localize objects of multiple categories, such as people and cars, fast and accurately. In particular, we focus on designing deep convolutional neural networks (CNNs) for object detection and semantic segmentation. A central theme of this dissertation is exploring design choices of network structure that combine the full power of CNNs with the nature of each task, so as to achieve high-quality results while keeping the model relatively simple and fast.

At the heart of object detection is the question of how to search efficiently through a continuous 2D space of bounding boxes of various scales and aspect ratios at every possible location in an image. A brute-force approach would search over all possibilities, but it is neither scalable nor practical. An alternative is to propose a set of hypothesized locations that might contain objects and then classify each hypothesis. A first contribution of this dissertation is to show that using a much more powerful classifier can boost detection performance significantly, thanks to the much smaller search space after the proposal step.

A second contribution of this dissertation is to show that the search can be approximated by discretizing the search space and then regressing the residual difference between a discrete box and a target box. This is a departure from the propose-then-classify family of methods. We present a single-stage framework, SSD, which can simultaneously detect and classify objects fast and accurately. SSD discretizes the space of small boxes more densely and the space of large boxes more sparsely; as a result, it covers the search space more efficiently and notably eases training. We have empirically shown that it is as accurate as, or even better than, two-stage methods and yet is much faster.
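
The discretization of box space can be pictured with a sketch that lays out default boxes over a feature-map grid at a given scale and a few aspect ratios; the detector then classifies each default box and regresses residual offsets to a target box. The scales and aspect ratios below are placeholder values for illustration, not SSD's published configuration.

```python
import itertools, math

# Sketch of SSD-style default (anchor) box generation on one feature map.
# Scales and aspect ratios are placeholder values, not SSD's exact settings;
# the detector classifies each default box and regresses offsets to a target box.
def default_boxes(feature_map_size, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """Return boxes as (cx, cy, w, h) in normalized [0, 1] image coordinates."""
    boxes = []
    fk = feature_map_size
    for i, j in itertools.product(range(fk), repeat=2):
        cx, cy = (j + 0.5) / fk, (i + 0.5) / fk      # box center on the grid
        for ar in aspect_ratios:
            w, h = scale * math.sqrt(ar), scale / math.sqrt(ar)
            boxes.append((cx, cy, w, h))
    return boxes

# Coarser feature maps get larger scales, so large boxes are sampled more sparsely.
coarse = default_boxes(feature_map_size=5,  scale=0.6)   # few, large boxes
fine   = default_boxes(feature_map_size=38, scale=0.1)   # many, small boxes
print(len(coarse), len(fine))
```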

Unlike object detection, semantic segmentation is usually treated as a per-pixel classification problem, especially in the era of deep networks. A major issue, however, is how to incorporate global semantic context information when making local decisions. Although there is concurrent work on using techniques from graphical models, such as conditional random fields (CRFs), to introduce context and structure information, we present a simple yet effective method, ParseNet, which uses the average feature of a layer to augment the features at each location. Experimental results show that it can be as effective as a method that uses a CRF as a post-processing step to include context information.
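
The ParseNet idea of augmenting each location's features with the layer's global average feature can be sketched in a few lines of array code; the feature shapes are chosen arbitrarily for illustration, and the normalization and learned scaling used in practice are omitted.

```python
import numpy as np

# Sketch of augmenting per-location features with the layer's global average
# feature (the ParseNet idea); L2-normalization and learned scaling are omitted,
# and the feature shapes below are arbitrary.
def augment_with_global_context(features):
    """features: array of shape (channels, height, width)."""
    c, h, w = features.shape
    global_avg = features.mean(axis=(1, 2))                 # one value per channel
    context = np.broadcast_to(global_avg[:, None, None], (c, h, w))
    return np.concatenate([features, context], axis=0)      # shape (2*channels, h, w)

local = np.random.rand(256, 32, 32)
print(augment_with_global_context(local).shape)             # (512, 32, 32)
```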

In order to make the above methods useful for many real-time systems, such as mobile devices or self-driving cars, we have collected large-scale video datasets for multiple categories, with the hope that temporal consistency information within video can further boost performance and speed up operation with low power consumption.

Liu, Xiaoxiao (2010)

“Shape-correlated Statistical Modeling and Analysis for Respiratory Motion Estimation”
Under the direction of Stephen M. Pizer
Electronic copy available

Respiratory motion challenges image-guided radiation therapy (IGRT) with location uncertainties of important anatomical structures in the thorax. Effective and accurate respiration estimation is crucial to account for the motion effects on the radiation dose to tumors and organs at risk. Moreover, serious image artifacts present in treatment-guidance images such as 4D cone-beam CT cause difficulties in identifying spatial variations. Commonly used non-linear dense image matching methods easily fail in regions where artifacts interfere. Learning-based linear motion modeling techniques have the advantage that training knowledge can be introduced to reveal the true motion. In this research, shape-correlated deformation statistics (SCDS) capture strong correlations between the shape of the lung and the dense deformation field under breathing. Dimension reduction and linear regression techniques are used to extract the correlation statistics. Based on the assumption that the deformation correlations are consistent between planning and treatment time, patient-specific SCDS trained from a 4D planning image sequence is used to predict the respiratory motion in the patient’s artifact-laden 4D treatment image sequence. Furthermore, the SCDS prediction results combined with intensity-matching forces are integrated into a prediction-driven atlas formation framework. The strategy of balancing between the prediction constraints and the intensity-matching forces makes the method less sensitive to variation in the correlation and utilizes intensity information besides the lung boundaries. This strategy thus provides improved motion estimation accuracy and robustness. The SCDS-based methods are shown to be effective in modeling and estimating respiratory motion in the lung, with evaluations and comparisons carried out on both simulated images and patient images.
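
The correlation statistics can be pictured as a dimension-reduction-plus-regression pipeline: reduce the training shapes and deformation fields to low-dimensional scores, fit a linear map from shape scores to deformation scores, and predict a new deformation from an observed shape. The sketch below is a generic illustration of that idea with assumed array sizes and random stand-in data; it is not the dissertation's SCDS implementation.

```python
import numpy as np

# Generic sketch of shape-correlated deformation statistics: PCA on shapes and
# deformations, then a least-squares map from shape scores to deformation scores.
# Array sizes and data are stand-ins; this is not the dissertation's code.
def fit_pca(X, n_components):
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]                 # principal directions (rows)
    scores = (X - mean) @ basis.T
    return mean, basis, scores

# Training data: one row per phase of the 4D planning sequence.
shapes = np.random.rand(10, 300)              # e.g., stacked lung-surface coordinates
deformations = np.random.rand(10, 9000)       # e.g., flattened dense deformation fields

s_mean, s_basis, s_scores = fit_pca(shapes, 3)
d_mean, d_basis, d_scores = fit_pca(deformations, 3)
W, *_ = np.linalg.lstsq(s_scores, d_scores, rcond=None)   # shape scores -> deformation scores

def predict_deformation(new_shape):
    s = (new_shape - s_mean) @ s_basis.T      # project the observed shape
    return d_mean + (s @ W) @ d_basis         # reconstruct the predicted deformation

print(predict_deformation(np.random.rand(300)).shape)     # (9000,)
```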

Liu, Yuanxin (2008)

“Computations of Delaunay and higher order triangulations, with applications to splines”
Under the direction of Jack Snoeyink
Electronic copy available

Digital data that consist of discrete points are frequently captured and processed by scientific and engineering applications. Due to the rapid advance of new data gathering technologies, data set sizes are increasing, and the data distributions are becoming more irregular. These trends call for new computational tools that are both efficient enough to handle large data sets and flexible enough to accommodate irregularity. A mathematical foundation that is well-suited for developing such tools is triangulation, which can be defined for discrete point sets with little assumption about their distribution. The potential benefits from using triangulation are not fully exploited. The challenges fundamentally stem from the complexity of the triangulation structure, which generally takes more space to represent than the input points. This complexity makes developing a triangulation program a delicate task, particularly when it is important that the program runs fast and robustly over large data. This thesis addresses these challenges in two parts. The first part concentrates on techniques designed for efficiently and robustly computing Delaunay triangulations of three kinds of practical data: the terrain data from LIDAR sensors commonly found in GIS, the atom coordinate data used for biological applications, and the time-varying volume data generated from scientific simulations. The second part addresses the problem of defining spline spaces over triangulations in two dimensions. It does so by generalizing Delaunay configurations, defined as follows. For a given point set P in two dimensions, a Delaunay configuration is a pair of subsets (T, I) from P, where T, called the boundary set, is a triplet and I, called the interior set, is the set of points that fall in the circumcircle through T. The size of the interior set is the degree of the configuration. As recently discovered by Neamtu (2004), for a chosen point set, the set of all degree k Delaunay configurations can be associated with a set of degree k + 1 splines that form the basis of a spline space. In particular, for the trivial case of k = 0, the spline space coincides with the PL interpolation functions over the Delaunay triangulation. Neamtu’s definition of the spline space relies only on a few structural properties of the Delaunay configurations. This raises the question whether there exist other sets of configurations with identical structural properties. If there are, then these sets of configurations (let us call them generalized configurations from here on) can be substituted for Delaunay configurations in Neamtu’s definition of spline space, thereby yielding a family of splines over the same point set.
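
The definition of a degree-k Delaunay configuration translates directly into a small brute-force enumeration: for each triplet of points, collect the points strictly inside its circumcircle and keep the triplet if exactly k points fall inside. The determinant incircle test below is standard; the sketch ignores degeneracies and is far removed from the efficient computations developed in the dissertation.

```python
from itertools import combinations

# Brute-force enumeration of degree-k Delaunay configurations (T, I):
# T is a point triplet, I the set of points strictly inside T's circumcircle.
# Standard determinant incircle test; degeneracies (cocircular points) ignored.
def orient(a, b, c):
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def in_circumcircle(a, b, c, d):
    m = [[p[0]-d[0], p[1]-d[1], (p[0]-d[0])**2 + (p[1]-d[1])**2] for p in (a, b, c)]
    det = (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
           - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
           + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    return det * orient(a, b, c) > 0            # positive iff d is strictly inside

def delaunay_configurations(points, k):
    configs = []
    for t in combinations(range(len(points)), 3):
        a, b, c = (points[i] for i in t)
        if orient(a, b, c) == 0:
            continue                             # skip collinear triplets
        interior = [i for i in range(len(points)) if i not in t
                    and in_circumcircle(a, b, c, points[i])]
        if len(interior) == k:
            configs.append((t, tuple(interior)))
    return configs

pts = [(0, 0), (4, 0), (0, 4), (1, 1), (5, 5)]
print(delaunay_configurations(pts, 0))           # degree-0 configurations (Delaunay triangles)
```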

Liu, Yi (2013)

“Fast and Accurate Haplotype Inference with Hidden Markov Model”
Under the direction of Wei Wang and Yun Li
Electronic copy available

The genome of humans and other diploid organisms consists of paired chromosomes. The haplotype information (the DNA constellation on a single chromosome), which is crucial for disease association analysis and population genetic inference among many other tasks, is however hidden in the data generated for diploid organisms (including humans) by modern high-throughput technologies, which cannot distinguish information from the two homologous chromosomes. Here, I consider the haplotype inference problem in two common scenarios of genetic studies:

  1. Model organisms (such as laboratory mice): Individuals are bred through prescribed pedigree design.
  2. Out-bred organisms (such as human): Individuals (mostly unrelated) are drawn from one or more populations or continental groups.

In the two scenarios, one individual may share short blocks of chromosomes with other individual(s) or with founder(s) if available. I have developed and implemented methods, by identifying the shared blocks statistically, to accurately and more rapidly reconstruct the haplotypes for individuals under study and to solve important related problems including genotype imputation and ancestry inference. My methods, based on hidden Markov model, can scale up to tens of thousands of individuals. Analysis based on my method leads to a new genetic map in mouse population which reveals important biological properties of the recombination process. I have also explored the study design and empirical quality control for imputation tasks with large scale datasets from admixed population.
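
The hidden Markov machinery underlying such methods can be illustrated with a generic Viterbi decoding sketch, where the hidden states would correspond to shared haplotype templates and the emissions to observed alleles; the toy model below is made up for illustration and is unrelated to the dissertation's parameterization or its scalability techniques.

```python
# Generic Viterbi decoding for a small HMM (illustrative only; in haplotype
# inference the hidden states would be shared template haplotypes and the
# emissions the observed alleles; this toy parameterization is made up).
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state path for the observation sequence."""
    V = [{s: (start_p[s] * emit_p[s][observations[0]], None) for s in states}]
    for obs in observations[1:]:
        V.append({s: max((V[-1][p][0] * trans_p[p][s] * emit_p[s][obs], p)
                         for p in states) for s in states})
    # Trace back the best path from the final column.
    state = max(V[-1], key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(V) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return path[::-1]

states = ('template_A', 'template_B')
start = {'template_A': 0.5, 'template_B': 0.5}
trans = {'template_A': {'template_A': 0.9, 'template_B': 0.1},
         'template_B': {'template_A': 0.1, 'template_B': 0.9}}
emit = {'template_A': {'0': 0.8, '1': 0.2}, 'template_B': {'0': 0.2, '1': 0.8}}
print(viterbi(['0', '1', '1'], states, start, trans, emit))
```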

Livingston, Mark A. (1998)

“Vision-Based Tracking with Dynamic Structured Light for Video-See-Through Augmented Reality”
Under the direction of Henry Fuchs
Electronic copy available

Tracking has proven a difficult problem to solve accurately without limiting the user or the application. Vision-based systems have shown promise, but are limited by occlusion of the landmarks. We introduce a new approach to vision-based tracking using structured light to generate landmarks. The novel aspect of this approach is that the system need not know the 3D locations of landmarks. This implies that motion within the field of view of the camera does not disturb tracking as long as landmarks are reflected off any surface into the camera. This dissertation specifies an algorithm which tracks a camera using structured light. A simulator demonstrates excellent performance on user motion data from an application currently limited by inaccurate tracking. Further analysis reveals directions for implementation of the system, theoretical limitations, and potential extensions to the algorithm. The term augmented reality (AR) has been given to applications that merge computer graphics with images of the user’s surroundings. AR could give a doctor “X-ray vision” with which to examine the patient before or during surgery. At this point in time, AR systems have not been used in place of the traditional methods of performing medical or other tasks. One important problem that limits acceptance of AR systems is lack of precise registration (alignment) between real and synthetic objects. There are many components of an AR system that contribute to registration. One of the most important is the tracking system. The tracking data must be accurate, so that the real and synthetic objects are aligned properly. Our work in augmented reality focuses on medical applications. These require precise alignment of medical imagery with the physician’s view of the patient. Although many technologies have been applied, including mechanical, magnetic, and optical, we have yet to find a system sufficiently accurate and robust to provide correct and reliable registration. We believe the system specified here contributes to tracking in AR applications in two key ways: it takes advantage of equipment already used for AR, and it has the potential to provide sufficient registration for demanding AR applications without imposing the limitations of current vision-based tracking systems.

Lloyd, David Brandon (2007)

“Logarithmic Perspective Shadow Maps”
Under the direction of Dinesh Manocha
Electronic copy available

The shadow map algorithm is a popular approach for generating shadows for real-time applications. Shadow maps are flexible and easy to implement, but they are prone to aliasing artifacts. To reduce aliasing artifacts we introduce logarithmic perspective shadow maps (LogPSMs). LogPSMs are based on a novel shadow map parameterization that consists of a perspective projection and a logarithmic transformation. They can be used for both point and directional light sources to produce hard shadows. To establish the benefits of LogPSMs, we perform an in-depth analysis of shadow map aliasing error and the error characteristics of existing algorithms. Using this analysis we compute a parameterization that produces near-optimal perspective aliasing error. This parameterization has high arithmetical complexity which makes it less practical than existing methods. We show, however, that over all light positions, the simpler LogPSM parameterization produces the same maximum error as the near-optimal parameterization. We also show that compared with competing algorithms, LogPSMs produce significantly less aliasing error. Equivalently, for the same error as competing algorithms, LogPSMs require significantly less storage and bandwidth. We demonstrate the differences in shadow quality achieved with LogPSMs on several models of varying complexity. LogPSMs are rendered using logarithmic rasterization. We show how current GPU architectures can be modified incrementally to perform logarithmic rasterization at current GPU fill rates. Specifically, we modify the rasterizer to support rendering to a nonuniform grid with the same watertight rasterization properties as current rasterizers. We also describe a novel depth compression scheme to handle the nonlinear primitives produced by logarithmic rasterization. Our proposed architecture enhancements align with current trends of decreasing cost for on-chip computation relative to off-chip bandwidth and storage. For only a modest increase in computation, logarithmic rasterization can greatly reduce shadow map bandwidth and storage costs.

Lok, Benjamin Chak Lum (2002)

“Interacting With Dynamic Real Objects in Virtual Environments”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR02-021
Electronic copy available

Suppose one has a virtual model of a car engine and wants to use an immersive virtual environment (VE) to determine whether both a large man and a petite woman can readily replace the oil filter. This real world problem is difficult to solve efficiently with current modeling, tracking, and rendering techniques. Hybrid environments, systems that incorporate real and virtual objects within the VE, can greatly assist in studying this question. We present algorithms to generate virtual representations, avatars, of dynamic real objects at interactive rates. Further, we present algorithms to allow virtual objects to interact with and respond to the real-object avatars. This allows dynamic real objects, such as the user, tools, and parts, to be visually and physically incorporated into the VE. The system uses image-based object reconstruction and a volume-querying mechanism to detect collisions and to determine plausible collision responses between virtual objects and the real-time avatars. This allows our system to provide the user natural interactions with the VE and visually faithful avatars. But is incorporating real objects even useful for VE tasks? We conducted a user study that showed that for spatial cognitive manual tasks, hybrid environments provide a significant improvement in task performance measures. Also, participant responses show promise of improved sense of presence over customary VE rendering and interaction approaches. Finally, we have begun a collaboration with NASA Langley Research Center to apply the hybrid environment system to a satellite payload assembly verification task. In an informal case study, NASA LaRC payload designers and engineers conducted common assembly tasks on payload models. The results suggest that hybrid environments could provide significant advantages for assembly verification and layout evaluation tasks.

Lorenzen, Peter Jonathan (2006)

“Multi-Modal Image Registration and Atlas Formation”
Under the direction of Sarang C. Joshi
Electronic copy available

Medical images of human anatomy can be produced from a wide range of sensor technologies and imaging techniques resulting in a diverse array of imaging modalities, such as magnetic resonance and computed tomography. The physical properties of the image acquisition process for different modalities elicit different tissue structures. Images from multiple modalities provide complementary information about underlying anatomical structure. Understanding anatomical variability is often important in studying disparate population groups and typically requires robust dense image registration. Traditional image registration methods involve finding a mapping between two scalar images. Such methods do not exploit the complementary information provided by sets of multi-modal images. This dissertation presents a Bayesian framework for generating inter-subject large deformation transformations between two multi-modal image sets of the brain. The estimated transformations are generated using maximal information about the underlying neuroanatomy present in each of the different modalities. This modality independent registration framework is achieved by jointly estimating the posterior probabilities associated with the multi-modal image sets and the high-dimensional registration transformations relating these posteriors. To maximally use the information present in all the modalities for registration, Kullback-Leibler divergence between the estimated posteriors is minimized. This framework is extended to large deformation multi-class posterior atlas estimation. The method generates a representative anatomical template from an arbitrary number of topologically similar multi-modal image sets. The generated atlas is the class posterior that requires the least amount of deformation energy to be transformed into every class posterior (each characterizing a multi-modal image set). This method is computationally practical in that computation time grows linearly with the number of image sets. The multi-class posterior atlas formation method is applied to a database of multi-modal images from ninety-five adult brains as part of a healthy aging study to produce 4D spatiotemporal atlases for the female and male subpopulations. The stability of the atlases is evaluated based on the entropy of their class posteriors. Global volumetric trends and local volumetric change are evaluated. This multi-modal framework has potential applications in many natural multi-modal imaging environments.
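
Schematically, and with notation chosen here for illustration rather than taken from the dissertation, the registration objective couples a Kullback-Leibler divergence term between the estimated class posteriors with a regularization term on the deformation h:

    \hat{h} = \operatorname*{arg\,min}_{h}
        \int_{\Omega} \sum_{c} p_1(c \mid x)\,
        \log \frac{p_1(c \mid x)}{p_2(c \mid h(x))}\, dx
        \;+\; \lambda\, \mathrm{Reg}(h)

where p_1 and p_2 denote the class posteriors estimated from the two multi-modal image sets, Reg penalizes irregular deformations, and \lambda is an assumed trade-off weight.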

Low, Kok-Lim (2006)

“View Planning for Range Acquisition of Indoor Environments”
Under the direction of Anselmo A. Lastra
Electronic copy available

This dissertation presents a new and efficient next-best-view algorithm for 3D reconstruction of indoor environments using active range sensing. A major challenge in range acquisition for 3D reconstruction is an efficient automated view planning algorithm to determine a sequence of scanning locations, or views, such that a set of acquisition constraints and requirements is satisfied and the object or environment of interest can be satisfactorily reconstructed. Due to the intractability of the view planning problem and the lack of global geometric information, a greedy approach is adopted to approximate the solution. A practical view metric is formulated to include many real-world acquisition constraints and reconstruction quality requirements. This view metric is flexible enough to allow trade-offs between different requirements of the reconstruction quality. A major contribution of this work is the application of a hierarchical approach to greatly accelerate the evaluation of the view metric for a large set of views. This is achieved by exploiting the various spatial coherences in the acquisition constraints and reconstruction quality requirements when evaluating the view metric. The hierarchical view evaluation algorithm is implemented in a view planning system targeted for the acquisition of indoor environments using a monostatic range scanner with 3D pose. The results show great speedups over the straightforward method used in many previous algorithms. The view planning system has also been shown to be robust for real-world application. The dissertation also describes how the view metric can be generalized to incorporate general acquisition constraints and requirements, and how the hierarchical view evaluation algorithm can be generalized to scanners with general pose and to scanners with bistatic sensors. A simple extension is also proposed to enable the hierarchical view evaluation algorithm to take into account each view’s sensitivity to potential pose errors in the physical positioning of the scanner. A computed new view must produce a range image that can be accurately registered to the previous scans. In this work, a metric is developed to estimate the registration accuracy of the views. This metric considers the amount of overlap, the range measurement errors, and the shape complexity of the surfaces.
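
A minimal sketch of the greedy next-best-view loop described above, with the view metric, scanner interface, and stopping constants as hypothetical placeholders rather than the dissertation's actual hierarchical evaluation:

    def plan_views(candidate_views, evaluate_view, acquire_scan,
                   min_gain=0.05, max_scans=20):
        # Greedy next-best-view loop: score every candidate view with a view
        # metric, scan from the best one, and stop when the expected gain
        # falls below a threshold. All names and constants are illustrative.
        scans = []
        for _ in range(max_scans):
            scored = [(evaluate_view(v, scans), v) for v in candidate_views]
            if not scored:
                break
            best_gain, best_view = max(scored, key=lambda s: s[0])
            if best_gain < min_gain:
                break          # no remaining view adds enough new coverage
            scans.append(acquire_scan(best_view))
        return scans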

Luebke, David P. (1998)

“View-Dependent Simplification of Arbitrary Polygonal Environments”
Under the direction of Frederick P. Brooks, Jr.
Electronic copy available

This dissertation describes hierarchical dynamic simplification (HDS), a new approach to the problem of simplifying arbitrary polygonal environments. HDS is dynamic, retessellating the scene continually as the user’s viewing position shifts, and global, processing the entire database without first decomposing the environment into individual objects. The resulting system enables real-time display of very complex polygonal CAD models consisting of thousands of parts and millions of polygons. HDS supports various preprocessing algorithms and various run-time criteria, providing a general framework for dynamic view-dependent simplification. Briefly, HDS works by clustering vertices together in a hierarchical fashion. The simplification process continually queries this hierarchy to generate a scene containing only those polygons that are important from the current viewpoint. When the volume of space associated with a vertex cluster occupies less than a user-specified amount of the screen, all vertices within that cluster are collapsed together and degenerate polygons filtered out. HDS maintains an active list of visible polygons for rendering. Since frame-to-frame movements typically involve small changes in viewpoint, and therefore modify this list by only a few polygons, the method takes advantage of temporal coherence for greater speed.
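
A minimal sketch of the kind of run-time test described above, collapsing a vertex cluster when the projected extent of its bounding volume drops below a pixel threshold (the geometry, parameter names, and threshold are illustrative assumptions, not HDS's exact criteria):

    import math

    def should_collapse(cluster_center, cluster_radius, eye,
                        fov_y, screen_height, pixel_threshold=4.0):
        # Collapse a vertex cluster when its bounding sphere occupies less
        # than a user-specified number of pixels on screen. fov_y is the
        # vertical field of view in radians; all values are illustrative.
        distance = math.dist(cluster_center, eye)
        if distance <= cluster_radius:
            return False               # viewer is inside the bound; keep detail
        angular_size = 2.0 * math.atan2(cluster_radius, distance)
        projected_pixels = angular_size / fov_y * screen_height
        return projected_pixels < pixel_threshold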

Lyu, Ilwoo (2016)

“Cortical Surface Registration and Shape Analysis”
Under the direction of Martin Styner
Electronic copy available

A population analysis of human cortical morphometry is critical for gaining insights into brain development and degeneration, and it allows for investigating sulcal and gyral folding patterns. In general, such a population analysis requires both a well-established cortical correspondence and a well-defined quantification of the cortical morphometry. The highly folded and convoluted structures render a reliable and consistent population analysis challenging. Three key challenges have been identified for such an analysis: 1) consistent sulcal landmark extraction from the cortical surface to guide better cortical correspondence, 2) establishment of correspondence for a reliable and stable population analysis, and 3) quantification of the cortical folding in a more reliable and biologically meaningful fashion.

The main focus of this dissertation is to develop a fully automatic pipeline that supports a population analysis of local cortical folding changes. My proposed pipeline consists of three novel components I developed to overcome the challenges in the population analysis: 1) automatic sulcal curve extraction for stable/reliable anatomical landmark selection, 2) group-wise registration for establishing cortical shape correspondence across a population with no template selection bias, and 3) quantification of local cortical folding using a novel cortical shape adaptive kernel.
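
As a minimal sketch of the general idea behind a local gyrification index (the conventional area-ratio definition is used here for illustration; the dissertation's cortical shape adaptive kernel is not reproduced):

    def local_gyrification_index(cortical_area, hull_area):
        # Conventional area-ratio definition within a local kernel: buried
        # plus exposed cortical surface area divided by the area of a smooth
        # outer hull over the same region. Names are illustrative.
        if hull_area <= 0:
            raise ValueError("hull area must be positive")
        return cortical_area / hull_area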

To evaluate my methodological contributions, I applied all of them in an application to early postnatal brain development. I studied human cortical morphological development using the proposed local gyrification index from the neonatal stage to 1 and 2 years of age, together with quantitative developmental assessments. This study revealed a novel pattern of associations between cortical gyrification and cognitive development.