Held 20 April 2018 in 011 Sitterson Hall

Schedule

Click on a presentation title to read the abstract and presenter bio.

3:30 – Session 1

“Automatic Extra-Axial Cerebrospinal Fluid” by Han Bit Yoon, supervised by Martin Styner

“A Framework for the Statistical Shape Analysis using SPHARM-PDM combined with ITK Conformal Flattening Filter” by Zhengyang Fang, supervised by Martin Styner

“Adapting the CIVET Pipeline to Macaque Brains” by Charles Tang, supervised by Martin Styner

4:20 – Session 2

“Scaling Up: The Validation of Empirically Derived Scheduling Rules on NVIDIA GPUs” by Joshua Bakita, supervised by Don Smith

“High-quality 3D Reconstruction of Textureless Objects in Confined Space Using Stereo Vision and Structured Light” by Siqing Xu, supervised by Jan-Michael Frahm

“Using Algorithmic Approaches to View Photo-Generated Models” by Isabel Uzsoy, supervised by Jan-Michael Frahm

“Behavioral Biometric Security: Brainwave Authentication Methods” by Rachel Schomp, supervised by Mike Reiter

5:25 – Session 3

“Data Science Rosetta Stone: A Tutorial of and Translation between Data Science Programming Languages” by Elaine Kearney, supervised by Stanley Ahalt

“Generalized Redirected Walking with Bimodal Distractors” by Nick Rewkowski, supervised by Mary Whitton

“Robust Machine Comprehension Models via Adversarial Training” by Yicheng Wang, supervised by Mohit Bansal

“Generative Adversarial Network for Image Captioning” by Yichen Jiang, supervised by Mohit Bansal

6:30 – Reception

Presentation Abstracts and Presenter Bios

Automatic Extra-Axial Cerebrospinal Fluid

Han Bit Yoon, supervised by Martin Styner

Automatic Extra-Axial Cerebrospinal Fluid (Auto EACSF) is an open-source, interactive tool for automatic computation of brain extra-axial cerebrospinal fluid (EA-CSF) in magnetic resonance imaging (MRI) scans of infants. Elevated extra-axial fluid volume is a possible biomarker for Autism Spectrum Disorder (ASD), so automatically calculating the volume of EA-CSF could support early diagnosis of autism. Auto EACSF is a user-friendly Qt application that computes this volume through a GUI, and it also provides an advanced mode that allows individual pipeline steps to be executed on their own via Python and XML scripts.

Han Bit Yoon is a senior pursuing a degree in computer science. She is a member of Carolina Firsts, an organization that supports and celebrates first-generation college students.

A Framework for the Statistical Shape Analysis using SPHARM-PDM combined with ITK Conformal Flattening Filter

Zhengyang Fang, supervised by Martin Styner

Shape analysis is an important and powerful method in the neuroimaging research community due to its potential to precisely locate morphological changes between healthy and pathological structures. A popular approach encodes surface locations as spherical harmonics in a representation called SPHARM-PDM: a set of segmentations of a single brain structure (e.g., the hippocampus) is converted into a spherical harmonic description (SPHARM), which is then sampled into a triangulated surface (SPHARM-PDM). For objects with complex shape, the current SPHARM-PDM pipeline can suffer from a high degree of mapping distortion. We propose an alternative initialization based on the ITK Conformal Flattening filter. Based on quantitative shape measures computed on various complex structures, we conclude that in most cases the new pipeline produces dramatically better results than the old one.
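For context, the SPHARM step expands each surface coordinate function in a truncated spherical harmonic basis; in standard notation (assumed here, not drawn from the abstract itself):

    \mathbf{v}(\theta, \phi) = \sum_{l=0}^{L} \sum_{m=-l}^{l} \mathbf{c}_l^m \, Y_l^m(\theta, \phi)

where \mathbf{v} is a surface point parameterized by the spherical coordinates (\theta, \phi), Y_l^m are the spherical harmonic basis functions, and \mathbf{c}_l^m are the fitted coefficient vectors; sampling this expansion on a uniform spherical grid yields the point distribution model (PDM).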

Zhengyang “Tony” Fang is a senior computer science major at UNC. He works with Dr. Martin Styner and UNC PhD student Mahmoud Mostapha at the Neuro Image Research and Analysis Laboratories (NIRAL). He enjoys conducting research, making new discoveries, and turning ideas into reality. He believes in what Dr. Gary Bishop once said: “Geeks making the world a bit better.”

Adapting the CIVET Pipeline to Macaque Brains

Charles Tang, supervised by Martin Styner

To produce a better cortical surface of the macaque brain with the CIVET pipeline, which was designed for imaging the human brain, we had to modify several of its stages. Starting from the raw T1 image set of the macaque brain, we created our own partial volume estimation (PVE) segmentations and other images to replace those generated by CIVET at certain points in the pipeline. After transforming the image to more closely resemble a human brain, we tested it through multiple runs of the pipeline. These runs produced a satisfactory white matter surface, and we are currently working on creating a more expansive grey matter surface.

Charles Tang is a junior Computer Science major at UNC.

Scaling Up: The Validation of Empirically Derived Scheduling Rules on NVIDIA GPUs

Joshua Bakita, supervised by Don Smith

Embedded systems augmented with graphics processing units (GPUs) are seeing increased use in safety-critical real-time systems such as autonomous vehicles. The current black-box and proprietary nature of these GPUs has made it difficult to determine their behavior in worst-case scenarios, threatening the safety of autonomous systems. In this work, we introduce a new validation framework to analyze GPU execution traces and determine if the internal scheduling policies of the black-box hardware and software match the assumptions from prior work. This work specifically focuses on NVIDIA CUDA devices because of their prevalent use in industry.
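As one concrete illustration of what such a validation check can look like, the sketch below tests a hypothetical FIFO-within-a-stream rule against a list of trace events; the event fields, the rule, and all names are illustrative assumptions, not the framework actually built in this work.

    from collections import defaultdict

    def check_fifo_within_stream(trace):
        """Flag kernels that began executing out of launch order within
        their CUDA stream (a hypothetical FIFO-per-stream rule)."""
        by_stream = defaultdict(list)
        for kernel_id, stream_id, launch_ts, start_ts in trace:
            by_stream[stream_id].append((launch_ts, start_ts, kernel_id))
        violations = []
        for stream_id, events in by_stream.items():
            events.sort()  # order events by launch time
            prev_start = float("-inf")
            for _launch_ts, start_ts, kernel_id in events:
                if start_ts < prev_start:  # started before an earlier launch ran
                    violations.append((stream_id, kernel_id))
                prev_start = start_ts
        return violations

    # Example: kernel 1 was launched after kernel 0 but started first.
    trace = [(0, 7, 0.0, 1.0), (1, 7, 0.5, 0.9)]
    print(check_fifo_within_stream(trace))  # -> [(7, 1)]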

Joshua Bakita has done research with the Real-Time Systems Group since last September and has contributed to two other GPU research papers. Outside of school, he has completed internships at Capital One, Microsoft, and the UK Parliament. He also currently runs the Computer Science Club at UNC. This fall, he will begin his master's in computer science at UNC.

High-quality 3D Reconstruction of Textureless Objects in Confined Space Using Stereo Vision and Structured Light

Siqing Xu, supervised by Jan-Michael Frahm

To improve the performance of augmented reality applications in laparoscopic surgery, we investigate using a stereo vision setup to produce high-quality 3D reconstructions of textureless objects in a confined space. First, a basic stereo reconstruction system is implemented with local block matching and analyzed by observing how the reconstruction changes as various parameters are tuned. A sequence of additional refinement algorithms is then applied to the system, yielding observable improvements in reconstruction quality. Structured light is added to aid reconstruction of textureless objects: after analyzing multiple setups, the most suitable structured light component is added to the original system. Experiments with the new system analyze the relationship between the parameters of the projected patterns and reconstruction performance. The current system still has difficulty reconstructing objects such as thin, curved ones, and possible solutions are proposed.
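For readers unfamiliar with block matching, a minimal version of the first stage can be put together with OpenCV's StereoBM and the standard triangulation relation Z = f·B/d; the file names, parameters, and calibration values below are illustrative assumptions, not the ones used in this project.

    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    # With focal length f (pixels) and baseline B, depth follows the
    # standard triangulation relation Z = f * B / d.
    f, B = 700.0, 0.05  # assumed calibration values
    depth = np.zeros_like(disparity)
    valid = disparity > 0  # non-positive disparities mark unmatched pixels
    depth[valid] = (f * B) / disparity[valid]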

Siqing Xu is a junior Computer Science and Mathematics major. He is an undergraduate research assistant in the Computer Science Department and is passionate about turning computer technologies into real-world applications.

Using Algorithmic Approaches to View Photo-Generated Models

Isabel Uzsoy, supervised by Jan-Michael Frahm

Improved imaging and graphics technologies allow us to render digital recreations of scenes from around the world with increasing intricacy. I worked on developing software that automatically creates a virtual “flythrough” video, showing a viewer the most important aspects of a scene. The models I use are constructed from sets of photographs. The method represents the cameras that generated the original scene images as a graph and determines a plausible motion path that traverses all the important viewpoints of the model. I used Dijkstra's algorithm to find an initial, basic path through the input images, then made this path more efficient by requiring a certain degree of visual dissimilarity between the points added to the graph before the path is calculated. The output visualization renders the model from the viewpoints defined by the nodes of the graph and uses Hermite splines to interpolate smoothly between them. To scale the approach to large scenes with many registered cameras, I investigated clustering algorithms that group similar camera views and compute a single, representative view for the visualization.
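Both algorithmic building blocks named here are standard; a minimal sketch of each follows, with the graph layout and tangent choices as assumptions rather than the project's actual data structures.

    import heapq

    def dijkstra(adj, src):
        """Shortest-path distances from src over adj: {node: [(nbr, weight)]}."""
        dist, heap = {src: 0.0}, [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in adj.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    def hermite(p0, p1, m0, m1, t):
        """Cubic Hermite interpolation between points p0, p1 with
        tangents m0, m1, evaluated at t in [0, 1]."""
        t2, t3 = t * t, t * t * t
        h00, h10 = 2*t3 - 3*t2 + 1, t3 - 2*t2 + t
        h01, h11 = -2*t3 + 3*t2, t3 - t2
        return h00*p0 + h10*m0 + h01*p1 + h11*m1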

Isabel Uzsoy is a double major in computer science and English literature from Cary, NC.

Behavioral Biometric Security: Brainwave Authentication Methods

Rachel Schomp, supervised by Mike Reiter

With approximately 100 billion neurons, each brain is identifiably unique in the way it reacts to and processes incoming information. It is this characteristic that can turn brainwaves into an authentication metric. Neurological information, however, is a growing and critical class of data. The commercialization of neural devices is a contributing factor to this expansion, with neural implants for clinical patients, at-home neurostimulators, and brain-computer interface smartphone applications. A multitude of new access points carrying neural information has now been established, and this continued multiplication in data extraction will lead to great strides in behavioral biometrics, but also to inevitable security and privacy concerns. This project investigates the possibility of creating an authentication system based on measurements of the human brain. The discussion covers the feasibility of brainwave authentication based on brain anatomy and behavioral characteristics, conventional versus dynamic authentication methods, the possibility of continuous authentication, and the ethical and security concerns of biometrics.

Rachel Schomp is a senior Business and Computer Science double major. After graduation she will be moving to New York to work at JPMorgan Chase & Co. as a Technology Analyst in Software Engineering.

Data Science Rosetta Stone: A Tutorial of and Translation between Data Science Programming Languages

Elaine Kearney, supervised by Stanley Ahalt

Many different programming languages are used in data science, in both academia and industry. However, it is difficult for researchers to switch between languages or learn a new one, which is often necessary when working across disciplines and with colleagues from diverse backgrounds. This paper discusses a resource created to demonstrate the commonalities between four programming languages commonly used in data science: MATLAB, Python, R, and SAS. The work grew out of research completed at the Australia New Zealand Banking Group headquarters in Melbourne, Australia, and at the University of North Carolina at Chapel Hill. The result is an online set of tutorials, the Data Science Rosetta Stone, that demonstrates common data science tasks in the same progression for each programming language. We demonstrate that all of these languages achieve similar results, though they differ in syntax, simplicity of code, and efficiency of execution.
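To give a flavor of the format, a tutorial entry might cover a task like the following, shown here only in Python (the file and column names are invented for illustration; the actual tutorials present each task in MATLAB, Python, R, and SAS).

    import pandas as pd

    # A common data science task: load a table and summarize by group.
    df = pd.read_csv("measurements.csv")
    summary = df.groupby("group")["value"].agg(["mean", "std", "count"])
    print(summary)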

Elaine Kearney is a senior Computer Science and Biostatistics major, interested in data science and its applications. She will be continuing her education at UNC Chapel Hill in the fall by pursuing a master’s degree in Biostatistics.

Generalized Redirected Walking with Bimodal Distractors

Nick Rewkowski, supervised by Mary Whitton

We present the development and evaluation of a redirected walking system that employs a new distractor-trigger mechanism and can be used with a wide variety of virtual environments and in tracked spaces of the sizes available in current commercial VR systems. Reorientation, which prevents users from leaving the tracked space, is accomplished with visual and/or audio distractors and a new method of triggering their appearance. We evaluated user performance with visual-only, audio-only, and combined audio-visual distractors. We found the bimodal distractors in our generalized redirected walking system to be effective, making full environment traversal possible while the rotational distortion of the environment remained imperceptible to most users.
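For context, the core mechanism of redirected walking is a rotational gain: while the user's attention is held (e.g., by a distractor), the virtual scene is rotated slightly more or less than the user's physical head, below the perceptual threshold. The sketch below illustrates only that idea; the gain value and trigger policy are assumptions, not this system's actual parameters.

    ROTATION_GAIN = 1.1  # assumed imperceptible amplification factor

    def redirect(virtual_yaw, delta_physical_yaw, distractor_active):
        """Map a physical head-yaw change (radians) to a virtual one,
        amplifying it only while a distractor holds the user's attention."""
        gain = ROTATION_GAIN if distractor_active else 1.0
        return virtual_yaw + gain * delta_physical_yaw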

Nick Rewkowski works in the GAMMA group under Dr. Ming Lin on multimodal interaction in VR (especially 3D spatialized audio and haptics), on telepresence under Dr. Henry Fuchs (3D skeletal and environmental reconstruction), and in the EVE group under Mary Whitton on virtual locomotion interfaces.

Robust Machine Comprehension Models via Adversarial Training

Yicheng Wang, supervised by Mohit Bansal

Prior work has shown that many published models for the Stanford Question Answering Dataset (SQuAD; Rajpurkar et al., 2016) lack robustness, suffering an over 50% decrease in F1 score under adversarial evaluation based on the AddSent algorithm (Jia and Liang, 2017). It has also been shown that retraining models on data generated by AddSent has limited effect on their robustness. We propose a novel adversary-generation algorithm, AddSentTrain, which significantly increases the variance within the training data by providing effective examples that punish the model for making certain superficial assumptions. We demonstrate that by retraining with examples generated by AddSentTrain, we can make state-of-the-art models significantly more robust, achieving a 35% increase in F1 score under many different types of adversarial evaluation while maintaining performance on the regular SQuAD task.
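For reference, the F1 score cited here is the token-overlap F1 used by the official SQuAD evaluation; a minimal version (omitting the official script's text normalization) looks like this:

    from collections import Counter

    def squad_f1(prediction, gold):
        """Token-level F1 between a predicted and a gold answer string."""
        pred_tokens = prediction.split()
        gold_tokens = gold.split()
        overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    print(squad_f1("in the hall", "the hall"))  # -> 0.8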

Yicheng Wang is a second-year student at UNC majoring in computer science and mathematics. He is interested in building and evaluating systems in natural language processing, especially with respect to reasoning tasks such as question answering.

Generative Adversarial Network for Image Captioning

Yichen Jiang, supervised by Mohit Bansal

Most image captioning models are trained with maximum likelihood or with reinforcement learning using metric scores as rewards. Traditional teacher-forcing models suffer from exposure bias and a lack of supervised data, while reinforced models rely on human-crafted metric scores as the reward function. We propose a framework based on a Conditional Generative Adversarial Network (CGAN) that jointly trains a generator, which produces captions conditioned on the image, and a discriminator, which estimates the probability that a caption was machine-generated rather than human-written. By optimizing the generator via policy gradient, we overcome the bottlenecks of teacher-forcing models; by approximating the reward function with the discriminator, we avoid using human-crafted metric scores during training. We present a series of experiments showing that our CGAN-based model consistently and significantly outperforms a strong baseline trained with maximum likelihood, a result not achieved by previous work that adopted a similar approach.
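A minimal sketch of the policy-gradient (REINFORCE) generator update described here, with the discriminator's score standing in as the reward; the function and variable names are illustrative assumptions, not the authors' implementation.

    import torch

    def generator_pg_loss(log_probs, reward):
        """log_probs: per-token log-probabilities of a sampled caption
        (1-D tensor); reward: scalar discriminator score in [0, 1]."""
        # REINFORCE: maximize E[R * log pi(caption | image)],
        # i.e. minimize its negative. (A baseline term is omitted here.)
        return -reward * log_probs.sum()

    # Dummy values in place of real generator/discriminator outputs:
    probs = torch.tensor([0.4, 0.7, 0.9], requires_grad=True)
    loss = generator_pg_loss(torch.log(probs), reward=0.8)
    loss.backward()  # gradients would flow back into the generator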

Yichen Jiang is a senior in the Computer Science Department. His research focuses on natural language processing and structured deep learning.