General Computing Environment
The department’s computing environment includes over 1,000 computers, ranging from older systems used to generate network traffic for simulated Internet experiments to state-of-the-art workstations and clusters for graphics and compute-intensive research. Departmental servers provide computing services, disk space, email, source code repositories, web services, database services, backups, and other services. All systems are integrated by high-speed networks, described below, and are supported by highly skilled technical staff who provide a consistent computing environment throughout the department. Most students are assigned to a two- or three-person office, though we also have larger offices that hold more students. Each student is assigned a computer, with assignments based on the student’s research or teaching duties and seniority. Currently the minimum configuration for student computers is a desktop system with a dual-core 64-bit 2 GHz processor, 4 GB of RAM, and a 240 GB hard drive, though many students have computers that exceed these specifications depending on their job and seniority. In addition to the departmental servers and office systems, our research laboratories contain a wide variety of specialized equipment and facilities. We also have an extensive virtual-machine infrastructure spanning two machine rooms.
General computing systems include more than 800 Intel-based computers, about 50 Macintosh systems, and a variety of servers and virtual machines.
Our systems primarily run the Windows 10 operating system, with some still running Windows 7. A large number of systems, including many of the servers, run Ubuntu or Red Hat Linux, and others run macOS. We use the AFS file system for central file storage. The languages most commonly used include J++, C++, Java, Python, PHP, and C. Document preparation is usually accomplished with standard applications on PC systems. Our extensive software holdings are continually evolving. We are a Google shop, with unlimited storage in Google Drive and all core Google applications in use. UNC also offers Office 365 cloud services.
The network infrastructure available to Computer Science is extensive. In the Frederick P. Brooks, Jr. Building and Sitterson Hall, every office and common space is equipped with Category 6 or better twisted-pair cable. All offices in Sitterson also have coaxial and fiber-optic cable. All data connections in both buildings support 1 Gigabit per second, and a growing number of systems use 10 Gigabit connections. Extensive riser connections enable the department to create multiple separate physical networks between any points in the two buildings. Wi-Fi networks using 802.11a and 802.11n are available throughout both buildings.
A 10 Gigabit link connects the department’s network to the UNC-CH campus network and to the North Carolina Research and Education Network (NC-REN), a statewide network linking research and educational institutions; through these links users can reach the National LambdaRail/Internet2 network and the commodity Internet. We also have a 10 Gigabit connection through the Renaissance Computing Institute (RENCI) to the Global Environment for Network Innovations (GENI) project. Our two-way video classroom and teleconference rooms allow connection to any institution served by NC-REN.
UNC-Provided Computation Infrastructure
The University of North Carolina at Chapel Hill provides computing resources for researchers at UNC-Chapel Hill. Among these are:
- The Longleaf cluster is a Linux-based computing system available to researchers across the campus free of charge. With nearly 6,500 conventional compute cores delivering 13,000 threads (hyperthreading is enabled) and a large, fast scratch disk space, it provides an environment optimized for memory- and I/O-intensive, loosely coupled workloads, with an emphasis on aggregate job throughput over individual job performance. In particular, workloads consisting of a large number of jobs, each requiring a single compute host, are best suited to Longleaf. The cluster is targeted at data science and statistical computing workloads, very large memory jobs, and GPU computing. Resources are allocated through a fair-share algorithm, with SLURM as the resource manager/job scheduler.
- The Dogwood cluster is a Linux-based computing system available to researchers across the campus free of charge. With over 11,000 compute cores, a low-latency, high-bandwidth InfiniBand EDR switching fabric, and a large scratch disk space, it provides an environment optimized for large, multi-node, tightly coupled computational models, typically using a message-passing (e.g., MPI) or hybrid (OpenMP + MPI) programming model. Most of the cluster consists of nodes with Intel Xeon processors of the Broadwell-EP microarchitecture and 44 cores per node. There is a partition with newer Intel Skylake nodes, as well as a smaller knl partition with Intel Xeon Phi (Knights Landing) processors for development purposes. Resources are allocated through a fair-share algorithm, with SLURM as the resource manager/job scheduler.
- Carolina Cloud Apps, a web application development platform powered by Red Hat’s OpenShift Enterprise software.
- Virtual Computing Lab (VCL). Originally developed by N.C. State University in collaboration with IBM, the VCL provides users with anytime, anywhere access to custom application environments created specifically for their use. It gives users remote access to hardware and software that they would otherwise have to install on their own systems or visit a computer lab to use. It also reduces the burden on computer labs of maintaining large numbers of applications on individual lab computers, where some applications are difficult to make coexist on the same machine. In the VCL, operating system images with the desired applications and custom configurations are stored in an image library and deployed to a server on demand when a user requests them.
- Participation in the RHEDcloud (Research, Healthcare and Higher Education Cloud) Foundation. The RHEDcloud Project brings together people from research universities, pharmaceutical companies, healthcare providers, cloud service providers, and cloud platform providers to design and implement better security, automation, and integration for cloud computing.
- The Secure Research Workspace (SRW), which contains computational and storage resources specifically designed for managing and interacting with high-risk data. The SRW is used to store and access Electronic Health Records (EHR) and other highly sensitive or regulated data; it includes technical and administrative controls that satisfy applicable institutional policies, and it is designed as an enclave that minimizes the risk of storing and computing on regulated or sensitive data.
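To make the Longleaf usage model above concrete, a single-host, high-throughput workload of the kind it targets could be submitted to SLURM with a batch script along these lines. This is a sketch only: the resource amounts, module name, and script name (`analyze.py`) are illustrative assumptions, not Longleaf’s actual configuration.

```shell
#!/bin/bash
# Hypothetical SLURM script for a Longleaf-style throughput workload:
# many independent single-host jobs, expressed here as a 100-task job array.
#SBATCH --job-name=throughput
#SBATCH --nodes=1                 # Longleaf favors single-host jobs
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16g
#SBATCH --time=02:00:00
#SBATCH --array=1-100             # one array task per input chunk
#SBATCH --output=chunk_%A_%a.log  # %A = array job ID, %a = task index

# The "python" module and analyze.py are assumed for illustration.
module load python
python analyze.py --chunk "$SLURM_ARRAY_TASK_ID"
```

The script would be submitted with `sbatch`; SLURM’s fair-share policy then weighs the user’s recent usage when ordering the queued array tasks, which matches the cluster’s emphasis on aggregate throughput over individual job performance.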
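Similarly, a tightly coupled, multi-node job of the kind Dogwood targets might look like the following sketch; the node count, module name, and executable name are again assumptions rather than the cluster’s actual configuration.

```shell
#!/bin/bash
# Hypothetical SLURM script for a Dogwood-style MPI job spanning
# several Broadwell-EP nodes (44 cores each).
#SBATCH --job-name=mpi_model
#SBATCH --nodes=4                 # tightly coupled, multi-node workload
#SBATCH --ntasks-per-node=44      # one MPI rank per core
#SBATCH --time=12:00:00
#SBATCH --output=model_%j.log     # %j = job ID

# Module and executable names are illustrative.
module load openmpi
mpirun ./model input.dat          # ranks communicate over the InfiniBand fabric
```

For a hybrid OpenMP + MPI run of the sort mentioned above, one would instead request fewer ranks per node (e.g., `--ntasks-per-node=2 --cpus-per-task=22`) and set `OMP_NUM_THREADS` to match `--cpus-per-task`.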