General Computing Environment
The department’s computing environment includes over 1000 computers, ranging from older systems used for generating network traffic in simulated Internet experiments to state-of-the-art workstations and clusters for graphics- and compute-intensive research. Departmental servers provide compute service, disk space, email, source code repositories, web service, database services, backups, and many other services. All systems are integrated by means of high-speed networks, described below, and are supported by a highly skilled technical staff who provide a consistent computing environment throughout the department. Most students are assigned to a two- or three-person office, though we also have larger offices that can hold more students. Each student is assigned a computer, with computer assignments based on the student’s research or teaching assignments and seniority within the department. Currently the minimum configuration for student computers is a desktop system with a dual-core, 64-bit 2 GHz processor, 4 GB of RAM, and a 160 GB hard drive, though many students have computers that exceed these specifications, depending on their job and seniority. In addition to the departmental servers and office systems, our research laboratories contain a wide variety of specialized equipment and facilities.
General computing systems include more than 800 Intel-based computers, about 50 Macintosh systems, and a variety of servers and virtual machines.
Our systems primarily run the Windows 10 operating system, with some still running Windows 7 and a large number of systems, including many of the servers, running Ubuntu or Red Hat Linux. We use the AFS file system for central file storage. The languages most commonly used include J++, C++, Java, Python, PHP, and C. Document preparation is usually accomplished with standard applications on PC systems. Our extensive software holdings are continually evolving. We are also a Google shop, with unlimited storage in Google Drive and use of all core Google applications.
The network infrastructure available to Computer Science is extensive. In the Frederick P. Brooks, Jr. Building and Sitterson Hall, every office and common space is equipped with Category 6 or better twisted-pair cable. All offices in Sitterson Hall also have coaxial and fiber-optic cable. All data connections in both buildings support 1 gigabit per second. Extensive riser connections enable the department to create multiple separate physical networks between any points in the two buildings. Wi-Fi networks using 802.11a and 802.11n are available throughout both buildings.
A 10-gigabit link connects the department’s network to the UNC-CH campus network and to the North Carolina Research and Education Network (NC-REN), a statewide network that links research and educational institutions; through these connections, users can access the National LambdaRail/Internet2 network and the commodity Internet. We also have a 10-gigabit connection through the Renaissance Computing Institute (RENCI) to the Global Environment for Network Innovations (GENI) project. Our two-way video classroom and teleconference room allow connection to any institution served by NC-REN.
UNC-Provided Computation Infrastructure
The University of North Carolina at Chapel Hill provides computing resources for researchers at UNC-Chapel Hill. Among these are the KillDevil and Longleaf clusters.
The KillDevil cluster is a Linux-based computing system available to researchers across the campus. With more than 9500 computing cores across 774 servers and a large scratch disk space, it provides an environment that can accommodate many types of computational problems. The compute nodes are interconnected with a high-speed InfiniBand network, making the cluster especially appropriate for large parallel jobs. KillDevil is a heterogeneous cluster with at least 48 GB of memory per node; in addition, there are nodes with extended memory, extremely large memory, and GPGPU computing. The KillDevil compute cluster is designed for tightly coupled, interconnect-intensive parallel workloads that use the MPI framework. In particular, workloads in which a single job requires multiple compute hosts are best suited to KillDevil.
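Jobs of this kind are submitted through KillDevil's LSF scheduler. As a minimal sketch, a multi-host MPI submission might look like the following; the queue name, the Open MPI integration flag, and the executable name are illustrative assumptions rather than documented KillDevil settings:

```shell
# Hypothetical LSF submission of a tightly coupled MPI job.
# -n 96 requests 96 cores, which LSF may spread across several hosts;
# the InfiniBand interconnect then carries the MPI traffic between them.
# The queue name ("week") and the executable are assumptions for illustration.
bsub -q week -n 96 -a openmpi -o mpi.%J.out mpirun ./my_mpi_app
```

Because LSF allocates the 96 cores wherever they are free, a single job spans many hosts, which is exactly the usage pattern KillDevil's high-speed interconnect is built for.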
The Longleaf compute cluster is designed for memory- and I/O-intensive, loosely coupled workloads, with an emphasis on aggregate job throughput over individual job performance. In particular, workloads consisting of a large number of jobs, each requiring a single compute host, are best suited to Longleaf.
Longleaf includes General Purpose nodes (120), Big Data nodes (30), Big Memory nodes (5, with 3 TB of memory each), and GPU nodes (5, with eight GeForce GTX 1080 cards each). The filesystem, named “Pine,” is purpose-built and designed explicitly for intensive I/O activity, with a large filesystem space. Although some of the customary directories are present on Longleaf, we have created some new directories on the purpose-built filesystem, and home directories are new. We expect to add compute and storage capacity regularly. The scheduler is different as well: Longleaf uses SLURM instead of LSF. A large set of software applications, libraries, and compilers is available.
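To illustrate the scheduler difference, a single-host, throughput-style Longleaf job is described in a SLURM batch script rather than with LSF's bsub options. The sketch below assumes a partition name, resource limits, and workload chosen purely for illustration:

```shell
#!/bin/bash
# Hypothetical SLURM batch script for a single-host, throughput-style job.
# The partition name, resource limits, and the program invoked are assumptions.
#SBATCH --job-name=throughput-example
#SBATCH --partition=general        # assumed partition name
#SBATCH --ntasks=1                 # one task on a single compute host
#SBATCH --cpus-per-task=4
#SBATCH --mem=8g
#SBATCH --time=02:00:00
#SBATCH --output=slurm-%j.out      # %j expands to the SLURM job ID

python3 analyze.py input.dat       # placeholder workload
```

Many such independent scripts can be submitted with `sbatch`, one per input file, which matches Longleaf's emphasis on aggregate throughput over individual job performance.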