
UMIACS Computer and Network Infrastructure

 

UMIACS fosters innovative interdisciplinary research by supporting an advanced research-computing infrastructure with services and resources for:

High Performance Computing on a variety of architectures, including clusters of computers acting as distributed-memory parallel systems, Symmetric Multi-Processing systems, and General-Purpose GPU systems (a minimal distributed-memory sketch follows this list).

Distributed Computing on a high-speed Local Area Network with load-balancing services and fast Wide Area Network connectivity through the Mid-Atlantic Crossroads (MAX), the Next Generation Internet Exchange (NGIX), Internet2, and National LambdaRail.

Data Intensive Computing on a variety of disk, tape, and file-system platforms that host major data collections for our labs and the national research community. The Institute currently maintains over 400 TB of long-term persistent data in the areas of Medical Imaging, Computational Biology, Computational Linguistics, Computational Fluid Dynamics, and Computer Vision.

Visual Computing on very high-resolution tiled displays, immersive visualization environments, and 3D stereoscopic displays.

Private Cloud and Web Hosting environments running on the VMware and KVM hypervisors, which deliver Applications, Platforms, and Infrastructure as a Service to our labs.
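
The distributed-memory clusters named above are typically programmed with message passing. The sketch below is a hypothetical illustration rather than UMIACS code: it uses MPI to split a summation across ranks and combine the partial results on rank 0, with the slice size and rank counts chosen purely for demonstration.

    /* Minimal distributed-memory sketch (hypothetical): each MPI rank
     * sums its own slice of the integers, then MPI_Reduce combines
     * the partial sums on rank 0. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank owns the disjoint slice [rank*100, rank*100 + 99]. */
        long local = 0;
        for (int i = rank * 100; i < (rank + 1) * 100; i++)
            local += i;

        long total = 0;
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %ld\n", size, total);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched with mpirun -np <ranks>, the same binary runs unchanged on one node or across many, which is what makes the distributed-memory cluster model attractive.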

Each of the Institute’s facilities is led by distinguished faculty members who direct the work of our researchers, systems administrators, network engineers, and programmers. This interaction between academic researchers and information-technology professionals equips the Institute’s leading research programs with cutting-edge technologies and forward-thinking infrastructure.

While no single system is extremely large in scale, the site as a whole is a substantial installation with 1,200 supported computers, 2,500 network ports, and 400 terabytes of managed data. Some of our deployments include:

  • The Chimera Cluster is a high-performance computing and visualization cluster that exploits the synergies afforded by coupling central processing units (CPUs), graphics processing units (GPUs), displays, and storage, acquired under an infrastructure grant from the National Science Foundation. The infrastructure supports a broad program of computing research that revolves around understanding, augmenting, and leveraging the power of heterogeneous vector computing enabled by GPU co-processors. The cluster comprises a 128-processor Linux-based visualization cluster built on the Intel Xeon platform and interconnected with InfiniBand; each node has 24 GB of memory and an NVIDIA Tesla S1070 GPU. The nodes are coupled with an 8-terabyte shared file system and a 50-megapixel display wall made up of 25 LCD monitors.
  • The Skoll cluster, supported by funding from the Office of Naval Research (ONR) and DARPA, is dedicated to exploring the possibilities of Distributed Continuous Quality Assurance. It comprises a 120-processor cluster built on the Intel Pentium platform, running a mix of operating systems including Linux and Windows, and coupled with a 3-terabyte network-attached storage system.
  • The CBCB Computing Facilities. Scientists at the Center for Bioinformatics and Computational Biology are involved in many different genome sequencing projects, both as principal investigators and as collaborators. CBCB brings together scientists and engineers from many fields, including computer science, molecular biology, genomics, genetics, mathematics, statistics, and physics, all of whom share a common interest in gaining a better understanding of how life works. The primary computing cluster consists of a 288-core Linux-based cluster built on the 64-bit AMD Opteron platform and interconnected with Gigabit Ethernet. These nodes are coupled with a 30-terabyte network-attached storage system, a MySQL relational database that provides access to data stored on a 3PAR InServ storage server, and a 128-processor Hadoop cluster with 20 TB of usable storage (see the Streaming sketch after this list). The cluster also includes several large-memory computing nodes built on the AMD Opteron platform: four dual-processor nodes with 8 GB of memory each, and three quad-processor nodes with 32 GB of memory each.
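
Hadoop MapReduce jobs are normally written in Java, but Hadoop Streaming lets any executable that reads lines on stdin and writes tab-separated key/value pairs on stdout serve as a mapper or reducer. The word-count mapper below is a hypothetical illustration of the kind of job a Hadoop cluster runs, not CBCB code.

    /* Hypothetical Hadoop Streaming mapper in C: emit "<word>\t1"
     * for every whitespace-separated token read from stdin. A
     * reducer downstream sums the counts per word. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[4096];
        while (fgets(line, sizeof line, stdin)) {
            for (char *tok = strtok(line, " \t\r\n"); tok != NULL;
                 tok = strtok(NULL, " \t\r\n"))
                printf("%s\t1\n", tok);
        }
        return 0;
    }

Submitted through the hadoop-streaming jar with -mapper and -reducer arguments, the framework handles splitting the input across nodes and shuffling intermediate keys, so the C program itself stays a simple filter.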