The following information can be used to complete grant proposals that include funding for access to Advanced Research Computing's (ARC) high performance computing, storage, and cloud computing resources.
Data Centers
ARC's dedicated professional IT staff monitor and maintain all systems in the following facilities for security and stability. They also provide end-user support, training, and outreach.
Modular Data Center
The Modular Data Center (MDC) is a 1 MW data center owned by the University of Michigan (U-M). At 1,000 square feet, the MDC provides a high power density of 24 kW per rack. The MDC uses ambient-air cooling and can cool research systems far more efficiently than traditional closed-loop data centers, reducing electricity demand and environmental impact. The facility provides conditioned and short-term backup power to protect all internal systems and is connected to the U-M research backbone via two redundant high-speed connections.
Michigan Academic Computing Center
The Michigan Academic Computing Center (MACC) is a 2 MW data center operated by U-M in a leased building. The MACC provides a common space where multiple research IT systems are able to connect over high-speed networks to facilitate collaboration and interaction. The facility provides conditioned and backup power to protect all internal systems and is connected to the U-M research backbone via two redundant high-speed connections.
Administrative Services Building
The Administrative Services Building (ASB) is a U-M owned and operated data center. The ASB provides a common space where multiple research IT systems are able to connect over high-speed networks to facilitate collaboration and interaction. The facility provides conditioned and backup power to protect all internal systems and is connected to the U-M research backbone via two redundant high-speed connections.
High Performance Computing
Great Lakes
Great Lakes is U-M's primary shared High Performance Computing (HPC) resource. A Linux cluster of over 16,000 cores, Great Lakes provides flexibility for a wide range of faculty projects. Its primary features are standard compute nodes, large-memory nodes with over 1.5 TB of RAM per node, and GPU accelerators. For the user environment, Great Lakes provides a full library of over 100 open-source and commercial application and software-development tools.
Great Lakes is housed in the Michigan Academic Computing Center (MACC), a U-M operated facility. All Great Lakes nodes are connected by two networks: an Ethernet management and data network, and a high-speed InfiniBand fabric supporting parallel computation and data movement at up to 100 Gbps per host. Great Lakes connects to the U-M research backbone with 160 Gbps of bandwidth. For working storage, a Spectrum Scale (GPFS) file system provides 2.0 PB of scratch space capable of over 7 GB/s of read and write performance, and a 100 TB Infinite Memory Engine (IME) system provides over 80 GB/s of read and write performance.
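To illustrate the kind of parallel workload such a fabric supports, the sketch below splits a computation across MPI ranks and combines the results. It is a generic, hypothetical example assuming the mpi4py Python package, not ARC-specific code, and would typically be launched across nodes with mpirun or a scheduler's equivalent.

    # Minimal MPI sketch (illustrative only; assumes mpi4py is available).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD    # communicator spanning all launched ranks
    rank = comm.Get_rank()   # this process's ID within the job
    size = comm.Get_size()   # total number of ranks in the job

    # Each rank sums a strided slice of 0..999,999; when ranks land on
    # different hosts, the reduction traffic crosses the cluster fabric.
    local_sum = sum(range(rank, 1_000_000, size))
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"sum over {size} ranks: {total}")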
Armis2
Armis2 is a shared Linux High Performance Computing (HPC) cluster. With over 2,000 cores, large-memory nodes (1.5 TB or more), and GPU accelerators, Armis2's primary use case is sensitive data such as export-controlled, human-subject, clinical (PHI/HIPAA), or proprietary data. For the user environment, Armis2 provides a full library of over 100 open-source and commercial application and software-development tools.
Armis2 is housed in the Modular Data Center (MDC), a U-M owned facility. All Armis2 nodes are connected by two networks: an Ethernet management and data network, and a high-speed InfiniBand fabric supporting parallel computation and data movement at 100 Gbps per host. The cluster is connected to the U-M research backbone with 160 Gbps of bandwidth. For working storage, a scratch file system provides over 150 TB of space capable of up to 10 GB/s of read and write performance per stream.
Armis2 is open to all researchers and their collaborators, with access controlled through a virtual private network (VPN) and two-factor authentication.
Lighthouse
Lighthouse provides an HPC co-location (also known as condo) service for nodes owned by faculty and specific research teams. Lighthouse provides a professionally managed environment and all the components required to augment these owned nodes into a complete HPC ecosystem that accelerates science. The cluster currently has several thousand CPU cores and hundreds of GPUs.
Lighthouse is located in the Modular Data Center (MDC), a U-M owned facility. All Lighthouse nodes are connected by two networks: an Ethernet management and data network, and a high-speed InfiniBand fabric supporting parallel computation and data movement at up to 200 Gbps per host. Lighthouse connects to the U-M research backbone with 400 Gbps of bandwidth. For working storage, a high-performance file system provides ample capacity for in-flight work and is capable of up to 10 GB/s of read and write performance per stream.
Storage
Turbo
Turbo is a high-capacity, fast, reliable, and secure data storage service that allows investigators across U-M to connect their data to the computing resources necessary for their research. Currently offering over 10 PB of capacity, Turbo is optimized for performance across the full range of file sizes used in research.
Turbo is acceptable for use with sensitive data such as export-controlled, human-subject, clinical (PHI/HIPAA), and proprietary data when paired with a comparably secured computational resource. Every year Turbo undergoes two external security reviews. The first is a policy and access-control review based on NIST 800-53r4. The second is a penetration test, in which a third-party "white hat" team attempts to gain unauthorized access to the environment or data.
Turbo is located in the Michigan Academic Computing Center (MACC), with replica data in the ASB, and has dedicated access to the high-speed research backbone, so researchers can use it as a central data storage location shared among computing clusters, workstations, and lab devices.
Locker
Locker is a high-capacity, reliable, and secure data storage service, optimized for bulk data, that allows U-M investigators to connect their data to the computing resources necessary for their research. Currently offering over 5 PB of capacity, Locker is optimized for files averaging 1 MB or larger.
Locker is intended for use with sensitive data such as export-controlled, human-subject, clinical (PHI/HIPAA), and proprietary data when paired with a comparably secured computational resource. Every year Locker undergoes two external security reviews. The first is a policy and access-control review based on NIST 800-53r4. The second is a penetration test, in which a third-party "white hat" team attempts to gain unauthorized access to the environment or data.
Locker is located in the Michigan Academic Computing Center (MACC), with replica data in the ASB, and has dedicated access to the high-speed research backbone, so researchers can use it for bulk data that does not require peak performance.
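Because Locker favors files of roughly 1 MB and larger, workflows that generate many small files often bundle them into larger archives before writing them to bulk storage. The sketch below shows one generic way to do this with Python's standard tarfile module; the directory and mount paths are hypothetical.

    # Bundle a directory of small files into one compressed archive
    # before placing it on bulk storage (paths are hypothetical).
    import tarfile
    from pathlib import Path

    source_dir = Path("results/run_042")            # many small output files
    archive = Path("/locker/mylab/run_042.tar.gz")  # hypothetical Locker path

    with tarfile.open(archive, "w:gz") as tar:
        for f in sorted(source_dir.rglob("*")):
            if f.is_file():
                tar.add(f, arcname=str(f.relative_to(source_dir)))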
Data Den
Data Den is a massive-capacity, reliable, and secure data archive service. Currently offering over 20 PB of replicated capacity, Data Den gives faculty directed control over access to their data, supporting public sharing and worldwide collaborator access.
Data Den is intended for use with sensitive data such as export-controlled, human-subject, clinical (PHI/HIPAA), and proprietary data when paired with a comparably secured computational resource. Every year Data Den undergoes two external security reviews. The first is a policy and access-control review based on NIST 800-53r4. The second is a penetration test, in which a third-party "white hat" team attempts to gain unauthorized access to the environment or data.
Data Den is located in the Michigan Academic Computing Center (MACC), with replica data in the ASB, and has dedicated access to the high-speed research backbone, so researchers can use it for bulk cold data requiring long-term preservation or as an active-archive back end for Locker volumes.
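Worldwide collaborator access of this kind is typically brokered by a managed transfer service. As an illustration only, the sketch below uses the Globus Python SDK (globus-sdk) to stage an archived data set to a working file system; the endpoint IDs, paths, and token handling are hypothetical, and nothing here should be read as ARC's actual configuration.

    # Hypothetical managed transfer using the globus-sdk package.
    import globus_sdk

    TOKEN = "REPLACE-ME"           # a real token comes from Globus Auth
    SRC = "source-endpoint-uuid"   # hypothetical archive endpoint
    DST = "dest-endpoint-uuid"     # hypothetical scratch endpoint

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(TOKEN)
    )
    tdata = globus_sdk.TransferData(tc, SRC, DST, label="archive retrieval")
    tdata.add_item("/archive/mylab/dataset.tar", "/scratch/mylab/dataset.tar")
    task = tc.submit_transfer(tdata)
    print("submitted transfer task:", task["task_id"])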
Cloud Computing
Secure Enclave Services
Secure Enclave Services (SES) provides researchers with high-performance, secure, and flexible computing environments that enable the analysis of sensitive data sets restricted by federal privacy laws, proprietary access agreements, or confidentiality requirements.
Currently, SES is approved for PHI/HIPAA data, Controlled Unclassified Information (CUI) governed by NIST 800-171, and many other forms of sensitive data, with strict access and audit controls.
Hosted in the Michigan Academic Computing Center (MACC), SES consists of 2,560 hyperthreaded CPU cores, 20 TB of RAM, 640 TB of NVMe storage, and 10 NVIDIA A100 GPUs. Each member node is connected by redundant 100 Gbps Ethernet and uses software-defined networking (SDN) to isolate each SES environment for data protection. For global network access, the SES site is connected by dual 80 Gbps Ethernet links to the U-M research backbone.
Networking
The research backbone provides multiple redundant 400 Gbps connections between U-M data centers and key research resources. The backbone has no single point of failure and is managed by a professional networking team in Information Technology Services, the central IT provider for U-M. The network provides multiple paths to the commodity Internet, with over 100 Gbps of capacity, as well as a dedicated 100 Gbps connection to the Internet2 national research network.
AWS Secure Enclave
The ARC AWS Secure Enclave is a secure and flexible environment for the analysis of sensitive data sets restricted by proprietary access agreements or confidentiality requirements.
Hosted in Amazon Web Services, these enclaves consist of a virtual private cloud with a deny-all, allow-specific firewall and a bastion host to prevent accidental or purposeful egress of data from the enclave. The resources within an enclave are configurable to the research problem at hand but generally consist of Linux or Windows workstations for statistical analysis. Before any data is allowed into the environment, each enclave undergoes a review with the data provider to ensure that the enclave's security meets the data provider's requirements.
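As a hedged illustration of the deny-all, allow-specific pattern (not ARC's actual provisioning code), the following sketch uses the AWS boto3 SDK to create a security group that blocks all traffic by default and then admits only SSH from a hypothetical bastion host.

    # Sketch of a deny-all, allow-specific firewall rule set via boto3.
    # All IDs and addresses are hypothetical.
    import boto3

    ec2 = boto3.client("ec2")

    sg = ec2.create_security_group(
        GroupName="enclave-workstations",
        Description="deny all by default; allow only the bastion",
        VpcId="vpc-0123456789abcdef0",   # hypothetical enclave VPC
    )
    sg_id = sg["GroupId"]

    # New security groups start with no ingress rules; explicitly allow
    # SSH from the bastion host alone.
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
            "IpRanges": [{"CidrIp": "10.0.1.10/32",
                          "Description": "bastion host"}],
        }],
    )

    # Remove the default allow-all egress rule so data cannot leave
    # the enclave through this group.
    ec2.revoke_security_group_egress(
        GroupId=sg_id,
        IpPermissions=[{"IpProtocol": "-1",
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
    )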