IPCS has been established and continuously extended to meet the needs of LiuRG for scientific computing, such as multiphysics simulations with extremely nonlinear material properties, computational fluid dynamics, molecular dynamics, and smoothed particle hydrodynamics. IPCS is currently equipped with one self-assembled micro cluster (Phenix) and a 64-core workstation (Ostrich). Another hybrid research and educational cluster is being planned. These computing facilities feature highly flexible infrastructure. All of them are connected with LICOP, LUP, and REL. When necessary, access to the larger public clusters on the MTU campus, Superior and Portage, is obtained to accommodate additional needs.
In Greek mythology, a phoenix or phenix (Greek: φοῖνιξ phoinix) is a long-lived bird that is cyclically regenerated or reborn. Associated with the sun, a phoenix obtains new life by arising from the ashes of its predecessor (Wikipedia). This micro-scale cluster was built with retired computers from Michigan Tech, so the birth of a new computing facility from obsolete machines resembles a phoenix, more powerful than its previous incarnation. The retired dual-core desktops (Intel 2.13 GHz, 4 GB RAM) are connected with a 1 Gbps Ethernet network and a 20 Gbps InfiniBand network for communication and data, respectively. The cluster currently has one head node and five compute nodes. It serves as a testbed for new code and new didactics.
The TitanUS A450 is an example of "no one telling the system builders that something can't be done, so they went ahead and did it anyway": sixty-four physical CPU cores (32 as standard) in a full-tower case with, unlike competing builds, no penalty in airflow or thermal envelope. Custom chassis modifications eliminate heat and cable-management issues in this top-end powerhouse. This is achieved by using a quad-socket Supermicro motherboard designed for a large server chassis; with a little design magic (and a lot of napkin sketches), everything not only fits together but leaves room for expansion.
Case: Rosewill BLACKHAWK-ULTRA Super Tower Computer Case with Six Fans
Processor: 4x AMD Opteron 6386 SE Abu Dhabi 2.8 GHz (3.5 GHz Turbo Core) 16 MB L2 Socket G34 140 W 16-Core (64 Cores Total)
Motherboard: Supermicro H8QG7-LN4F Motherboard G34 Quad AMD 45 nm Opteron 6300 CPU w/ LSI 8-port SAS controller
Memory: 128 GB (4 x 32 GB) DDR3 SDRAM ECC Registered DDR3 1866 Quad Channel Server Memory
Power: Rosewill HERCULES Continuous 1600 W @ 50 °C 80 PLUS SILVER Modular Active PFC Power Supply
Video Card 1: NVIDIA Quadro K620 2 GB 128-bit DDR3 PCI Express 2.0 x16 Workstation Video Card
Hard Drive 1: Western Digital WD VelociRaptor 1 TB 10000 RPM 64 MB Cache SATA 6.0 Gb/s 3.5" Internal Hard Drive
Hard Drive 2: Western Digital WD VelociRaptor 1 TB 10000 RPM 64 MB Cache SATA 6.0 Gb/s 3.5" Internal Hard Drive
Hard Drive 3: Western Digital WD VelociRaptor 1 TB 10000 RPM 64 MB Cache SATA 6.0 Gb/s 3.5" Internal Hard Drive
Hard Drive 4: Western Digital WD VelociRaptor 1 TB 10000 RPM 64 MB Cache SATA 6.0 Gb/s 3.5" Internal Hard Drive
CD/DVD: Black 2 MB Cache 24X DVD Burner drive
Network: Intel X520-DA2 Dual Ports 10 Gigabit Ethernet Converged Network Adapter, PCI Express 2.0 x8
USB/FireWire: Inateck 5 Ports USB 3.0 Expansion PCI-E Card
Wireless: TP-LINK TL-WDN3200 Dual Band Wireless N600 USB Adapter, 2.4 GHz 300 Mbps / 5 GHz 300 Mbps, w/ WPS Button
OS: Linux Ubuntu Server 12.04 LTS 64-bit - Media - No Technical Support
CPU Fans: 4x Noctua NH-U9DO A3 AMD Opteron, 4 Dual Heat-pipe SSO Bearing Quiet CPU Cooler
Free Gift: SYBA 7.1 Channels 24-bit 48 kHz PCIe x1 Interface Surround Sound Card
Warranty: 4-Year Standard Parts + Lifetime Labor & Business-Hour Tech Support
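As a quick sanity check on the build sheet above, the headline figures for Ostrich follow directly from the parts list; a minimal sketch, using only the quantities quoted in the specification:

```python
# Totals implied by the Ostrich parts list above.
SOCKETS = 4            # quad-socket Supermicro H8QG7-LN4F
CORES_PER_CPU = 16     # AMD Opteron 6386 SE, 16 cores each
DIMMS = 4              # 4 x 32 GB DDR3 ECC Registered
GB_PER_DIMM = 32
DRIVES = 4             # 4 x 1 TB WD VelociRaptor
TB_PER_DRIVE = 1

total_cores = SOCKETS * CORES_PER_CPU        # 64 physical cores
total_ram_gb = DIMMS * GB_PER_DIMM           # 128 GB RAM
total_storage_tb = DRIVES * TB_PER_DRIVE     # 4 TB raw storage

print(total_cores, total_ram_gb, total_storage_tb)
```

This confirms the "64 Cores Total" and "128 GB" entries are mutually consistent with the per-part quantities.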
The real-time status of Ostrich is as follows (whenever it is working):
On-Campus Computing Facilities
Named after the greatest of the great lakes, Lake Superior, and built with Rocks Cluster Distribution 6.1.1 (with CentOS 6.3), Superior is a central high performance computing cluster.
It is used for a variety of research projects, including those involving confidential and sponsored data.
It has 1 front end, 2 login nodes, one 48 TB RAID60 NAS node (with 33 TB usable space), 88 CPU compute nodes [each having 16 CPU cores (Intel Sandy Bridge E5-2670 2.60 GHz) and 64 GB RAM] and 5 GPU compute nodes [each having 16 CPU cores (Intel Sandy Bridge E5-2670 2.60 GHz), 64 GB RAM and 4 NVIDIA Tesla M2090 GPUs].
A gigabit ethernet backend network serves the administrative needs of this cluster, and a 56 Gb/s InfiniBand network serves its computing needs.
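From the node counts quoted above, Superior's aggregate compute capacity can be tallied; a back-of-the-envelope sketch (derived figures, not totals stated by the facility):

```python
# Aggregate capacity of Superior's compute partition,
# using only the node counts in the description above.
CPU_NODES = 88          # CPU compute nodes
GPU_NODES = 5           # GPU compute nodes
CORES_PER_NODE = 16     # Intel Sandy Bridge E5-2670 per node
RAM_PER_NODE_GB = 64    # per compute node
GPUS_PER_GPU_NODE = 4   # NVIDIA Tesla M2090 per GPU node

total_cores = (CPU_NODES + GPU_NODES) * CORES_PER_NODE    # 1488 cores
total_ram_gb = (CPU_NODES + GPU_NODES) * RAM_PER_NODE_GB  # 5952 GB
total_gpus = GPU_NODES * GPUS_PER_GPU_NODE                # 20 GPUs

print(total_cores, total_ram_gb, total_gpus)
```

So the compute partition alone offers roughly 1,488 CPU cores, about 5.8 TB of aggregate RAM, and 20 GPUs, excluding the front end, login nodes, and NAS node.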