Instructional HPC & Clustered Computing Resources

Computation has joined theory and experiment as the third pillar of scientific discovery.  Multi-threaded and parallel programs are needed to drive the manycore devices of the future, and modern applications produce or require huge volumes of data, demanding computing systems with data-intensive capabilities.

Through Georgia Tech Technology Fee grants, the College of Computing has acquired and operates three high-performance computing (HPC) clusters for use in courses on parallel/distributed algorithms, large-scale data analysis, multi-core programming, and other computing fields.  Faculty teaching courses in the School of Computational Science and Engineering (CSE) or the School of Computer Science (CS), as well as affiliated faculty in the School of Electrical and Computer Engineering (ECE), may request access to these HPC resources before the semester begins for their class projects.  TSO uses the following guidelines when granting requests for access to these HPC resources:

  • To maintain software reliability for all courses involved, only faculty requests sent to academicresources@cc prior to the start of the semester will be considered.
  • To optimize resource utilization, special projects course needs are considered only after all core and breadth course needs are met.
  • To optimize resource utilization, research jobs will be considered on a case-by-case basis when HPC resources are not in use by courses.  In most cases, this policy is enforced automatically by the job scheduler configuration on our HPC systems (see the sketch after this list).
  • Because HPC systems must often be specialized for particular classes of problems or specific programming tools, we will not be able to accommodate all requests.
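
As an illustration of how a scheduler configuration can enforce the course-first policy above, the excerpt below sketches a Slurm-style setup in which research jobs run in a lower-priority partition and are requeued whenever course jobs need the nodes.  This is a hypothetical sketch only: the partition and node names are invented, and the scheduler actually used on these clusters, along with its settings, may differ.

    # Hypothetical slurm.conf excerpt (illustrative only; the clusters'
    # actual scheduler and configuration may differ).
    PreemptType=preempt/partition_prio
    PreemptMode=REQUEUE

    # Course jobs run in the default, higher-priority partition.
    PartitionName=coursework Nodes=node[01-30] Default=YES PriorityTier=10

    # Research jobs run in a lower-priority partition and are requeued
    # whenever coursework jobs need the nodes.
    PartitionName=research Nodes=node[01-30] PriorityTier=1 PreemptMode=REQUEUE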

These HPC resources are housed in the data center in Room 247 of the College of Computing Building.  For details, please refer to the individual cluster pages linked in the table below:

Instructional HPC Resources

Jinx (30 nodes, 336 cores)
  • 12 nodes, 144 cores: HP ProLiant SL390 (2-socket, 6-core Intel Xeon X5650, 24GB RAM, 380GB local scratch disk, QDR InfiniBand, 2 NVIDIA Tesla M2090 GPU cards)
  • 12 nodes, 144 cores: HP ProLiant SL390 (2-socket, 6-core Intel Xeon X5650, 24GB RAM, 380GB local scratch disk, QDR InfiniBand, 2 NVIDIA Tesla M2070 GPU cards)
  • 6 nodes, 48 cores: Dell PowerEdge R710 (2-socket, 4-core 2.66GHz Intel Xeon X5570, 48GB RAM, 2TB local scratch disk, QDR InfiniBand)
  Operating system: Red Hat Enterprise Linux 6
  Access: Supports CSE, CS, and ECE courses requiring parallel and/or distributed compute jobs and/or medium- to large-scale data processing jobs.

Deepthought (19 nodes, 640 cores)
  • SuperMicro (2-socket, 16-core 2.4GHz AMD Opteron 6378, 256GB RAM, 128GB SSD, QDR InfiniBand to the Dune Storage Cluster, 10Gbps IP network cards)
  Operating system: Red Hat Enterprise Linux 6
  Access: Supports CSE, CS, and ECE courses requiring parallel and/or distributed compute jobs and/or medium- to large-scale data processing jobs.

Dune Storage Cluster (5 nodes)
  • SuperMicro (2-socket, 6-core Intel Xeon, 32GB RAM, 10Gbps Ethernet, serving a 500TB GlusterFS distributed file system)
  Operating system: Red Hat Enterprise Linux 6
  Access: Available for computational jobs on the Deepthought and Jinx clusters.

Factor (21 nodes, 216 cores)
  • Dell PowerEdge R610 (2-socket, 4-core 2.67GHz Intel Xeon X5550, 48GB RAM, 7TB of storage)
  • Dell PowerEdge R620 (2-socket, 6-core 2.0GHz Intel Xeon E5-2630L, 128GB RAM)
  Operating system: OpenStack
  Access: Virtual machine (VM) farm in support of School of CS courses in the areas of energy-aware, distributed, and multi-threaded programming, as well as Linux kernel hacking.

Total: 75 nodes, 1192 cores