NOTICE: Deepthought has been updated to use the slurm queuing system. We will be updating the information here to reflect that shortly.
Provided through a Fiscal Year 2014 Technology Fee grant, the College of Computing operates Deepthought, a high-performance computing cluster that serves several CoC and ECE courses. Using this cluster, students learn high-performance and data-intensive computing through core, breadth, and elective courses. The Deepthought cluster is located in the College of Computing Building Data Center (CCB 247).
Usage:
- How to Access the Deepthought Cluster
- Compiling Programs on the Deepthought Cluster
- How to Run Jobs on the Deepthought Cluster
- How to Forward X from Deepthought Nodes
- Torque to Slurm Command Conversion (temporary link to the new Slurm commands while we update our documentation)
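Until the pages above are fully updated for Slurm, a minimal batch script may help illustrate the new workflow. This is a sketch only: the job name, resource requests, and the executable `hello_mpi` are placeholder assumptions, not actual Deepthought configuration.

```shell
#!/bin/bash
# Sketch of a Slurm batch script (submit with: sbatch job.sbatch).
# Under Torque this would have been a #PBS script submitted with qsub.
#SBATCH --job-name=hello        # job name (Torque: #PBS -N)
#SBATCH --nodes=2               # number of nodes (Torque: #PBS -l nodes=2)
#SBATCH --ntasks-per-node=32    # one task per core on a 32-core node
#SBATCH --time=00:10:00         # wall-clock limit (Torque: #PBS -l walltime=...)
#SBATCH --output=hello.%j.out   # stdout file; %j expands to the job ID

# srun launches the MPI ranks for the job allocation.
srun ./hello_mpi
```

For interactive use, `squeue` replaces `qstat` for viewing the queue, and `scancel <jobid>` replaces `qdel <jobid>` for cancelling a job.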
Hardware Features:
The Deepthought cluster consists of 19 compute nodes with identical hardware configurations:
- 19 Supermicro nodes, each with: 2 AMD Abu Dhabi 16-core processors (32 cores per node), 256GB memory, 10Gb Ethernet, QDR InfiniBand
Software Features:
- Red Hat Enterprise Linux 6
- Slurm Workload Manager
- OpenMPI 1.8.1
- GCC 4.4
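As a rough illustration of building against this software stack, an MPI C program is typically compiled with the OpenMPI wrapper around the system GCC. The file name `hello_mpi.c` is a hypothetical placeholder; note that GCC 4.4 predates full C99/C11 support, so newer language features may not compile.

```shell
# Compile a hypothetical MPI C source file with the OpenMPI wrapper,
# which drives the underlying gcc (4.4 on this cluster).
mpicc -O2 -o hello_mpi hello_mpi.c

# The wrapper simply adds the MPI include and library flags to gcc;
# OpenMPI lets you inspect the full underlying command line with:
mpicc --showme
```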
User Account Access:
- Authentication via Georgia Tech Enterprise Directory (GTED) credentials (not your CoC account).
- Access is limited to students enrolled in current-semester CoC and ECE courses for which the faculty member has requested cluster access.
- Home directories are used only by the Instructional HPC clusters; they are not shared with other CoC systems.
- Student home directories are wiped at the end of each semester. Please ensure that you transfer your data to a more permanent location before your access expires.
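Because home directories are wiped each semester, copy any results off the cluster well before the end of term. A sketch from your own machine, where the host name `deepthought.cc.gatech.edu` and username `gburdell3` are placeholders (use the actual values from the access page above):

```shell
# Recursively copy a results directory from the cluster to your machine.
scp -r gburdell3@deepthought.cc.gatech.edu:~/results ./deepthought-results

# Or archive on the remote side first, which preserves permissions and
# avoids per-file transfer overhead for many small files:
ssh gburdell3@deepthought.cc.gatech.edu 'tar czf - results' > deepthought-results.tar.gz
```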