How to Compile Programs on Instructional HPC Clusters

The Instructional HPC clusters have several compilers available on their head nodes, deepthought-login and jinx-login, as well as on all of their compute nodes.  Compile your code either on a compute node (via an interactive job) or on your personal machine; please do not compile on the login nodes.  For example, start an interactive session with

{deepthought-login}$ qsub -I -q class
or
{deepthought-login}$ srun -p class --pty /bin/bash

Then

{deepthought2}$ gcc -o hello hello-world.c
{deepthought2}$ ./hello
hello world!
{deepthought2}$ exit

GCC 4.4

GCC 4.4 is the default compiler on the clusters and supports the OpenMP 3.0 standard.  It is installed in the default path, so its compilers can be run simply by executing "gcc," "g++," "gfortran," etc.
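
For reference, a minimal OpenMP build with GCC might look like the following sketch; omp_hello.c is a hypothetical source file name, and OMP_NUM_THREADS is the standard environment variable for setting the thread count.

{deepthought2}$ gcc -fopenmp -O2 -o omp_hello omp_hello.c
{deepthought2}$ export OMP_NUM_THREADS=4
{deepthought2}$ ./omp_hello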

Intel C/C++ Compiler 11.1 (Jinx only)

Intel's optimizing C/C++ compiler, version 11.1, also supports the OpenMP 3.0 standard.  Its executables are installed in /opt/intel/Compiler/11.1/059/bin/intel64, which is in the default path.
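
A minimal sketch of building with the Intel compiler, assuming the standard command names icc (C) and icpc (C++) and the -openmp flag used by this compiler generation; the {jinx} prompt below is a placeholder for whichever compute node you are assigned.

{jinx}$ icc -O2 -openmp -o omp_hello omp_hello.c
{jinx}$ ./omp_hello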

nVidia CUDA 6.5 Compilers (Jinx only)

The CUDA compiler can be run by executing /usr/local/cuda-6.5/bin/nvcc. The directory /usr/local/cuda-6.5/bin is not in the default path and should be added to your PATH environment variable if you are using CUDA. Also, be sure to add /usr/local/cuda-6.5/lib64 to LD_LIBRARY_PATH. Older versions of CUDA can be found in /opt.
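
A minimal sketch of setting up the environment and building a CUDA program with this toolkit, assuming a bash shell and a hypothetical source file hello.cu; the paths are the ones given above.

{jinx}$ export PATH=/usr/local/cuda-6.5/bin:$PATH
{jinx}$ export LD_LIBRARY_PATH=/usr/local/cuda-6.5/lib64:$LD_LIBRARY_PATH
{jinx}$ nvcc -o hello_cuda hello.cu
{jinx}$ ./hello_cuda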