Computing Facilities

(For CoC grant writers, this document is also available in PDF format.)

 

Georgia Tech

Located in Atlanta, Georgia, the Georgia Institute of Technology is one of the top research universities in the United States.  Georgia Tech is a science- and technology-focused institution renowned for its deeply held commitment to improving the human condition. Our faculty and students are solving some of the world’s most pressing challenges: clean and sustainable energy; disease diagnosis and treatment; and national defense and security, among others.  Georgia Tech offers an innovative intellectual environment with nearly 1,000 full-time instructional faculty and more than 21,500 undergraduate and graduate students.

 

Our bachelor's, master's, and doctoral degree programs are consistently recognized among the best. Georgia Tech students are equipped for success in a world where technology touches every aspect of daily life. Degrees are offered through the Institute’s six colleges: Architecture, Computing, Engineering, Sciences, the Scheller College of Business, and the Ivan Allen College of Liberal Arts.  Year after year, Georgia Tech is the only technological university ranked in U.S. News & World Report's listing of America's top ten public universities. In addition, our College of Engineering is consistently ranked in the nation's top five by U.S. News. In producing African American engineering graduates, Diverse: Issues in Higher Education ranks Tech No. 1 at the doctoral level and No. 2 at the bachelor's level, based on the most recent (2012) rankings. These impressive national rankings reflect the academic prestige long associated with the Georgia Tech curriculum.

 

The College of Computing at Georgia Tech

The College of Computing at Georgia Tech is a national leader in the research and creation of real-world computing breakthroughs that drive social and scientific progress. With its graduate CS program ranked 9th nationally (2014) by U.S. News & World Report, the College's unconventional approach to education is pioneering the new era of computing by expanding the horizons of traditional computer science students through interdisciplinary collaboration and a focus on human-centered solutions.

 

The College occupies three buildings (the College of Computing Building, the Technology Square Research Building, and the Klaus Advanced Computing Building) and operates computing facilities that include over 3,500 computers on over 50 networks serving 3 Schools, 8 Research Centers, and 60 Research Labs. Our data centers host more than 900 servers of various makes (Dell, HP, IBM, Penguin Computing, and SuperMicro), most of them multi-processor, multi-core machines, providing over 1 PB of networked disk storage. There are 16 Linux-based high-performance computing clusters totaling more than 1,000 physical servers and 6,000 processors/cores. All of the College's facilities are linked via local area networks that provide 1 Gigabit per second (Gbps) to the desktop. The College's network employs an internal high-performance 40 Gbps Ethernet backbone to each of its buildings, with external connectivity to the campus network over a 40 Gbps Ethernet uplink.  The Georgia Tech network is an Ethernet-based IP network spanning the 150 buildings on the main campus in Atlanta, as well as remote campuses in Savannah, GA, and Metz, France.  Internet services are purchased from transit providers, alongside connections to research networks and transit peering services.  Georgia Tech peers with Peachnet, Southern Crossroads (SoX), TransitRail, Cogent, and Qwest at speeds of up to 100 Gbps. The Georgia Tech Research Network, through its peering with SoX, has connectivity to NLR PacketNet, the Internet2 network, Oak Ridge National Laboratory, the Department of Energy’s Energy Sciences Network (ESNet), NCREN, MREN, FLR, LONI, and 3ROX, as well as other SoX participants in the Southeast.

 

Buildings

The College of Computing Building (CCB) houses administrative offices for the College, instructional classrooms and labs, and the Institute for Robotics & Intelligent Machines (IRIM) at Georgia Tech, as well as meeting space for undergraduate and graduate student organizations.  CCB is the instructional center of the College, housing 9 classrooms and 3 instructional spaces for meetings between students, teaching assistants, and instructors. The general-purpose clusters provided by the Office of Information Technology (OIT) remain available to students; CS courses requiring specialized resources use the College’s instructional labs described below. A spacious Commons Area provides ample seating and computer networking that fosters both formal and informal learning and collaboration.  A highly visible conference room is equipped with a Cisco TelePresence C40 system and is available to all faculty, staff, and students for conducting meetings with remote collaborators.  CCB also houses a 2,000 sq. ft. data center providing over 500 kilowatts of power and cooling capacity for the College’s research and instructional computational servers.  The building's advanced infrastructure provides 1 Gbps networking to all ports with a 40 Gbps uplink to the campus network, as well as high-density 802.11ac wireless networking support.

 

The Technology Square Research Building (TSRB) is located in the innovative, pedestrian-friendly, mixed-use Technology Square district of Georgia Tech and is home to the College's School of Interactive Computing, the GVU Center, and over 15 CoC research labs spanning multiple research groups, including Human Computer Interaction, Cognitive Science, Mobile Robotics, Graphics and Animation, Information Visualization, Learning Sciences and Technology, Computing Education, Social Computing, Ubiquitous and Wearable Computing, and Virtual and Augmented Environments. TSRB also houses state-of-the-art conference facilities that accommodate several of the College's special events, lectures, and meetings.  The College manages a 400 sq. ft. data center in the building providing 100 kilowatts of power and cooling capacity for several research computational servers.  The building's advanced infrastructure provides 1 Gbps networking to all ports with a 10 Gbps uplink to the campus network, as well as high-density 802.11n wireless networking support.

 

The Klaus Advanced Computing Building (KACB), dedicated in 2006, is located in the heart of the Georgia Tech campus and houses some of the most advanced computing labs and innovative educational technology in the world.  The 414,000-square-foot building contains some 70 research laboratories, 6 instructional labs, 5 large classrooms, and a 200-seat auditorium.  The building incorporates numerous environmental and sustainability features, earning the prestigious LEED Gold rating from the U.S. Green Building Council.  Environmentally friendly features include creative use of the 6-acre urban campus site to preserve over 50 percent of the site as green space, a stormwater collection system that provides water for irrigation, energy-efficient heating, cooling, and lighting systems, and extensive use of recyclable materials.

 

KACB is home to the College's School of Computer Science, the School of Computational Science and Engineering, 6 research centers (GTISC, CERCS, C21U, FODAVA, IDH, and ARC), and over 20 CoC research labs spanning multiple research groups, including High Performance Computing, Information Security, Software Engineering, Databases, Systems, Theory, Computer Architecture, Networking, Programming/Algorithms, Data Analytics, and Embedded Systems.  KACB houses state-of-the-art conference facilities that accommodate several of the College's special events, lectures, and meetings.  The building features open collaboration spaces, study lounges, conference rooms, and graduate student offices, all with ample power and networking ports.  All conference rooms are equipped with projection technology, table networking, and power.  A highly visible conference room is equipped with a Polycom HDX 8000 video conferencing system and is available to all faculty, staff, and students for conducting meetings with remote collaborators.  The College manages a 500 sq. ft. data center in the building providing 80 kilowatts of power and cooling capacity for critical enterprise servers.  The building's advanced infrastructure provides 1 Gbps networking to all ports with a 40 Gbps uplink to the campus network, as well as high-density 802.11n wireless networking support.

 

Instructional Facilities

In addition to general instructional facilities provided by the Institute, the College of Computing provides specialized instructional facilities for its advanced curriculum needs.  All of the College's instructional labs and servers are located in the College of Computing Building.

  • Shuttles UNIX Remote Access:  Supporting general-purpose UNIX shell remote access, a 5-node virtual server cluster (4 virtual CPUs, 8 GB RAM, Prism home directories).
  • Digital Media and Gaming Lab:  Supporting graphics, digital media, and gaming courses, a 14-seat cluster of 12 SuperMicro workstations (4-core, 2.4GHz Intel Xeon X3430, 16GB RAM) running Windows and 2 Apple 27" iMac workstations (4-core, 3.4GHz Intel i7) running Mac OS X and Windows.
  • Networking Instruction Lab:  Supporting networking course assignments, 2 racks with 8 Cisco 2911 routers and an assortment of network switches and Intel-based PC end-hosts.
  • Information Security Instruction Lab:  Supporting information security courses, a 16-seat cluster of SuperMicro workstations.  Student teams are provided access to the latest information security hardware and software in an isolated environment, allowing for study, analysis, and simulation of current threats without risk to production facilities.
  • Dune Instructional Storage Cluster:  Supporting HPC and large-scale data analysis courses, a 5-node GlusterFS storage cluster (2-socket, 6-core Xeon, 32 GB RAM each) providing over 500 TB of data storage, with 10 Gbps IP network interface controllers and high-speed QDR InfiniBand connectivity to the Jinx HPC cluster.
  • Jinx Instructional HPC Cluster: Supporting advanced programming courses, a 30-node, 336-core, GPU-accelerated Torque/Maui cluster consisting of 24 HP ProLiant SL390 servers and 6 Dell PowerEdge R710 servers.  The HP servers have two 6-core Intel Xeon X5650 processors and 24GB of RAM; twelve are equipped with two NVIDIA Fermi-based Tesla M2090 cards and the other twelve with two M2070s.  The Dell servers have two 4-core Intel Xeon X5570 processors and 48 GB of RAM.  The entire cluster is connected with QDR InfiniBand to the Dune Storage Cluster and to the IP network with Gigabit Ethernet.  (An illustrative CUDA sketch of the kind of GPU coursework this cluster supports appears after this list.)
  • Deepthought Instructional HPC Cluster: Supporting advanced programming courses, a 20-node, 640-core compute cluster consisting of SuperMicro servers (2-socket, 16-core, 2.4GHz AMD Opteron 6378, 256GB RAM, 128GB SSD, QDR InfiniBand to the Dune Storage Cluster, 10Gbps IP network cards).
  • Factor Instructional HPC Cluster:  Supporting operating and distributed systems courses, a 21-node, 216-core server cluster of Dell PowerEdge R610 servers (2-socket, 4-core, Intel X5550, 48GB RAM) and Dell PowerEdge R620 servers (2-socket, 6-core, 2.0GHz Intel E5-2630L, 128GB RAM), with 2 Dell PowerEdge R710 file servers providing 7TB of disk storage.  The cluster resources are managed with OpenStack.
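
As an illustration of the kind of coursework these facilities support, the sketch below shows a minimal CUDA vector-addition program of the sort that could run on the Jinx cluster’s Tesla GPUs.  It is a generic teaching example, not code from the College’s curriculum; the array size and launch configuration are arbitrary.

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Each GPU thread adds one pair of elements: c[i] = a[i] + b[i]. */
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;                /* 1M elements (arbitrary) */
        size_t bytes = n * sizeof(float);

        /* Allocate and initialize host buffers. */
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = (float)i; hb[i] = 2.0f * i; }

        /* Allocate device buffers and copy the inputs to the GPU. */
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        /* Launch enough 256-thread blocks to cover all n elements. */
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        /* Copy the result back and spot-check one element. */
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[42] = %f (expected %f)\n", hc[42], 3.0f * 42);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

On a Torque/Maui cluster such as Jinx, a compiled program like this would typically be submitted to the compute nodes through the batch scheduler (e.g., with qsub) rather than run directly on a login node.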

Research Facilities

The College's Schools and Research Centers house an abundance of research facilities:

  • The School of Computational Science and Engineering (CSE) is located in KACB and supports substantial computational facilities related to both education and research.  The School is affiliated with several research centers, initiatives, and labs, including the Institute for Data and High Performance Computing (IDH), the Keeneland project, the FODAVA Center, and the specialized High Performance Computing (HPC) Laboratory.  Through industrial partnerships, the HPC Lab operates or supports several state-of-the-art parallel computers and future technologies, which are readily available for teaching and research and provide a diverse collection of resources for algorithmic exploration:
    • Ion Cluster:  an 8-node, 64-core, GPU-accelerated Torque/Maui cluster consisting of Appro 1424x Twin-Servers (each 2-socket with 4-core Intel X5550 processors, 24GB RAM, QDR InfiniBand, and 2 NVIDIA C1060 cards).
    • Bugs Cluster:  a 6-node, 48-core cluster of Dell PowerEdge 1950 and 2950 servers (each 2-socket with 4-core Intel E5420 processors, 16GB RAM) configured with Hadoop.
    • Convey FPGA servers:  two Convey HC-1 hybrid-core servers featuring Field-Programmable Gate Arrays (FPGAs) coupled with multi-core Intel Xeon processors.
    • Topaz Cluster:  a 36-node, 288-core cluster of TeamHPC servers (each with 2-socket, 4-core processors).
    • Sun Fire X4470 M2 large memory servers:  two dual-socket servers with 8-core Intel Xeon E7-4820 processors in large memory configurations (1TB and 500GB of RAM, respectively).
    • Intel large memory server: an Intel Server System QSSC-SR4 with four 10-core Intel Xeon E7-8870 processors and 256 GB of RAM, the highest-ranked single-node system on the Graph500 benchmark.

     Additional CSE high-performance computing resources include:

    • Keeneland Initial Delivery (ID) system (installed in 2010):  a 120-node cluster with 240 CPUs and 360 GPUs, composed of HP ProLiant SL390 (Ariston) servers with 6-core Intel Westmere CPUs, NVIDIA 6GB Fermi GPUs, and a QLogic QDR InfiniBand interconnect.
    • Cray Supercomputers:  Through multiple projects and collaborations, the HPC Lab has access to massively multithreaded Cray XMT-series supercomputers.  The Cray XMT series is similar to the Cray XT series but replaces the commodity x86 processors with unique latency-tolerant processors that support fine-grained parallelism through 128 hardware thread contexts per processor and scale memory bandwidth across multiple terabytes of RAM.  The HPC Lab currently uses a Cray XMT at Pacific Northwest National Laboratory with 128 processors and 1 TB of RAM, as well as a next-generation Cray XMT2 at the Swiss National Supercomputing Centre with 64 processors and 2 TB of RAM.
    • Systems Biology Center – Myriad Cluster: a 10,000-core Penguin Computing cluster with a 100-teraflop (TFLOP) theoretical peak performance, ranking within the top 100 supercomputers in the world.
    • FoRCE Research Computing Environment: a Georgia Tech community resource that includes a mixture of compute nodes, some with attached GPUs, some with large memory capacity, and some with local storage (56 compute nodes, 1,592 CPU cores in total).
  • The Georgia Tech Information Security Center (GTISC) is located in KACB and comprises the Information Security Lab and the Converging Infrastructures Security (CISEC) Lab.  GTISC operates a substantial number of network, computational server, and storage resources to support its research activities in the area of information security.
  • The Networking and Telecommunications Group (NTG) is located in KACB and includes the GT Network Operations and Internet Security Lab (GT Noise).  NTG operates a substantial number of network, computational server, and storage resources to support its research activities in networking and security.  The Noise lab also hosts a 70-node Dell PowerEdge R410 compute cluster (840 CPU cores) as part of the multi-institution VICCI programmable cloud-computing research testbed.
     
  • The Center for 21st Century Universities (C21U) is located in KACB and operates the C21U Studio and Innovation Lab, which includes a highly connected classroom, a control room, and a broadcast-quality studio, as well as a dedicated support team for experimental teaching and research on fundamental change in higher education.
  • The Center for Experimental Research in Computer Science (CERCS) is located in KACB and includes the Interactive High-Performance Computing Lab (IHPCL), which serves as a focus for interdisciplinary research and instruction involving high-performance computer systems. These facilities are linked by a dedicated high-performance backbone utilizing 10 Gbps Ethernet, and include:
    • Whitestar Cluster: an 840-node, 3360-core IBM BladeCenter LS20 cluster (2-socket, 2-core AMD Opteron 270, 4GB RAM each) configured with VMware vCenter.
    • Jedi Cluster: an 80-node, 760-core Penguin Computing cluster with 30 Relion 1752 servers (2-socket, 6-core, 2.66GHz Intel X5650, 48GB RAM each) and 50 Relion 1702 servers (2-socket, 4-core, 2.4GHz Intel E5530, 24GB RAM each), configured with OpenStack.
    • Maquis Cluster: a 20-node, 160-core IBM BladeCenter H cluster (2-socket, 4-core, 1.86GHz Intel E5320 processors).
    • Vogue Cluster:  an 11-node, 88-core cluster of 4 Dell PowerEdge R610 servers (2-socket, 4-core, Intel E5550 processors, 12GB RAM) and 7 Penguin Computing Relion 1700 servers (2-socket, 4-core, Intel E5506 processors, 12GB RAM).
    • Rohan Cluster: a 51-node, 102-core cluster of Dell PowerEdge 1850 Linux servers (2-socket, Intel Pentium 4 Xeon EM64T processors, InfiniBand interconnects).
    • Polynesia/Samoa Cluster: a 20-node, 180-core cluster of Dell PowerEdge 1950 servers (2-socket, 4-core, 2.5GHz Intel E5420 processors, 1GB RAM).
    • Poster Printer (HP DesignJet 800).
       
  • The Computer Architecture research group is located in KACB and conducts research on all aspects of future microprocessor technology, including performance, power, multi-threading, chip multiprocessing, security, programmability, reliability, interaction with compilers and software, and the impact of emerging technologies.
    • Pasta Cluster: a 35-node, 312-core cluster of 25 Dell PowerEdge 1950 servers (2-socket, 4-core, 3.0GHz Intel X5650 processors, 16GB RAM), 2 Dell PowerEdge R410 servers (2-socket, 3GHz Intel X5450 processors, 24GB RAM), and an IBM BladeCenter HS22 (8 blades, 2-socket, 6-core, 2.93GHz Intel X5670 processors, 48GB RAM).
    • Sushi Cluster: a 14-node, 112-core cluster of 10 Dell PowerEdge 1950 servers (2-socket, 2-core, 3.0GHz Intel E5160 processors, 8GB RAM) and 4 Dell PowerEdge 1950 servers (2-socket, 4-core, 3.0GHz Intel E5450, 16GB RAM).
       
  • The GVU Center is located in TSRB and houses a variety of research labs in a multi-facility collection of workplaces. Total GVU lab space comprises more than 8,000 square feet. In addition, GVU-affiliated laboratories are operated by non-CoC faculty in the College of Architecture; the School of Literature, Communication, and Culture; the School of Psychology; and the Interactive Media and Technology Center. GVU facilities utilize state-of-the-art high-performance servers and graphics workstations from major manufacturers such as Dell, HP, Apple, and Sun.  GVU is also a partner in the Aware Home Research Initiative (AHRI).  A partial list of specialized GVU resources includes:
    • The Aware Home: a 3-story, 5,040 sq. ft. house and living laboratory for interdisciplinary research in design and social questions.
    • The App Lab: a “hackerspace” devoted to the creation of mobile applications and technologies across a range of platforms.
      • Numerous iOS, Android, and Kindle devices for checkout.
      • Several dual-boot OS X and Windows workstations with current mobile and gaming software development environments, including Unity for publishing.
    • The Proto Lab: a 1,200 sq. ft. lab devoted to the prototyping of experimental devices such as wearable computers, equipped with tools such as:
      • 3D Printer (Dimension SST 768)
      • 3D Scanner
      • Laser Cutter/Etcher (Epilog)
      • Circuit Mill (LPKF ProtoMat S62)
      • CNC Router (K2CNC 4’x8’)
      • Shop equipment (band saw, table saw, drill press)
      • Surface-mount and through-hole soldering stations
      • Bench equipment (power supplies, oscilloscope/logic analyzer, RF generator, and spectrum analyzer)
      • Arduino development/test circuit boards
      • Silk-screening equipment (for conductive ink)
      • Embroidery machines (for conductive thread), sewing equipment, and a leather stitcher

    • The Usability Lab: complete with a separate viewing area, for conducting and capturing video and screen recordings of computer-based studies.
    • A High-Definition (HDTV) Video Conferencing System (LifeSize).
    • A Video Webcasting AV Cart with High-Definition (HD) capture capability.
    • Sony Bloggie HD cameras for field recording.
    • A 12-camera IR motion capture system (Vicon).
    • Several Polhemus, Ascension, and Intersense tracking systems and head-mounted displays.
    • Several Smartboards.
    • A Segway Human Transporter.
    • A Poster Printer (HP DesignJet 800).
       
  • The Institute for Robotics and Intelligent Machines (IRIM) is located in the College of Computing Building and houses a variety of research labs in a multi-facility collection of workplaces.  In addition, IRIM-affiliated laboratories are operated by non-CoC faculty in the College of Architecture; the College of Engineering (Schools of Aerospace Engineering, Biomedical Engineering, Mechanical Engineering, and Electrical and Computer Engineering); the College of Sciences (School of Physics); and the Georgia Tech Research Institute.  A partial list of specialized IRIM and robotics equipment includes:
    • Robots and vehicles:
      • 3 cells with KUKA KR210 robots, material-handling equipment, and automated guided vehicles (AGVs).
      • Several KUKA robotic arms.
      • A Schunk robotic arm (LWA3) with dexterous hand (SDH2).
      • Golem Krang: a mobile manipulator designed and built by the Humanoid Robotics Lab, featuring a Schunk robotic arm mounted on a custom Segway Human Transporter.
      • Simon: a face-to-face, robotic research platform featuring an upper-torso humanoid social robot with two 7-DOF arms, two 4-DOF hands, and a socially expressive head and neck, including two 2-DOF ears with full RGB spectrum LEDs.
      • A Segway RMP200 Research Mobility Platform.
      • A Mobile Robotics PeopleBot.
      • A Willow Garage PR2 robot.
      • A WowWee Rovio mobile webcam.
      • 18 Sony AIBO legged robots.
      • 2 iRobot ATRV minis and 1 IS Robotics Pebbles III robot.
      • 4 Pioneer 2-DXE and 3 Pioneer AT robots.
      • 1 Evolution Scorpion, 1 Evolution ER1, 1 Segway, and 1 Denning DRV-I robot.
      • 3 RWI ATRV-Jr, 5 ActivMedia AmigoBots, 1 Nomad 200, 5 Nomad 150, 1 Hermes II hexapod, and 3 Blizzard robots.
      • Several SICK scanners, various lasers, vision/motion systems, cameras, and associated equipment.
    • Fabrication Shop: a lab with band saws, drill presses, lathes, presses, grinders, etc. for the fabrication of robotic components.
    • Electronics Shop: a lab with oscilloscopes, logic analyzers, programmable power supplies, soldering equipment, etc.
    • Wilks Cluster: a 15-node, 180-core SuperMicro 6016T-NTF compute cluster (2-socket, 6-core, 2.67GHz Intel X5650, 96GB RAM).
    • A Segway Human Transporter.
    • A Poster Printer (HP DesignJet 800).

Georgia Tech Research Networking Capabilities

Georgia Tech's state-of-the-art network provides capabilities with few parallels in academia or industry, delivering a unique and sustained competitive advantage to Georgia Tech faculty, students, and staff. Since the mid-1980s, Georgia Tech and OIT have provided instrumental leadership in high-performance networks for research and education (R&E) regionally, nationally, and internationally.

 

A founding member of Internet2 (I2) and National LambdaRail (NLR) – high-bandwidth networks dedicated to the needs of the research and education community – Georgia Tech manages and operates Southern Crossroads (SoX, the I2 regional GigaPOP). We work within six Southeastern states to make affordable high-performance network access and network services available to researchers and faculty at Georgia Tech, their collaborators, other higher-education systems, K-12 systems, and beyond.

 

Georgia Tech's network has high-performance connectivity to other members of the research and education community worldwide through 100 Gbps links to SoX/SLR, which peers with the Internet2 Network, TransitRail, Oak Ridge National Laboratory (ORNL), the Department of Energy's Energy Sciences Network (ESNet), NCREN, NASA's NREN, MREN, FLR, Peachnet, LONI, and 3ROX, as well as other SoX participants in the Southeast.

 

In addition to the exceptional R&E network connectivity provided to all Georgia Tech faculty, students, and staff, dedicated bandwidth can also be provisioned in support of specific collaborations and research projects.