Computing Facilities

(For CoC grant writers, this document is also available in PDF Format.)

Georgia Tech

Located in Atlanta, Georgia, the Georgia Institute of Technology is one of the top research universities in the United States. Georgia Tech is a science- and technology-focused learning institute renowned for our deeply held commitment to improving the human condition. Our faculty and students are solving some of the world’s most pressing challenges: clean and sustainable energy; disease diagnosis and treatment; and national defense and security, among others. Georgia Tech is an innovative intellectual environment with nearly 1,000 full-time instructional faculty and more than 26,000 undergraduate and graduate students.

 

Our bachelor's, master's, and doctoral degree programs are consistently recognized among the best. Georgia Tech students are equipped for success in a world where technology touches every aspect of our daily lives. Degrees are offered through the Institute's six colleges: Computing, Design, Engineering, Sciences, the Scheller College of Business, and the Ivan Allen College of Liberal Arts. Georgia Tech is consistently the only technological university ranked in U.S. News & World Report's listing of America's top ten public universities. In addition, our College of Engineering is consistently ranked in the nation's top five by U.S. News. In terms of producing African American engineering graduates, Diverse: Issues in Higher Education ranks Tech No. 1 at the doctoral level and No. 2 at the bachelor's level, based on the most recent rankings for 2017. These impressive national rankings reflect the academic prestige long associated with the Georgia Tech curriculum.

 

The College of Computing at Georgia Tech

The College of Computing at Georgia Tech is a national leader in the research and creation of real-world computing breakthroughs that drive social and scientific progress. With its graduate CS program ranked 8th nationally (2018) by U.S. News & World Report, the College's unconventional approach to education is pioneering a new era of computing by expanding the horizons of traditional computer science students through interdisciplinary collaboration and a focus on human-centered solutions.

 

The College resides and operates computing facilities in four buildings (College of Computing Building, Technology Square Research Building, Klaus Advanced Computing Building, and Coda), including more than 3,500 computers in more than 50 networks servicing three Schools, eight Research Centers, and 60 Research Labs. Faculty, staff, and students are currently moving into our newest building, Coda, which contains a $5.3 million high-performance computing resource that will support the work of dozens of faculty, more than 50 research scientists, and 200 graduate students.

 

Our data centers host more than 900 servers of various makes (Dell, HP, IBM, Penguin Computing, and SuperMicro), most of which are multi-processor, multi-core machines, providing over 1 PB of networked disk storage. Several Linux-based high-performance computing clusters total more than 1,100 physical servers and 5,600 processor cores. All of the College's facilities are linked via local area networks that provide 1 Gigabit per second (Gbps) to the desktop. The College's network employs an internal high-performance 40 Gbps Ethernet backbone to each of its buildings, with external connectivity to the campus network via a 40 Gbps Ethernet uplink. Connectivity within the data center is available at 100 Gbps. The Georgia Tech network is an Ethernet-based IP network spanning the 150 buildings on the main campus in Atlanta, as well as remote campuses in Savannah, GA, and Metz, France. Internet service is purchased from transit providers and supplemented by connections to research networks and transit peering services. Georgia Tech peers with Peachnet, Southern Crossroads (SoX), TransitRail, Cogent, and Qwest at speeds of up to 100 Gbps. The Georgia Tech Research Network, through its peering with SoX, has connectivity to NLR PacketNet, the Internet2 network, Oak Ridge National Labs, the Department of Energy's Energy Sciences Network (ESNet), NCREN, MREN, FLR, LONI, and 3ROX, as well as other SoX participants in the Southeast.

Buildings

The College of Computing Building (CCB) houses administrative offices for the College, instructional classrooms and labs, and the Institute for Robotics & Intelligent Machines (IRIM) at Georgia Tech, as well as meeting space for undergraduate and graduate student organizations. CCB is the instructional center of the College, housing 6 classrooms and 16 instructional spaces for meetings between students, teaching assistants, and instructors. General computing clusters provided by OIT are available to students taking CS courses that require specialized resources. A spacious Commons Area provides ample seating and computer networking that fosters both formal and informal learning opportunities and collaboration. CCB also houses a 2,000 sq. ft. data center providing over 1.25 Megawatts of power and cooling capacity for the College's research and instructional computational servers. The building's advanced infrastructure provides 1 Gbps networking to all ports with a 40 Gbps uplink to the campus network as well as high-density 802.11ac wireless networking support.

The Technology Square Research Building (TSRB) is located in the innovative and pedestrian-friendly mixed-use Technology Square district of Georgia Tech and is home to the College's School of Interactive Computing, the GVU Center, and more than 15 CoC research labs spanning multiple research groups including Human-Computer Interaction, Cognitive Science, Mobile Robotics, Graphics and Animation, Information Visualization, Learning Sciences and Technology, Computing Education, Social Computing, Ubiquitous and Wearable Computing, and Virtual and Augmented Environments. TSRB also houses state-of-the-art conference facilities that accommodate several of the College's special events, lectures, and meetings. The College manages a 400 sq. ft. data center in the building providing 100 Kilowatts of power and cooling capacity for several research computational servers. The building's advanced infrastructure provides 1 Gbps networking to all ports with a 10 Gbps uplink to the campus network as well as high-density 802.11n wireless networking support.

The Klaus Advanced Computing Building (KACB), dedicated in 2006, is located in the heart of the Georgia Tech campus and houses some of the most advanced computing labs and innovative educational technology in the world. The 414,000 square-foot building consists of some 70 research laboratories, 6 instructional labs, 5 large classrooms and a 200-seat auditorium. The building has a substantial number of environmental and sustainable features achieving the prestigious LEED Gold rating from the U.S. Green Building Council.  Environmentally friendly features include creative use of the 6-acre urban campus site to preserve over 50 percent of the site as green space, a storm water collection system to provide water for irrigation, energy efficient heating, cooling and lighting systems, and extensive use of recyclable materials.

KACB is home to the College's School of Computer Science, the School of Computational Science and Engineering, 5 research centers (IISP, CERCS, C21U, IDH, and ARC), and over 20 CoC research labs spanning multiple research groups including High Performance Computing, Information Security, Software Engineering, Databases, Systems, Theory, Computer Architecture, Networking, Programming/Algorithms, Data Analytics, and Embedded Systems. KACB houses state-of-the-art conference facilities that accommodate several of the College's special events, lectures, and meetings. The building features open collaboration spaces, study lounges, conference rooms, and graduate student offices, all with ample power and networking ports. All conference rooms are equipped with projection technology, table networking, and power. A highly visible conference room is equipped with a Polycom HDX 8000 video conferencing system and is available to all faculty, staff, and students for conducting meetings with remote collaborators. The College manages a 500 sq. ft. data center in the building providing 80 Kilowatts of power and cooling capacity for critical enterprise servers. The building's advanced infrastructure provides 1 Gbps networking to all ports with a 40 Gbps uplink to the campus network as well as high-density 802.11n wireless networking support.

The Coda Building, completed in 2019, is a 21-story mixed-use facility home to a 93,000 SF high-performance computing data center and 605,000 SF of office space. Designed to help foster collaboration between Georgia Tech and other research and industry partners, Coda will be occupied roughly half by Georgia Tech entities and half by private companies. Groups affiliated with the College of Computing located in Coda include the School of Computational Science & Engineering (CSE), the Institute for Data Engineering and Science (IDEaS), the Institute for Information Security & Privacy (IISP), and the Center for Machine Learning (ML@GT).

Instructional Facilities

 

In addition to general instructional facilities provided by the Institute, the College of Computing provides specialized instructional facilities for its advanced curriculum needs. All of the College's instructional labs and servers are located in the College of Computing Building.

  • Shuttles UNIX Remote Access: Supporting general-purpose UNIX shell remote access, a 5-node Virtual Server cluster (4 Virtual CPUs, 8 GB RAM, Prism Home Directories).
  • Networking Instruction Lab: Supporting networking course assignments, 2 racks with 8 Cisco 2911 routers and an assortment of network switches and Intel-based PC end-hosts.
  • Information Security Instruction Lab: Supporting information security courses, a 16-seat cluster of SuperMicro workstations. Student teams are provided access to the latest information security hardware and software in an isolated environment allowing for study, analysis, and simulation of current threats without risk to production facilities.
  • Dune Instructional Storage Cluster: Supporting HPC and large-scale data analysis courses, a 5-node GlusterFS storage cluster (2-socket, 6-core Xeon, 32 GB RAM each) providing over 500 TB of data storage with 10 Gbps IP network interface controllers and with high-speed QDR Infiniband to the Jinx HPC cluster.
  • Deepthought Instructional HPC Cluster: Supporting advanced programming courses, a 20-node, 640-core compute cluster consisting of SuperMicro servers (2-socket, 16-core, 2.4GHz AMD Opteron 6378, 256GB RAM, 128GB SSD, QDR Infiniband to the Dune Storage Cluster, 10Gbps IP network cards).
  • Wingtip HPC and Data Cluster: Two Penguin Relion 2903GT GPU systems, one with an NVIDIA P100 GPU card and the second with an NVIDIA Titan Xp Pascal and Tesla K40c GPU cards. Two high-memory/multi-core Dell PowerEdge R930 systems with four Intel Xeon E7-4850 v3 (14-core) processors and 2TiB of DDR4 RAM each.
  • Newell Cluster: Three IBM Power System Accelerated Compute Servers (AC922). Each server has two POWER9 16-core 2.7 GHz (3.3 GHz turbo) processor modules with SMT4, yielding 64 threads per processor module; 256 GiB DDR4 ECC memory; two 32 GiB SXM2 NVIDIA Tesla V100 GPUs with NVLink; and a dual 100 Gbps Mellanox InfiniBand ConnectX-5 CAPI-capable EDR adapter.
  • NVIDIA DGX-1: NVIDIA's flagship data science appliance with eight 32 GiB SXM2 NVIDIA Tesla V100 GPUs. It comes configured with a suite of GPU-optimized software and simplified management tools for quick deployment, providing a state-of-the-art computing environment for AI, analytics, machine learning, and deep learning.
  • COC-ICE (Instructional Cluster Environment): Funded by a technology fee grant awarded to the College of Computing and created in partnership with the Partnership for an Advanced Computing Environment (PACE), COC-ICE is one of two ICE clusters providing service specifically for classes in the College. Initiated by PACE in 2018, ICE offers an educational environment mirroring production research clusters, giving undergraduate and graduate students experience working directly with scientific computing, including HPC and GPU programming.

Research Facilities

 

An abundance of research facilities is housed in the College's Schools and Research Centers:

  • The School of Computational Science and Engineering (CSE) is located in KACB and supports substantial computational facilities related to both education and research. The School is affiliated with several research centers, initiatives, and labs, including the Center for Research into Novel Computing Hierarchies (CRNCH), the Center for Health Analytics and Informatics (CHAI), and the High Performance Computing (HPC) Laboratory. Through industrial partnerships, the HPC Lab operates or supports several state-of-the-art parallel computers and future technologies, which are readily available for teaching and research and provide a diverse collection of resources for exploration along application and computing dimensions:
  • SunLab Deep Learning Lab, with deep reach into clinical health care data analysis:
    • NVIDIA server with two Tesla K80 GPU cards, two Intel Xeon E5-2630 v3 (8-core) processors, and 256GiB DDR4 RAM
    • NVIDIA server with four Titan X Pascal GPU cards, two Intel Xeon E5-2630 v3 (8-core) processors, and 256GiB DDR4 RAM
    • NVIDIA Deep Learning DevBox with eight Titan X Maxwell GPU cards, two Intel Xeon E5-2630 v4 (14-core) processors and 256GiB of DDR4 RAM
    • NVIDIA Deep Learning DevBox with eight Titan X Pascal GPU cards, two Intel Xeon E5-2630 v4 processors (14-core) and 256GiB of DDR4 RAM
    • NVIDIA Deep Learning DevBox workstation with four Titan X Maxwell GPU cards, single Intel Xeon E5-2630 v3 (8-core) processor, and 256GiB DDR4 RAM
    • NVIDIA server with two Tesla K80 GPU cards, two Intel Xeon E5-2697 v3 (14-core) processors, and 128GiB of DDR4 RAM
    • High-memory/multi-core server with four E5-4620 v4 (10-core) processors and 1TiB of DDR4 RAM
    • NVIDIA Deep Learning DevBox workstation with four Titan X Maxwell GPU cards, an Intel Core i7-5930K processor, and 128GiB DDR4 RAM
    • NVIDIA NVLINK research system with two Tesla P100 GPU cards (16GiB HBM2 mem), two Intel E5-2683 v4 (16-core) processors, and 768GiB DDR4 RAM and 800GB Intel NVMe SSD
  • The CRNCH Rogues Gallery, a CSE/CoC/ECE/IDEaS joint effort with and part of the Georgia Tech Center for Research into Novel Computing Hierarchies.  These unusual and ground-breaking systems enable fundamentally new research in applications, algorithms, and systems.  Current Rogues include the following:
    • Emu Chick: A migratory thread system from Emu Technologies, the Chick combines eight nodes.  Each node has one stationary core for servicing OS requests, eight multi-core highly multithreaded Gossamer processors, 64GiB of local memory, and a 1TB SSD.  The Gossamer processors provide a single unified address space across the total of 512GiB RAM, and threads migrate so all reads are local to a node.
    • Micron EX700 with AC-520 HMC + FPGA module, Nallatech 385A Arria 10, Xilinx UltraScale MPSoC: These platforms combine reconfigurable processing (FPGAs) with either physical or emulated 3D stacked memories, enabling rapid evaluation of novel combinations of memory and logic.
    • SoC Field Programmable Analog Arrays: A locally created analog version of FPGAs combines reconfigurable analog computing with a modern toolchain to enable experiments in fundamentally new architectures like neuromorphic computing.
    • Terra cluster: five-node, 160-core parallel Lenovo NextScale system with two Intel Xeon E5-2683 v4 (16-core) processors, 256GiB DDR4 RAM per nx360 M5 compute node
  • NVIDIA DGX Station: NVIDIA's flagship workstation for data science and AI workloads. Four 32GiB Tesla V100 GPUs.
  • Penguin Relion XE2112: 20TB all-flash NVMe storage server for fast "scratch" storage throughout heterogeneous compute environments.
  • Machine Learning Cluster: Four Deep Learning DevBox systems each with two Intel Xeon Silver 4116 processors, 256 GiB of RAM, eight NVIDIA GeForce RTX 2080 Ti GPUs.
  • ML and Topology Optimization System: Two Intel Xeon Silver 4116 processors, 256 GiB of RAM, eight NVIDIA Titan Xp GPUs
  • Graph Analysis Research System: Lenovo System x3650 M5 server with two Intel Xeon E5-2695 v4 (18 core) processors, 1TiB of DDR4 RAM, one NVIDIA Tesla P100 GPU card, one NVIDIA Tesla P40 card, and 2TB Intel NVMe SSD
  • Graph Analysis Workstation: Intel Xeon Phi (KNL) processor, 128GiB DDR4 RAM
  • Computational Biology System: four Intel Xeon E7-8870 v3 (18-core) processors with 1TiB DDR4 RAM
  • Intel Coprocessor/Accelerator lab: two Penguin Relion 2908GT systems with eight Intel Phi (KNC) coprocessors, two Intel Xeon E5-2600 v3 series processors, and 64GiB of DDR4 RAM each
  • Sun Fire X4470 M2 large memory servers:  Two 8-core Intel Xeon E7-4820 processors with 512GiB of RAM, and four 8-core E7-4820 processors with 1TiB RAM.
  • Intel Large-core Server: an Intel Server System QSSC-SR4 with four E7-8870 10-core Intel Xeon processors and 256GiB of RAM.
  • Bugs Cluster:  a 6-node, 48-core cluster of Dell PowerEdge 1950 and 2950 servers (each with 2-socket, 4-core, Intel E5420 processors, 16GiB RAM) configured with Hadoop.
  • Mimosa Cluster: an 80-node, 640-core cluster used for Hadoop and parallel computation research, built from HPE compute nodes donated by Yahoo!.
  • Convey FPGA servers: The HPC Lab also utilizes two Convey HC-1 hybrid-core servers featuring Field Programmable Gate Arrays (FPGAs) coupled with multi-core Intel Xeon processors.
  • Topaz Cluster: a 36-node, 288-core cluster of TeamHPC servers (each with 2-socket, 4-core processors).

 

  • The Institute for Information Security and Privacy (IISP) is located in KACB and comprises the Information Security Lab and the Converging Infrastructures Security (CISEC) Lab. The IISP operates a substantial number of network, computational server, and storage resources to support its research activities in the area of information security.
  • The Center for 21st Century Universities (C21U) is located in KACB and operates the C21U Studio and Innovation Lab which includes a highly connected classroom, control room, and broadcast quality studio, as well as a dedicated support team to support experimental teaching and research in fundamental change in higher education.
  • The Center for Experimental Research in Computer Science (CERCS) is located in KACB and includes the Interactive High-Performance Computing Lab (IHPCL), serving as a focus for interdisciplinary research and instruction involving high-performance computer systems. These facilities are linked by a dedicated high-performance backbone utilizing 10 Gbps Ethernet, and include:
    • Whitestar Cluster: an 840-node, 3360-core IBM BladeCenter LS20 cluster (2-socket, 2-core AMD Opteron 270, 4GB RAM each) configured with VMware vCenter.
    • Jedi Cluster: an 80-node, 760-core Penguin Computing cluster with 30 Relion 1752 servers (2-socket, 6-core, 2.66GHz Intel X5650, 48GB RAM each) and 50 Relion 1702 servers (2-socket, 4-core, 2.4GHz Intel E5530, 24GB RAM each), configured with OpenStack.
    • Vogue Cluster: an 11-node, 88-core cluster of 4 Dell PowerEdge R610 servers (2-socket, 4-core, Intel E5550 processors, 12GB RAM) and 7 Penguin Computing Relion 1700 servers (2-socket, 4-core, Intel E5506 processors, 12GB RAM).
    • Aries Cluster: a 4-node, 160-core cluster of Dell R830 servers (4-socket, 10-core, Intel E5-4610 processors, 512GB RAM)
    • KIDS Cluster: a 120-node cluster with 240 CPUs and 360 GPUs, composed of HP ProLiant SL390 (Ariston) servers with Intel Westmere 6-core CPUs, NVIDIA 6GB Fermi GPUs, and a QLogic QDR InfiniBand interconnect.

 

  • The Computer Architecture research group is located in KACB and conducts research on all aspects of future microprocessor technology including performance, power, multi-threading, chip-multiprocessing, security, programmability, reliability, interaction with compilers and software, and the impact of future technologies.
    • Pasta Cluster: a 35-node, 312-core cluster of 25 Dell PowerEdge 1950 servers (2-socket, 4-core, 3.0GHz Intel X5450 processors, 16GB RAM), 2 Dell PowerEdge R410 servers (2-socket, 3GHz Intel X5450 processors, 24GB RAM), and an IBM BladeCenter HS22 (8 blades, 2-socket, 6-core, 2.93GHz Intel X5670 processors, 48GB RAM).

 

  • Skynet Cluster: A growing Penguin Computing cluster consisting of approximately 30 nodes, 496GB of RAM, and 240 Pascal and Turing GPUs.

 

  • The GVU Center is located in the TSRB Building and houses a variety of research labs in a multi-facility collection of workplaces, with faculty from every college across campus. Total GVU lab space comprises more than 8,000 square feet. In addition, GVU affiliated laboratories are operated by non-CoC faculty in the College of Architecture; the School of Literature, Culture, and Communication; the School of Psychology; and the Interactive Media and Technology Center. GVU facilities utilize state-of-the-art high-performance servers and graphics workstations from major manufacturers such as Dell, HP, and Apple. GVU is also a partner in the Aware Home Research Initiative (AHRI). A partial list of specialized GVU resources includes:
  • The App Lab: a “hackerspace” devoted to the creation of mobile applications and technologies across a range of platforms.
    • Numerous iOS, Android and Kindle devices for checkout.
    • Several dual-booted OSX and Windows workstations with current, mobile and gaming software dev environments, including Unity for publishing.
  • The Proto Lab: a 1,200 sq. ft. lab devoted to the prototyping of experimental devices such as wearable computers and equipped with devices such as:
    • 3D Scanner
    • Industrial 3D Printers (Projet 3510 HD, Dimension SST 768)
    • Poster Printer (HP DesignJet Z3200ps)
    • Laser Cutter/Etcher (Epilog)
    • Circuit Mill (LPKF ProtoMat S62)
    • CNC Router (K2CNC 4’x8’)
    • Shop equipment (band saw, table saw, drill press)
    • Surface mount and through-hole soldering stations
    • Bench Equipment (Power Supplies, Oscilloscope/Logic Analyzer, RF Generator and spectrum analyzer)
    • Arduino Development/Test Circuit Boards
    • Silk Screening Equipment (for conductive ink)
    • Embroidery Machines (for conductive thread), sewing equipment, leather stitcher
  • The Usability Lab: complete with separated viewing area, for conducting and capturing video and screen recordings of computer-based studies.
  • A High-Definition (HDTV) Video Conferencing System (LifeSize).
  • A Video Webcasting AV Cart with High-Definition (HD) capture capability.
  • Sony Bloggie HD Cameras for field recording
  • 12-Camera IR Motion Capture System (Vicon)
  • Several Polhemus, Ascension, and Intersense tracking systems and head-mounted displays
  • Several Smartboards
  • A Segway Human Transporter
  • A Poster Printer (HP DesignJet 800)

 

  • The Institute for Robotics and Intelligent Machines (IRIM) is located in the College of Computing Building and houses a variety of research labs in a multi-facility collection of workplaces. In addition, IRIM-affiliated laboratories are operated by non-CoC faculty in the College of Architecture; the College of Engineering (Schools of Aerospace Engineering, Biomedical Engineering, Mechanical Engineering, and Electrical and Computer Engineering); the College of Science (School of Physics); and the Georgia Tech Research Institute. A partial list of specialized IRIM and robotics equipment includes:

Robots:

  • 3 cells with KR210 KUKA robots, material handling equipment, and AGVs.
  • Several KUKA robotic arms
  • A Schunk robotic arm (LWA3) with dexterous hand (SDH2).
  • Golem Krang: a mobile manipulator designed and built by the Humanoid Robotics Lab, featuring a Schunk robotic arm mounted on a custom Segway Human Transporter.
  • Simon: a face-to-face, robotic research platform featuring an upper-torso humanoid social robot with two 7-DOF arms, two 4-DOF hands, and a socially expressive head and neck, including two 2-DOF ears with full RGB spectrum LEDs.
  • A Segway RMP200 Research Mobility Platform.
  • A Mobile Robotics PeopleBot.
  • A PR2 Willow Garage robot.
  • Rovio WowWee mobile webcam.
  • 18 Sony AIBO legged robots
  • 2 iRobot ATRV minis, 1 IS Robotics Pebbles III robot
  • 4 Pioneer 2-DXE, 3 Pioneer AT robots
  • 1 Evolution Scorpion, 1 Evolution ER1, 1 Segway, 1 Denning DRV-I robot
  • 3 RWI ATRV-Jr, 5 ActivMedia Amigobots, 1 Nomad 200, 5 Nomad 150, 1 Hermes II Hexapod, 3 Blizzard robots
  • several SICK scanners, various lasers, vision/motion systems, cameras, and associated equipment.
  • Electronics Shop: a lab with oscilloscopes, logic analyzers, programmable power supplies, soldering equipment, etc.
  • Wilks Cluster: a 15-node, 180-core SuperMicro 6016T-NTF compute cluster (2-socket, 6-core, 2.67GHz Intel X5650, 96GB RAM).
  • A Segway Human Transporter.
  • A Poster Printer (HP DesignJet 800).

Georgia Tech Research Networking Capabilities

Georgia Tech's state-of-the-art network provides capabilities with few parallels in academia or industry, delivering a unique and sustained competitive advantage to Georgia Tech faculty, students, and staff. Since the mid-1980s, Georgia Tech and OIT have provided instrumental leadership in high-performance networks for research and education (R&E) regionally, nationally, and internationally.

 

A founding member of Internet2 (I2) and National LambdaRail (NLR) – high-bandwidth networks dedicated to the needs of the research and education community – Georgia Tech manages and operates Southern Crossroads (SoX), the I2 regional GigaPOP. We work within six Southeastern states to make affordable high-performance network access and network services available to researchers and faculty at Georgia Tech, their collaborators, other higher-education systems, K-12 systems, and beyond.

 

Georgia Tech's network has high-performance connectivity to other members of the research and education community worldwide through 100 Gbps (gigabits per second) links to SoX/SLR, which is peered with the Internet2 Network, TransitRail, Oak Ridge National Labs (ORNL), the Department of Energy's Energy Sciences Network (ESNet), NCREN, NASA's NREN, MREN, FLR, Peachnet, LONI, and 3ROX, as well as other SoX participants in the Southeast.

 

In addition to the exceptional R&E network connectivity provided to all Georgia Tech faculty, students, and staff, dedicated bandwidth in support of specific collaborations and research is also possible.