Linux Accelerated Computing Instances
If you require high parallel processing capability, you'll benefit from using accelerated computing instances, which provide access to NVIDIA GPUs. You can use accelerated computing instances to accelerate many scientific, engineering, and rendering applications by leveraging the CUDA or Open Computing Language (OpenCL) parallel computing frameworks. You can also use them for graphics applications, including game streaming, 3-D application streaming, and other graphics workloads.
Accelerated computing instances run as HVM-based instances. Hardware virtual machine (HVM) virtualization uses hardware-assist technology provided by the AWS platform. With HVM virtualization, the guest VM runs as if it were on a native hardware platform, which enables Amazon EC2 to provide dedicated access to one or more discrete GPUs in each accelerated computing instance.
You can cluster accelerated computing instances into a placement group. Placement groups provide low latency and high-bandwidth connectivity between the instances within a single Availability Zone. For more information, see Placement Groups.
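For example, if you use the AWS CLI, you can create a cluster placement group and then launch instances into it. The following is a minimal sketch; the group name, AMI ID, and instance count are placeholders that you would replace with your own values.

aws ec2 create-placement-group --group-name my-gpu-cluster --strategy cluster
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type p2.16xlarge --count 2 \
    --placement GroupName=my-gpu-cluster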
For information about Windows accelerated computing instances, see Windows Accelerated Computing Instances in the Amazon EC2 User Guide for Windows Instances.
Accelerated Computing Instance Families
Accelerated computing instance families use hardware accelerators, or co-processors, to perform some functions, such as floating point number calculation and graphics processing, more efficiently than is possible in software running on CPUs. The following accelerated computing instance families are available for you to launch in Amazon EC2.
P2 Instances
P2 instances use NVIDIA Tesla K80 GPUs and are designed for general purpose GPU computing using the CUDA or OpenCL programming models. P2 instances provide high bandwidth networking, powerful single and double precision floating-point capabilities, and 12 GiB of memory per GPU, which makes them ideal for deep learning, graph databases, high performance databases, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, rendering, and other server-side GPU compute workloads.
- P2 instances support enhanced networking with the Elastic Network Adapter. For more information, see Enabling Enhanced Networking with the Elastic Network Adapter (ENA) on Linux Instances in a VPC. For a quick way to verify that ENA is in use, see the example after this list.
- P2 instances are EBS-optimized by default. For more information, see Amazon EBS–Optimized Instances.
- P2 instances support NVIDIA GPUDirect peer-to-peer transfers. For more information, see NVIDIA GPUDirect.
- There are several GPU setting optimizations that you can perform to achieve the best performance on P2 instances. For more information, see Optimizing GPU Settings (P2 Instances Only).
- The p2.16xlarge instance type provides the ability for an operating system to control processor C-states and P-states. For more information, see Processor State Control for Your EC2 Instance.
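As a quick check that enhanced networking is active on a P2 instance, you can inspect the network driver on the instance and the enaSupport attribute with the AWS CLI. This is a minimal sketch; it assumes that the ethtool package is available, that eth0 is the primary network interface, and the instance ID shown is a placeholder.

[ec2-user ~]$ modinfo ena | head -n 1      # confirms the ena kernel module is available on the instance
[ec2-user ~]$ ethtool -i eth0              # "driver: ena" indicates the interface is using the ENA driver
aws ec2 describe-instances --instance-ids i-1234567890abcdef0 --query "Reservations[].Instances[].EnaSupport"
    # run from a machine with the AWS CLI configured; a value of true indicates ENA support is enabled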
G2 Instances
G2 instances use NVIDIA GRID K520 GPUs and provide a cost-effective, high-performance platform for graphics applications using DirectX or OpenGL. NVIDIA GRID GPUs also support NVIDIA’s fast capture and encode API operations. Example applications include video creation services, 3D visualizations, streaming graphics-intensive applications, and other server-side graphics workloads.
CG1 Instances
CG1 instances use NVIDIA Tesla M2050 GPUs and are designed for general purpose GPU computing using the CUDA or OpenCL programming models. CG1 instances provide customers with high bandwidth networking, double precision floating-point capabilities, and error-correcting code (ECC) memory, making them ideal for high performance computing (HPC) applications.
Hardware Specifications
For more information about the hardware specifications for each Amazon EC2 instance type, see Amazon EC2 Instances.
Accelerated Computing Instance Limitations
Accelerated computing instances have the following limitations:
- You must launch the instance using an HVM AMI.
- The instance can't access the GPU unless the NVIDIA drivers are installed. For a quick way to check whether the GPU is visible and the driver is loaded, see the example after this list.
- There is a limit on the number of instances that you can run. For more information, see How many instances can I run in Amazon EC2? in the Amazon EC2 FAQ. To request an increase in these limits, use the following form: Request to Increase Amazon EC2 Instance Limit.
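The following commands offer a quick way to confirm that the GPU is visible to the instance and whether the NVIDIA driver is loaded. This is a sketch and assumes the pciutils package (which provides lspci) is installed on the instance.

[ec2-user ~]$ lspci | grep -i nvidia       # the GPU should appear as an NVIDIA device on the PCI bus
[ec2-user ~]$ nvidia-smi -L                # lists the GPUs only if the NVIDIA driver is installed and loaded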
AMIs for Accelerated Computing Instances
To help you get started, NVIDIA provides AMIs for accelerated computing instances. These reference AMIs include the NVIDIA driver, which enables full functionality and performance of the NVIDIA GPUs.
For a list of AMIs with the NVIDIA driver, see AWS Marketplace (NVIDIA GRID).
You can launch accelerated computing instances using any HVM AMI.
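If you are not sure whether an AMI uses HVM virtualization, you can check its virtualization type with the AWS CLI before launching; the AMI ID below is a placeholder.

aws ec2 describe-images --image-ids ami-xxxxxxxx --query "Images[].VirtualizationType"

A value of hvm indicates that the AMI can be used with accelerated computing instances.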
Installing the NVIDIA Driver on Amazon Linux
An accelerated computing instance must have the appropriate NVIDIA driver. The NVIDIA driver you install must be compiled against the kernel that you intend to run on your instance.
Amazon provides AMIs in the AWS Marketplace with updated and compatible builds of the NVIDIA kernel drivers for each official kernel upgrade. If you decide to use a different NVIDIA driver version than the one that Amazon provides, or to use a kernel that's not an official Amazon build, you must uninstall the Amazon-provided NVIDIA packages from your system to avoid conflicts with the driver versions that you are trying to install.
Use this command to uninstall Amazon-provided NVIDIA packages:
[ec2-user ~]$ sudo yum erase nvidia cuda
The Amazon-provided CUDA toolkit package has dependencies on the NVIDIA drivers. Uninstalling the NVIDIA packages erases the CUDA toolkit. You must reinstall the CUDA toolkit after installing the NVIDIA driver.
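Because the driver module is compiled against the kernel that you intend to run, it can also help to confirm the running kernel and the matching kernel-devel package before installing a driver manually. This is a minimal check and assumes Amazon Linux with yum.

[ec2-user ~]$ uname -r                                   # kernel version the instance is currently running
[ec2-user ~]$ yum list installed | grep kernel-devel     # header packages available for building the driver module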
You can download NVIDIA drivers from http://www.nvidia.com/Download/Find.aspx. Select the appropriate driver for your instance:
P2 Instances
| Product Type | Tesla |
| Product Series | K-Series |
| Product | K-80 |
| Operating System | Linux 64-bit |
| Recommended/Beta | Recommended/Certified |
G2 Instances
| Product Type | GRID |
| Product Series | GRID Series |
| Product | GRID K520 |
| Operating System | Linux 64-bit |
| Recommended/Beta | Recommended/Certified |
CG1 Instances
| Product Type | Tesla |
| Product Series | M-Class |
| Product | M2050 |
| Operating System | Linux 64-bit |
| Recommended/Beta | Recommended/Certified |
Installing the NVIDIA Driver Manually
To install the driver for an Amazon Linux AMI
- Run the yum update command to get the latest versions of packages for your instance.
  [ec2-user ~]$ sudo yum update -y
- Reboot your instance to load the latest kernel version.
  [ec2-user ~]$ sudo reboot
- Reconnect to your instance after it has rebooted.
- Install the gcc compiler and the kernel-devel package for the version of the kernel you are currently running.
  [ec2-user ~]$ sudo yum install -y gcc kernel-devel-`uname -r`
- Download the driver package that you identified earlier. For example, the following command downloads the 352.99 version of the NVIDIA driver for P2 instances.
  [ec2-user ~]$ wget http://us.download.nvidia.com/XFree86/Linux-x86_64/352.99/NVIDIA-Linux-x86_64-352.99.run
- Run the self-install script to install the NVIDIA driver. For example:
  [ec2-user ~]$ sudo /bin/bash ./NVIDIA-Linux-x86_64-352.99.run
- Reboot the instance.
  [ec2-user ~]$ sudo reboot
- Confirm that the driver is functional. The response for the following command lists the installed NVIDIA driver version and details about the GPUs.
  Note: This command may take several minutes to run.
  [ec2-user ~]$ nvidia-smi -q | head
  ==============NVSMI LOG==============
  Timestamp           : Thu Aug 25 04:59:03 2016
  Driver Version      : 352.99
  Attached GPUs       : 8
  GPU 0000:00:04.0
      Product Name    : Tesla K80
      Product Brand   : Tesla
- (P2 instances only) If you are using a P2 instance, complete the optimization steps in the next section to achieve the best performance from your GPU.
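Because the installed driver module is built against the kernel that was running at install time, the GPU may be inaccessible after a later kernel update until the module is rebuilt. The following sketch reuses the same commands from the procedure above and assumes you have rebooted into the new kernel and that the downloaded installer file is still present on the instance.

[ec2-user ~]$ sudo yum install -y gcc kernel-devel-`uname -r`     # headers for the newly booted kernel
[ec2-user ~]$ sudo /bin/bash ./NVIDIA-Linux-x86_64-352.99.run     # rebuilds the kernel module for that kernel
[ec2-user ~]$ sudo reboot
[ec2-user ~]$ nvidia-smi -q | head                                # confirm the driver is functional again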
Optimizing GPU Settings (P2 Instances Only)
There are several GPU setting optimizations that you can perform to achieve the best performance on P2 instances. By default, the NVIDIA driver uses an autoboost feature, which varies the GPU clock speeds. By disabling the autoboost feature and setting the GPU clock speeds to their maximum frequency, you can consistently achieve the maximum performance with your P2 instances. The following procedure helps you to configure the GPU settings to be persistent, disable the autoboost feature, and set the GPU clock speeds to their maximum frequency.
To optimize P2 GPU settings
- Configure the GPU settings to be persistent.
  Note: This command may take several minutes to run.
  [ec2-user ~]$ sudo nvidia-smi -pm 1
  Enabled persistence mode for GPU 0000:00:0F.0.
  Enabled persistence mode for GPU 0000:00:10.0.
  Enabled persistence mode for GPU 0000:00:11.0.
  Enabled persistence mode for GPU 0000:00:12.0.
  Enabled persistence mode for GPU 0000:00:13.0.
  Enabled persistence mode for GPU 0000:00:14.0.
  Enabled persistence mode for GPU 0000:00:15.0.
  Enabled persistence mode for GPU 0000:00:16.0.
  Enabled persistence mode for GPU 0000:00:17.0.
  Enabled persistence mode for GPU 0000:00:18.0.
  Enabled persistence mode for GPU 0000:00:19.0.
  Enabled persistence mode for GPU 0000:00:1A.0.
  Enabled persistence mode for GPU 0000:00:1B.0.
  Enabled persistence mode for GPU 0000:00:1C.0.
  Enabled persistence mode for GPU 0000:00:1D.0.
  Enabled persistence mode for GPU 0000:00:1E.0.
  All done.
- Disable the autoboost feature for all GPUs on the instance.
  [ec2-user ~]$ sudo nvidia-smi --auto-boost-default=0
  All done.
- Set all GPU clock speeds to their maximum frequency.
  [ec2-user ~]$ sudo nvidia-smi -ac 2505,875
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:0F.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:10.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:11.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:12.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:13.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:14.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:15.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:16.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:17.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:18.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:19.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:1A.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:1B.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:1C.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:1D.0
  Applications clocks set to "(MEM 2505, SM 875)" for GPU 0000:00:1E.0
  All done.
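These nvidia-smi settings are generally not retained across reboots. One way to reapply them automatically is to run the same commands at boot time; the following is a minimal sketch that appends them to /etc/rc.local on Amazon Linux, using the P2 clock values from the step above, and assumes nvidia-smi is on the default PATH at boot.

[ec2-user ~]$ sudo tee -a /etc/rc.local <<'EOF'
nvidia-smi -pm 1
nvidia-smi --auto-boost-default=0
nvidia-smi -ac 2505,875
EOF
[ec2-user ~]$ sudo chmod +x /etc/rc.local

Alternatively, you can reapply the settings manually after each reboot.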