
AWS Turns to G4 Instances for Handling Machine Learning Workloads

The Nvidia GPU-powered G4 compute instance was built to help organizations handle machine learning workloads in the cloud more efficiently.

Artificial intelligence and machine learning workloads often don't perform well on general-purpose computing hardware, on-premises or in the cloud. For cloud users, however, there is now a new option: Amazon Elastic Compute Cloud (EC2) G4 instances.

The G4 instance, which became generally available for Amazon Web Services (AWS) users on Sept. 20, provides an Nvidia GPU-powered compute instance type that is purpose-built for machine learning workloads. G4 instances include AWS custom Intel Xeon Scalable "Cascade Lake" processors, local NVMe storage and up to 50 Gbps of networking throughput.
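Since G4 instances are requested through the standard EC2 API like any other instance type, a minimal sketch of a launch request might look like the following. Note the AMI ID and key pair name below are placeholders, not values from this article; only the `g4dn.xlarge` instance type name is real.

```python
# Sketch: parameters for launching an EC2 G4 instance through the EC2
# run_instances API. The AMI ID and key pair name are placeholders.
params = {
    "ImageId": "ami-PLACEHOLDER",   # e.g., a Deep Learning AMI (placeholder)
    "InstanceType": "g4dn.xlarge",  # smallest G4 size: 1 T4 GPU, 4 vCPUs
    "MinCount": 1,
    "MaxCount": 1,
    "KeyName": "my-key-pair",       # placeholder key pair name
}

# With boto3 installed and AWS credentials configured, the dict would be
# passed to the API as: boto3.client("ec2").run_instances(**params)
print(params["InstanceType"])
```

Larger G4 sizes scale up the vCPU, memory and GPU counts, so switching sizes is just a matter of changing the `InstanceType` string.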

Nvidia first previewed the G4 in March at its GTC event and had customers test it to provide feedback. The GPU at the foundation of the G4 is the Nvidia T4 Tensor Core GPU, which includes 2,560 Nvidia CUDA cores and delivers up to 8.1 TFLOPS of single-precision floating point performance and up to 65 TFLOPS of mixed-precision performance. Floating point throughput at various precisions is a key metric GPU vendors use to rate the relative capabilities of hardware.

The G4 isn't the first time that AWS has offered GPU instances in its cloud. In October 2017, AWS announced the availability of P3 instances, which are powered by Nvidia Tesla V100 GPUs. AWS expects that G4 instance users will be those that want the mixed-precision ML capabilities offered by Tensor Cores but don’t require the level of performance offered by the Nvidia V100 GPUs that power P3 instances.

Google Joins AWS in Offering T4 GPUs

AWS isn't the only public cloud provider that offers T4 GPUs; Google Cloud Platform (GCP) now offers them as an instance option as well.

Paresh Kharya, director of product management for accelerated computing at Nvidia, told ITPro Today that Google and AWS now provide access to the same T4 GPUs, with support for Nvidia platform components such as the software tools in the Nvidia CUDA-X libraries, the Nvidia Quadro Virtual Workstation drivers and the Cloud Gaming driver. AWS customers can use SageMaker to more easily get up and running on Nvidia GPUs, while GCP customers can use Cloud Machine Learning Engine (CMLE) to get projects going.

"The uniqueness of the AWS and GCP offerings comes from the integration of T4 into their platforms and managed service tools and go-to-market partner strategies," Kharya said. "With the addition of AWS, more customers around the world can now instantly tap into the versatility and performance of Nvidia T4 GPUs to accelerate workloads such as AI inference, AI training, graphic visualization and cloud gaming."

In 2020, AWS customers will get access to additional Nvidia GPU capabilities. Kharya said that starting next year, AWS customers using VMware Cloud will gain access to a new, highly scalable and secure cloud service consisting of Amazon EC2 bare metal instances accelerated by Nvidia T4 GPUs and the new Nvidia Virtual Compute Server (vComputeServer) software.

"This service will enable customers to seamlessly migrate VMware vSphere-based applications and containers to the cloud, unchanged, where they can be modernized to take advantage of high-performance computing, machine learning, data analytics and video processing applications," he said.
