AWS Launches Its Next-Gen GPU Instances Powered by NVIDIA’s Latest A100 GPUs

Amazon Web Services (AWS) has announced its newest GPU-equipped instances. Dubbed P4, the new instances arrive a decade after AWS released its first range of Cluster GPU instances. This latest generation is driven by Intel Cascade Lake processors and eight of NVIDIA’s A100 Tensor Core GPUs. AWS says the instances deliver up to 2.5 times the deep learning performance of the previous generation, and training a comparable model on them can be about 60% cheaper.

In AWS’s naming convention, the instance is called p4d.24xlarge; its eight A100 GPUs are linked via NVIDIA’s NVLink interconnect and support the company’s GPUDirect interface.

It is clearly a very powerful machine, with 320 GB of high-bandwidth GPU memory and 400 Gbps of networking. It also includes 96 CPU cores, 1.1 TB of system memory, and 8 TB of SSD storage.
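For readers who want to experiment with these instances, a minimal sketch of requesting a single p4d.24xlarge through the boto3 EC2 client might look like the following; the AMI ID and key pair name are placeholders rather than values from the announcement.

```python
# Minimal sketch: launching one p4d.24xlarge instance with boto3.
# The ImageId and KeyName below are placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: substitute a Deep Learning AMI from your account
    InstanceType="p4d.24xlarge",      # 8x A100, 96 vCPUs, 1.1 TB memory, 400 Gbps networking
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder key pair name
)

print(response["Instances"][0]["InstanceId"])
```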

The on-demand price will be $32.77 per hour, while one-year reserved instances will come in at just under $20 per hour and three-year reserved instances at $11.57 per hour.
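As a back-of-the-envelope comparison, the sketch below prices a hypothetical 100-hour training run under each plan, using only the hourly rates quoted above; the run length is illustrative, not from the announcement.

```python
# Rough cost comparison for a hypothetical 100-hour training run
# at the hourly rates quoted in the article.
HOURS = 100

rates = {
    "on-demand": 32.77,
    "1-year reserved (approx.)": 20.00,  # article quotes "just under $20"
    "3-year reserved": 11.57,
}

for plan, rate in rates.items():
    print(f"{plan}: ${rate * HOURS:,.2f}")
```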

At the other end of the scale, 4,000 or more GPUs can be combined into an EC2 UltraCluster for supercomputer-class workloads. AWS is also working with a number of customers to evaluate these instances and clusters, including Toyota Research Institute, GE Healthcare, and Aon, among others.

The first-generation Cluster GPU instances were introduced in 2010, and the previous generation launched in 2019, with the timeline running through G2 in 2013, P2 in 2016, P3 in 2017, G3 in 2017, P3dn in 2018, and G4 in 2019.

The update also follows NVIDIA’s introduction this year of its next-generation GPUs, which included the RTX 3000 Series for personal computers and the A100 for AI, data analytics, and HPC in data centers, all built on the Ampere architecture.

AWS is the most recent major public cloud provider to support NVIDIA’s A100 GPUs. In July, less than two months after Ampere’s launch, Google Cloud introduced its A100-based A2 instance family. In August, Microsoft Azure released its A100-powered NDv4 instances in preview. The following month, Oracle Cloud announced that its bare-metal A100-powered instance, GPU4.8, had become generally available.

The computing and AI industry is growing exponentially, and some analysts say AI is the future. Data analysis workloads, meanwhile, grow rapidly more power-hungry as the scale of available data increases. With AWS clusters powered by state-of-the-art GPUs and data-center-optimized CPUs, customers can remotely access all the power they need to train and run their AI models or analyze large amounts of data without having to invest in their own hardware.
