With extreme performance being a core foundation for the Power of Two, we’re particularly excited to announce that the Accelerator-Optimized VM (A2) family is now available on Google Compute Engine. Importantly, this is the first A100-based offering in the public cloud; the technology is currently available through a private alpha program, with public availability coming later this year.

Accelerator-optimized VMs
Why is the NVIDIA Ampere A100 Tensor Core GPU special? With up to 16 GPUs in a single VM, the A2 family is designed for the most demanding workloads, such as CUDA-enabled ML training and inference, and high performance computing (HPC).
As for the standout headlines behind the A100:
- Each GPU offers up to 20x the compute performance of the previous generation
- Each GPU comes with 40 GB of high-performance HBM2 memory
- The A2 family also uses NVIDIA’s HGX A100 systems to deliver up to 600 GB/s of GPU-to-GPU bandwidth
Built on the new Ampere GPU architecture, the A100 is available in a variety of configurations that can be matched to your precise needs for GPU compute power.
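As a rough sketch of how picking a configuration looks in practice, an A2 VM can be requested by machine type with the `gcloud` CLI; in the A2 family the GPU count is implied by the machine type rather than a separate accelerator flag. The instance name, zone, and image below are illustrative assumptions, and machine-type availability may differ, particularly during the alpha:

```shell
# Illustrative sketch: create a single-A100 A2 VM.
# "my-a2-vm", the zone, and the boot image are placeholders, not
# values from the announcement; adjust to your project and region.
gcloud compute instances create my-a2-vm \
    --zone=us-central1-a \
    --machine-type=a2-highgpu-1g \
    --maintenance-policy=TERMINATE \
    --image-family=debian-10 \
    --image-project=debian-cloud
```

Larger configurations follow the same pattern with a bigger machine type (up to 16 GPUs per VM, per the figures above).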