Publication Date: 3Q 2020
NVIDIA announced its new Ampere architecture with the A100 graphics processing unit (GPU). The A100 is offered as part of an extremely high-density DGX board and is optimized for artificial intelligence (AI) and machine learning (ML) workloads. It represents a major step forward in performance, but also poses an unprecedented challenge in power density and cooling. It is also a big bet on an AI strategy defined by very large multipurpose models, centralized model training, and cloud services.
Key Questions Addressed:
- What is the NVIDIA Ampere A100, and why does it matter?
- What does the A100 tell us about the wider direction of the industry?
- What are the key markets that NVIDIA is targeting with the A100?
- What impact will deploying A100s have on data centers?
- How will the impact on data centers change the wider industry?
- How is NVIDIA supporting customers who deploy A100s in their data centers?
Who Needs This Report?
- Cloud service providers
- Data center and colocation providers
- Enterprises deploying AI and ML
- Semiconductor vendors
- Server OEMs
- Investor community
Table of Contents
The Ampere A100: Extraordinary power in both senses of the word
A clear bet on bigger, multipurpose, centralized models
Despite extreme data center requirements, hyperscale providers are not certain to win
Vendors need to judge the correct product mix; enterprises need to decide whether AI as such is their competitive edge