DGX Station

Data-center performance for desktops

Specialized for intensive AI workloads, the DGX Station brings data-center performance to developers and researchers working with cutting-edge AI models, LLMs, and inference tasks.


The ultimate AI performance on your desk

NVIDIA unveiled two new AI supercomputers at GTC 2025: the DGX Spark and the DGX Station, the latter being the larger, higher-end of the two. You can also check out its smaller sibling, the NVIDIA DGX Spark.

Built from the ground up for AI

The DGX Station is designed from the ground up to build and run AI. It is powered by the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip with a massive 784GB of coherent memory, delivering an incredible amount of compute for developing and running large-scale AI training and inference workloads at your desk.

NVIDIA Grace Blackwell Ultra Desktop Superchip

The NVIDIA DGX Station comes with an NVIDIA Blackwell Ultra GPU, built with the latest-generation NVIDIA CUDA® cores and 5th-generation Tensor Cores, connected to a high-performance NVIDIA Grace CPU via the NVIDIA® NVLink®-C2C interconnect, delivering best-in-class system communication and performance.

5th Generation Tensor Cores

NVIDIA DGX Stations are powered by the latest Blackwell-generation NVIDIA Tensor Cores, enabling 4-bit floating point (FP4) AI. FP4 boosts performance and increases the size of model that the available memory can support, while maintaining high accuracy.
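To give a rough sense of what FP4 means, here is a minimal sketch, not NVIDIA's actual Tensor Core implementation (which adds scaling factors and hardware rounding): it rounds values to the nearest number representable in the common E2M1 FP4 layout, whose magnitudes are {0, 0.5, 1, 1.5, 2, 3, 4, 6}.

```python
# Illustrative sketch of FP4 (E2M1 layout) round-to-nearest quantization.
# This is a simplified assumption for illustration; real FP4 inference also
# applies per-block scaling so values land in this small representable range.

FP4_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_fp4(x: float) -> float:
    """Round a value to the nearest representable FP4 (E2M1) number."""
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), 6.0)  # clamp to FP4's maximum representable magnitude
    nearest = min(FP4_MAGNITUDES, key=lambda m: abs(m - mag))
    return sign * nearest

weights = [0.07, -1.4, 2.6, -9.0]
print([quantize_fp4(w) for w in weights])  # → [0.0, -1.5, 3.0, -6.0]
```

At 4 bits per weight, FP4 halves the footprint of FP8 and quarters that of FP16, which is what lets larger models fit in the same amount of memory.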

NVIDIA ConnectX-8 SuperNIC

The NVIDIA ConnectX®-8 SuperNIC™ is optimized to supercharge hyperscale AI computing workloads. With up to 800 gigabits per second (Gb/s) of throughput, the NVIDIA ConnectX-8 SuperNIC delivers extremely fast, efficient network connectivity, significantly enhancing system performance for AI factories.
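To put 800 Gb/s in perspective, a quick back-of-the-envelope calculation (the 288GB payload below is the GPU memory figure from the spec table; real transfers also see protocol overhead):

```python
# Back-of-the-envelope: what 800 Gb/s of peak network bandwidth means.
link_gbps = 800                       # ConnectX-8 SuperNIC peak, per the text
link_gigabytes_per_s = link_gbps / 8  # 8 bits per byte -> 100 GB/s
payload_gb = 288                      # e.g. one DGX Station's full GPU memory
seconds = payload_gb / link_gigabytes_per_s
print(f"{link_gigabytes_per_s:.0f} GB/s -> {seconds:.2f} s to move {payload_gb} GB")
# → 100 GB/s -> 2.88 s to move 288 GB
```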

NVIDIA DGX OS

NVIDIA DGX OS provides a stable, fully qualified operating system for running AI, machine learning, and analytics applications on the NVIDIA DGX platform. It includes system-specific configurations, drivers, and diagnostic and monitoring tools, and it makes it easy to scale across multiple NVIDIA DGX Station systems, NVIDIA DGX Cloud, or other accelerated data-center or cloud infrastructure.

NVIDIA NVLink-C2C Interconnect

NVIDIA NVLink-C2C extends the industry-leading NVLink technology to a chip-to-chip interconnect between the GPU and CPU, enabling high-bandwidth coherent data transfers between processors and accelerators.
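For a rough sense of scale, the sketch below compares the quoted up-to-900 GB/s NVLink-C2C figure from the spec table against a conventional PCIe 5.0 x16 link; the ~64 GB/s per-direction PCIe figure is an approximation used here for illustration, not a number from this article.

```python
# Rough bandwidth comparison (illustrative assumption, not a benchmark).
nvlink_c2c_gbs = 900  # NVLink-C2C peak, per the spec table
pcie5_x16_gbs = 64    # approximate PCIe 5.0 x16 peak per direction
print(f"~{nvlink_c2c_gbs / pcie5_x16_gbs:.0f}x the bandwidth of PCIe 5.0 x16")
# → ~14x the bandwidth of PCIe 5.0 x16
```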

Large Coherent Memory for AI

AI models continue to grow in scale and complexity. Training and running these models within NVIDIA Grace Blackwell Ultra's large coherent memory allows massive-scale models to be trained and run efficiently within a single memory pool, thanks to the C2C superchip interconnect, which bypasses the bottlenecks of traditional CPU and GPU systems.
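As a back-of-the-envelope sketch, the 784GB pool (288GB of GPU HBM3e plus 496GB of CPU LPDDR5X, per the spec table) translates into model sizes roughly as follows, counting weights only and ignoring activations, KV cache, and framework overhead:

```python
# Rough capacity sketch: model parameters that fit in 784GB at each precision.
# Weights only; activations, KV cache, and overhead are deliberately ignored.
POOL_GB = 784  # 288GB HBM3e (GPU) + 496GB LPDDR5X (CPU)

capacity = {}  # precision -> billions of parameters
for name, bytes_per_param in [("FP16", 2.0), ("FP8", 1.0), ("FP4", 0.5)]:
    # GB divided by bytes-per-parameter yields billions of parameters
    capacity[name] = POOL_GB / bytes_per_param
    print(f"{name}: ~{capacity[name]:.0f}B parameters")
# → FP16: ~392B parameters, FP8: ~784B, FP4: ~1568B
```

This is the practical upshot of FP4 discussed above: the same memory pool that holds a ~392B-parameter model at FP16 can hold roughly four times as many parameters at FP4.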

NVIDIA AI Software Stack

A full-stack solution for AI workloads, including fine-tuning, inference, and data science. NVIDIA's AI software stack lets you work locally and deploy easily to the cloud or data center, using the same tools, libraries, frameworks, and pretrained models from desktop to cloud.

Workload Optimized Power-shifting

NVIDIA DGX Stations take advantage of AI-based system optimizations that intelligently shift power based on the currently active workload, continually maximizing performance and efficiency.


NVIDIA DGX Station Specifications

NVIDIA GPU: 1x NVIDIA Blackwell Ultra
NVIDIA CPU: 1x NVIDIA Grace (72-core Arm Neoverse V2)
GPU Memory: Up to 288GB HBM3e | 8 TB/s
CPU Memory: Up to 496GB LPDDR5X | Up to 396 GB/s
NVLink-C2C: Up to 900 GB/s
Networking | Peak Bandwidth: NVIDIA ConnectX®-8 SuperNIC | Up to 800 Gb/s
Supported OS: NVIDIA DGX OS
MIG (Multi-Instance GPU): Up to 7 instances