
              NVIDIA DGX-2

              The world's most powerful AI system for the most complex AI challenges.



              Experience 10X the deep learning performance with NVIDIA® DGX-2™, the world’s first 2 petaFLOPS system that combines 16 interconnected GPUs for the highest levels of speed and scale from NVIDIA. Powered by NVIDIA® DGX™ software and the scalable architecture of NVIDIA NVSwitch, DGX-2 is the platform of choice for taking on the world’s most complex AI challenges.


              Unbeatable Compute Power for Unprecedented Training

              AI is getting increasingly complex, demanding unprecedented levels of compute power. NVIDIA DGX-2 packs 16 of the world’s most powerful GPUs to accelerate new AI model types that were previously untrainable. Groundbreaking GPU scalability lets you train 4X bigger models on a single node with 10X the performance of an 8-GPU system.

              NVIDIA DGX-2 is now available in two models, including the new, enhanced DGX-2H, specifically engineered for maximum performance in the most demanding applications. Learn how DGX-2H serves as the compute building block of the DGX-2 POD, the first AI supercomputing infrastructure to achieve Top 500 performance.

              A Revolutionary AI Network Fabric

              With DGX-2, model complexity and size are no longer constrained by the limits of traditional architectures. Now, you can take advantage of model-parallel training with the NVIDIA NVSwitch networking fabric. It’s the innovative technology behind the world’s first 2-petaFLOPS GPU accelerator with 2.4 TB/s of bisection bandwidth, delivering a 24X increase over prior generations.

              AI Scale on a Whole New Level

              DGX-2 delivers a ready-to-go solution for rapidly scaling up AI. Flexible networking options for building the largest deep learning compute clusters, combined with virtualization, speed scaling and improve user and workload isolation in shared infrastructure environments. With an accelerated deployment model and an architecture purpose-built to scale easily, your team can spend more time driving insights and less time building infrastructure.

              Enterprise-Grade AI Infrastructure

              DGX-2 is purpose-built for reliability, availability, and serviceability (RAS) to reduce unplanned downtime, streamline serviceability, and maintain operational continuity. Supported by NVIDIA expertise and built for the rigor of around-the-clock operations, DGX-2 keeps your most important AI endeavors up and running.

              Learn more about our enterprise-grade support.


              10X Performance Gain In Less Than a Year

              The New Standard in Backtesting

              NVIDIA DGX-2 with accelerated Python processed 20 million trading simulations and set a new standard in the latest STAC-A3 backtesting benchmark report.

              Raising the Bar for AI Infrastructure

              NVIDIA DGX Systems set eight AI performance records in MLPerf 0.6, a set of benchmarks that enable the machine learning (ML) field to measure training performance across a diverse set of workloads.

              Shattering World Records

              Learn how the world’s 22nd fastest supercomputer, the NVIDIA DGX SuperPOD, built with DGX systems, is being used to accelerate autonomous vehicle development.

              NVIDIA DGX-2

              Explore the powerful components of DGX-2.
              16X FULLY CONNECTED TESLA V100 32GB
              0.5 TB total high-bandwidth memory for more complex deep learning models
              12X NVSWITCHES
              Delivering 2.4 TB/s bisection bandwidth
              8X EDR INFINIBAND/100 GigE
              1600 Gb/sec total bi-directional bandwidth with low latency
              NVLINK PLANE CARD
              Innovative 2-GPU baseboard interconnect
              2X XEON PLATINUM
              Latest-generation CPUs for faster, more resilient boot and storage management
              1.5TB SYSTEM MEMORY
              More system memory to handle larger deep learning workloads
              High I/O throughput for your AI data
              30TB NVME SSDs
              Rapidly ingest the largest datasets into cache

              Red Hat® Enterprise Linux® on NVIDIA® DGX™ Systems

              NVIDIA DGX POD

              IT-Approved Infrastructure for the AI Enterprise


              NVIDIA DGX-2 delivers industry-leading performance, incorporating our greatest innovations to solve the world’s most complex AI challenges.

              GET STARTED

              Colocation Services for NVIDIA DGX


              Flexible Leasing Options Available