
NVIDIA DGX Spark Barebone Computer, 128GB RAM, 4TB SSD | DGXSPARKFOUNEDITUK

Price: Rs. 550,000.00
Availability: Low in Stock at Global Warehouses
Condition: New Factory Sealed
Warranty: 1 Year Warranty
Shipping: Express Shipping Across India, 3–7 Days via Delhivery

Safe, Fast, 100% Genuine. Your Reliable IT Partner.

Best Price Assurance, Bulk Savings, Trusted Worldwide.

Expertise Builds Trust
  • 10 Years, 160+ Countries
  • 6000+ Customers/Projects
  • CCIE, CISSP, JNCIE, NSE 7, AWS, and Google Cloud Experts
24/7 Online Service
Join Partner Network
  • Exclusive Discounts/Service
  • Credit Terms/Priority Supply


NVIDIA DGX Spark Barebone Computer, 128GB RAM, 4TB SSD

Product Specifications

Description

NVIDIA DGX Spark Founders Edition – A Personal AI Supercomputer for Your Desk

The NVIDIA DGX Spark Founders Edition (Model: DGXSPARKFOUNEDITUK) is a groundbreaking personal AI supercomputer powered by the NVIDIA GB10 Grace Blackwell Superchip. Designed for AI developers, researchers, and data scientists, this compact desktop system delivers up to 1 petaFLOP (1,000 TOPS) of FP4 AI computing performance — a class of throughput once reserved for data center GPU clusters — right on your desk. With 128GB of unified LPDDR5x system memory and a 4TB NVMe SSD, it enables local prototyping, fine-tuning, and inference of AI models with up to 200 billion parameters, all while running the full NVIDIA DGX OS software stack preloaded out of the box.

Powered by the NVIDIA GB10 Grace Blackwell Superchip

At the core of the DGX Spark is the NVIDIA GB10 Grace Blackwell Superchip, an integrated system-on-chip (SoC) that unifies GPU and CPU compute in a single unified memory architecture. The Blackwell GPU delivers 6,144 CUDA Cores, 5th-Generation Tensor Cores with FP4 support, and 4th-Generation RT Cores for ray tracing and neural rendering. The 20-core Arm CPU (10x Cortex-X925 high-performance + 10x Cortex-A725 efficiency cores) supercharges data preprocessing, model orchestration, and real-time inferencing. Together, these components operate over a 256-bit unified memory interface at 273 GB/s bandwidth, eliminating the traditional CPU-to-GPU memory bottleneck.

128GB Unified System Memory – Run the World's Largest AI Models Locally

The DGX Spark features 128GB of LPDDR5x unified system memory running at 4266 MHz across 16 channels. Because CPU and GPU share the same memory pool, large AI models can be loaded and operated without slow memory transfers between separate VRAM and RAM. This architecture allows the DGX Spark to:

  • Fine-tune models up to 70 billion parameters directly on device
  • Run inference on models up to 200 billion parameters at FP4 precision
  • Handle computationally complex data science and machine learning pipelines at full speed
  • Connect two DGX Spark units via ConnectX-7 for a 256GB combined memory pool, supporting models up to 405 billion parameters (e.g., Llama 3.1 405B)
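These model-size limits follow directly from the memory arithmetic. As a back-of-envelope check (a sketch that counts weights only and ignores KV cache and activation overhead), FP4 precision stores each parameter in 4 bits, i.e. half a byte:

```python
# Back-of-envelope check of the model sizes quoted above.
# At FP4 precision each parameter occupies 4 bits (0.5 bytes).
BYTES_PER_PARAM_FP4 = 0.5

def model_weight_gb(params_billion: float) -> float:
    """Approximate weight footprint in GB at FP4 precision
    (weights only; KV cache and activations add overhead)."""
    return params_billion * 1e9 * BYTES_PER_PARAM_FP4 / 1e9

print(model_weight_gb(200))   # 100.0 -> fits in the 128GB unified memory
print(model_weight_gb(405))   # 202.5 -> needs the dual-unit 256GB pool
```

A 200B-parameter model at FP4 needs roughly 100GB for weights alone, which is why 128GB of unified memory is the standalone ceiling, and why Llama 3.1 405B (about 203GB of FP4 weights) requires the two-unit 256GB cluster.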

4TB NVMe SSD – Ultra-Fast Local Storage for Large Datasets and Models

The Founders Edition ships with a 4TB NVMe M.2 SSD with hardware self-encryption, providing ample fast local storage for large language models (LLMs), training datasets, checkpoints, and container images. The self-encrypting drive ensures data security for sensitive research or enterprise AI workloads. Standard DGX Spark variants ship with 1TB; this 4TB configuration is exclusive to the Founders Edition.

NVIDIA DGX OS – The Full AI Software Stack, Preloaded

The DGX Spark ships with NVIDIA DGX OS, based on Ubuntu 24.04 LTS, pre-configured with the complete NVIDIA AI software ecosystem. From first boot, you have immediate access to:

  • NVIDIA CUDA, cuDNN, and TensorRT for GPU-accelerated computing
  • NVIDIA NIM microservices for deploying optimised AI inference endpoints
  • TensorRT-LLM (TRT-LLM) for accelerated LLM inference
  • PyTorch, Triton Inference Server, and popular deep learning frameworks
  • NVIDIA NeMo Agent Toolkit — an open-source platform for building, evaluating, and optimising secure autonomous AI agents locally
  • NGC (NVIDIA GPU Cloud) container registry for instant access to pre-trained models, Jupyter notebooks, and AI frameworks
  • NVIDIA Isaac, Metropolis, and Holoscan frameworks for robotics, smart city, and computer vision edge application development
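As an illustration of what the preloaded stack enables, the sketch below calls a locally deployed NIM microservice over its OpenAI-compatible HTTP API using only the Python standard library. The port and model name are placeholder assumptions for illustration, not values from this page:

```python
# Sketch: querying a NIM inference microservice running on the DGX Spark.
# NIM endpoints expose an OpenAI-compatible chat completions API; the
# localhost port and model identifier below are assumed placeholders.
import json
import urllib.request

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local port

def build_request(prompt: str,
                  model: str = "meta/llama-3.1-8b-instruct") -> bytes:
    """Build the JSON body for an OpenAI-style chat completion call."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }).encode("utf-8")

def ask(prompt: str) -> str:
    """POST the prompt to the local NIM endpoint and return the reply text."""
    req = urllib.request.Request(
        NIM_URL,
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

The same request shape works against the unit in headless mode from another machine on the network, simply by replacing `localhost` with the DGX Spark's address.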

The system can be used in desktop mode (connect monitor, keyboard, and mouse) or headless server mode for remote SSH and API access — ideal for offloading AI workloads from a laptop or acting as a local inference endpoint.

Advanced Connectivity – Built for AI Workflows and Multi-System Clustering

Despite its ultra-compact 150 x 150 x 50.5 mm footprint, the DGX Spark packs exceptional I/O:

  • ConnectX-7 Smart NIC with 2x QSFP ports (200GbE) — connect two DGX Sparks together for dual-unit AI clustering
  • 1x RJ-45 Ethernet (10 GbE) for high-speed wired networking
  • Wi-Fi 7 (802.11be) for the fastest available wireless connectivity
  • Bluetooth 5.4 for peripherals
  • 4x USB Type-C (one port supports power delivery)
  • 1x HDMI 2.1a for 4K/8K display output with multichannel audio
  • 1x NVENC + 1x NVDEC hardware video encode/decode engines
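To put the 200GbE ConnectX-7 link in perspective for dual-unit clustering, a quick idealized calculation (peak line rate; real-world throughput will be lower) shows how fast model weights can move between two units:

```python
# Idealized transfer time over one 200GbE QSFP port of the ConnectX-7.
# Figures are peak line rate; protocol overhead will reduce this in practice.
LINK_GBPS = 200                  # gigabits per second, per port
link_gb_per_s = LINK_GBPS / 8    # 25.0 GB/s

shard_gb = 100                   # e.g. half of a 405B-parameter FP4 model
seconds = shard_gb / link_gb_per_s
print(f"{seconds:.0f} s")        # 4 s at ideal line rate
```

Moving a 100GB model shard between the two units takes on the order of seconds rather than minutes, which is what makes the dual-Spark 256GB memory pool practical for large-model work.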

Compact, Power-Efficient Design – Data Center Performance Without the Infrastructure

The DGX Spark is engineered for desktop environments. At just 1.2 kg (2.6 lbs) and with a footprint no larger than a thick hardcover book, it runs from a standard wall outlet using the included 240W power adapter. The GB10 SoC has a Thermal Design Power (TDP) of just 140W, making it highly power-efficient for the AI performance delivered. The integrated thermal management system maintains reliable operation between 5°C and 30°C, suitable for typical office and lab environments.
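The headline numbers above imply a simple efficiency figure, sketched below from the peak rated values (sustained real-world throughput will be lower):

```python
# Peak efficiency implied by the spec sheet: 1,000 TOPS of FP4 compute
# (with sparsity) against a 140 W SoC TDP.
PEAK_TOPS_FP4 = 1000   # rated peak, with sparsity
SOC_TDP_W = 140        # GB10 Superchip thermal design power

tops_per_watt = PEAK_TOPS_FP4 / SOC_TDP_W
print(f"{tops_per_watt:.1f} TOPS/W")
```

That works out to roughly 7 TOPS per watt at peak, which is why the system can deliver this class of AI compute from a wall outlet rather than data center power infrastructure.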

Ideal Use Cases – Who Is the NVIDIA DGX Spark Built For?

The NVIDIA DGX Spark Founders Edition is the right tool for:

  • AI Developers & Engineers — Prototype, test, and iterate LLMs and generative AI apps locally without cloud API costs or latency
  • Researchers & Data Scientists — Fine-tune state-of-the-art models (DeepSeek, Llama, Mistral, Qwen, Gemma) on private datasets with full data sovereignty
  • Enterprise AI Teams — Develop and validate AI solutions on-premises before deploying to DGX Cloud or data center infrastructure
  • Robotics & Edge AI Developers — Build intelligent systems using NVIDIA Isaac, Metropolis, and Holoscan frameworks locally
  • Universities & AI Labs — Deliver supercomputer-class AI compute to individual students and researchers without shared cluster queues

Seamless Path from Desktop to Data Center

One of the DGX Spark's most compelling advantages is deployment portability. Because it runs the same NVIDIA AI platform software stack as DGX Cloud, DGX Station, and NVIDIA-accelerated data centers, models developed on the DGX Spark can be migrated to production infrastructure with virtually no code changes. This makes it the ideal prototyping and validation platform for enterprise AI pipelines — develop at your desk, deploy at scale.

Main Specifications
Product Name NVIDIA DGX Spark Founders Edition
Model DGXSPARKFOUNEDITUK
Superchip NVIDIA GB10 Grace Blackwell Superchip
GPU Architecture NVIDIA Blackwell
GPU Cores (CUDA) 6,144 CUDA Cores
Tensor Cores 5th Generation with FP4 Support
RT Cores 4th Generation
AI Performance Up to 1 PFLOP (1,000 TOPS) at FP4 Precision with Sparsity
CPU 20-core Arm Processor (10x Cortex-X925 + 10x Cortex-A725)
RAM 128GB LPDDR5x Unified System Memory
Memory Form Factor Integrated / On-Package
Memory Bus Width 256-bit (16 Channels)
Memory Speed 4266 MHz
Memory Bandwidth 273 GB/s
Storage 4TB NVMe M.2 2280 SSD with Hardware Self-Encryption
Internal Drive Form Factor M.2 2280 (22mm x 80mm)
Video Encode / Decode 1x NVENC / 1x NVDEC
Display Output 1x HDMI 2.1a (with Multichannel Audio)
USB Ports 4x USB Type-C (1x supports Power Delivery)
Ethernet 1x RJ-45 (10 GbE)
High-Speed Networking ConnectX-7 Smart NIC with 2x QSFP Ports (200GbE)
Wi-Fi Wi-Fi 7 (802.11be)
Bluetooth Bluetooth 5.4
Operating System NVIDIA DGX OS (Ubuntu 24.04 LTS)
AI Software Stack CUDA, cuDNN, TensorRT, TensorRT-LLM, NIM, NeMo Agent Toolkit, NGC, PyTorch, Triton Inference Server
Max AI Model Size (Standalone) Up to 200 Billion Parameters
Max AI Model Size (Dual-Spark) Up to 405 Billion Parameters
Cooling Technology Active Air Cooling (Integrated Fan)
Power Supply 240W External Power Supply (Included)
SoC TDP 140W (GB10 Superchip)
Dimensions 150mm (L) x 150mm (W) x 50.5mm (H)
Weight 1.2 kg (2.6 lbs)
Form Factor Small Form Factor (SFF) Desktop
Operating Temperature 5°C to 30°C (41°F to 86°F)
Operating Humidity 10% to 90% (Non-Condensing)
Operating Altitude Up to 3,000 meters (9,843 feet)

Our Capabilities

Zortex Computer teams stand ready to help enterprises solve complex technology challenges.

Quality Testing

All deployed systems follow our standard ISO 9001 procedures and go through rigorous quality testing to ensure high reliability in the field.

Data Center and Infrastructure Design

Our team of engineers will take a structured, planned approach specific to your business needs and design a solution based on your infrastructure requirements and goals.

Committed Team

Our team of engineers and project managers collaborates with you to deliver the results you need.

Global Supply Logistics

We deploy our solutions to over 30 countries worldwide.
