GPU2020 Hyperplane-16
GPU Server with 16 Tesla V100s
Up to sixteen Tesla V100 GPUs with NVLink. Save up to 90% by moving your deep learning workloads from the cloud to on-premise hardware.
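The savings claim comes down to simple break-even arithmetic: compare the one-time server price against recurring per-GPU-hour cloud billing. The sketch below is purely illustrative; the cloud rate is a hypothetical assumption, not a quoted price from any provider.

```python
# Illustrative cloud-vs-on-premise cost comparison.
# CLOUD_RATE_PER_GPU_HOUR is a hypothetical assumed rate, not a quote.
CLOUD_RATE_PER_GPU_HOUR = 3.00   # assumed on-demand V100 hourly rate (USD)
GPUS = 16                        # Hyperplane-16 GPU count
SERVER_PRICE = 223_601           # Basic configuration list price (USD)
HOURS_PER_YEAR = 24 * 365

# Annual cost of renting the equivalent GPU capacity in the cloud
cloud_cost_per_year = CLOUD_RATE_PER_GPU_HOUR * GPUS * HOURS_PER_YEAR

# Years of continuous cloud usage that equal the server's purchase price
breakeven_years = SERVER_PRICE / cloud_cost_per_year

print(f"Cloud cost for 16 GPUs, 1 year: ${cloud_cost_per_year:,.0f}")
print(f"Break-even vs. Basic config: {breakeven_years:.2f} years")
```

Under these assumed numbers the hardware pays for itself in well under a year of continuous use; actual savings depend on utilization, negotiated cloud rates, and operating costs.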

Trusted by thousands of customers worldwide
Researchers and engineers at universities, start-ups, Fortune 500s, public agencies, and national labs use GPU2020 to power their artificial intelligence workloads.




Top Configurations
Optimized configurations that won't bottleneck
Our top configurations are benchmarked and tuned to eliminate CPU, memory, and storage bottlenecks when running deep learning workloads.
| | Basic | Premium | Max |
|---|---|---|---|
| GPUs | 16x Tesla V100 | 16x Tesla V100 | 16x Tesla V100 |
| Interconnect | 16-Way NVLink | 16-Way NVLink | 16-Way NVLink |
| CPUs | 2x Xeon Platinum 8268 (24 Cores) | 2x Xeon Platinum 8268 (24 Cores) | 2x Xeon Platinum 8280M (28 Cores) |
| System Memory | 768 GB | 1.5 TB | 3 TB |
| OS Drive | 1.92 TB NVMe | 1.92 TB NVMe | 1.92 TB NVMe |
| Data Drives | Customizable | Customizable | Customizable |
| Storage Networking | - | Dual-Port 100 Gb/sec Ethernet/IB | Dual-Port 100 Gb/sec Ethernet/IB |
| Multi-Node Training | - | 8x 100 Gb/sec IB | 8x 100 Gb/sec IB |
| Price | $223,601 | $240,000 | $276,360 |

Academic discounts available on all configurations.
Custom

- 4x or 8x Tesla V100
- NVLink
- Any Processor
- Up to 768 GB RAM
- Fully Customizable Storage
- 100 Gbps InfiniBand

Phone: +86-18677555856
Email: enterprise@lambdalabs.com
Live Chat