With the launch of their Ampere architecture and new A100 accelerator barely a month behind them, NVIDIA this morning is announcing the PCIe version of that accelerator as part of the start of the now-virtual ISC Digital conference for high performance computing. The more straight-laced counterpart to NVIDIA’s flagship SXM4 version of the A100 accelerator, the PCIe version of the A100 is designed to offer A100 in a more traditional form factor for customers who need something they can plug into standardized servers. Overall the PCIe A100 offers the same peak performance as the SXM4 A100; however, with a lower 250 Watt TDP, sustained real-world performance won’t be quite as high.
The obligatory counterpart to NVIDIA’s SXM form factor accelerators, NVIDIA’s PCIe accelerators serve to flesh out the other side of NVIDIA’s accelerator lineup. While NVIDIA would gladly sell everyone SXM-based accelerators – which would include the pricey NVIDIA HGX carrier board – there are still numerous customers who need to be able to use GPU accelerators in standard, PCIe-based rackmount servers. And for smaller workloads, some customers simply don’t need the kind of 4-way and higher scalability offered by SXM form factor accelerators. So with their PCIe cards, NVIDIA can serve the rest of the accelerator market that their SXM products can’t reach.
The PCIe A100, in turn, is a full-fledged A100, just in a different form factor and with a more appropriate TDP. In terms of peak performance, the PCIe A100 is just as fast as its SXM4 counterpart; NVIDIA this time isn’t shipping this as a cut-down configuration with lower clockspeeds or fewer functional blocks than the flagship SXM4 version. As a result the PCIe card brings everything A100 offers to the table, with the same heavy focus on tensor operations, including the new higher-precision TF32 and FP64 tensor formats, as well as even faster integer inference.
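To make that TF32 focus concrete, below is a minimal, hypothetical sketch – not NVIDIA sample code – of opting an ordinary FP32 matrix multiply into TF32 tensor-core math via cuBLAS on CUDA 11. The matrix size and fill values are placeholders; the one load-bearing line is the `cublasSetMathMode` call.

```c
// Hedged sketch: routing a plain FP32 GEMM through A100's TF32 tensor cores
// via cuBLAS. Assumes CUDA 11+; size and data are illustrative placeholders.
#include <stdlib.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void) {
    const int n = 1024;                              // square matrices for simplicity
    const size_t bytes = (size_t)n * n * sizeof(float);
    float *hA = (float*)malloc(bytes), *hB = (float*)malloc(bytes);
    for (int i = 0; i < n * n; i++) { hA[i] = 1.0f; hB[i] = 2.0f; }

    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    // Opt in to TF32: inputs and outputs stay FP32, but the multiplies run
    // on the tensor cores with TF32's reduced mantissa. Remove this line and
    // the same GEMM runs at the ordinary FP32 CUDA-core rate instead.
    cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);

    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);
    cudaDeviceSynchronize();

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB);
    return 0;
}
```

The appeal of TF32 is precisely that it works this way: existing FP32 GEMM calls can be steered onto the tensor cores without changing any data types, which is a big part of why NVIDIA is leaning on it so heavily for A100.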
NVIDIA Accelerator Specification Comparison

| | A100 (PCIe) | A100 (SXM4) | V100 (PCIe) | P100 (PCIe) |
|---|---|---|---|---|
| FP32 CUDA Cores | 6912 | 6912 | 5120 | 3584 |
| Boost Clock | 1.41GHz | 1.41GHz | 1.38GHz | 1.3GHz |
| Memory Clock | 2.4Gbps HBM2 | 2.4Gbps HBM2 | 1.75Gbps HBM2 | 1.4Gbps HBM2 |
| Memory Bus Width | 5120-bit | 5120-bit | 4096-bit | 4096-bit |
| Memory Bandwidth | 1.6TB/sec | 1.6TB/sec | 900GB/sec | 720GB/sec |
| VRAM | 40GB | 40GB | 16GB/32GB | 16GB |
| Single Precision | 19.5 TFLOPs | 19.5 TFLOPs | 14.1 TFLOPs | 9.3 TFLOPs |
| Double Precision | 9.7 TFLOPs (1/2 FP32 rate) | 9.7 TFLOPs (1/2 FP32 rate) | 7 TFLOPs (1/2 FP32 rate) | 4.7 TFLOPs (1/2 FP32 rate) |
| INT8 Tensor | 624 TOPs | 624 TOPs | N/A | N/A |
| FP16 Tensor | 312 TFLOPs | 312 TFLOPs | 112 TFLOPs | N/A |
| TF32 Tensor | 156 TFLOPs | 156 TFLOPs | N/A | N/A |
| Relative Performance (SXM Version) | 90% | 100% | N/A | N/A |
| Interconnect | NVLink 3, 6 Links? (300GB/sec?) | NVLink 3, 12 Links (600GB/sec) | NVLink 2, 4 Links (200GB/sec) | NVLink 1, 4 Links (160GB/sec) |
| GPU | A100 (826mm²) | A100 (826mm²) | GV100 (815mm²) | GP100 (610mm²) |
| Transistor Count | 54.2B | 54.2B | 21.1B | 15.3B |
| TDP | 250W | 400W | 250W | 300W |
| Manufacturing Process | TSMC 7N | TSMC 7N | TSMC 12nm FFN | TSMC 16nm FinFET |
| Interface | PCIe 4.0 | SXM4 | PCIe 3.0 | PCIe 3.0 |
| Architecture | Ampere | Ampere | Volta | Pascal |
But because the dual-slot add-in card form factor is designed for lower-TDP products, offering less room for cooling and typically less access to power as well, the PCIe version of the A100 does have to ratchet down its TDP from 400W to 250W. That’s a sizable 38% reduction in power consumption, and as a result the PCIe A100 isn’t going to be able to match the sustained performance figures of its SXM4 counterpart – that’s the advantage of going with a form factor with higher power and cooling budgets. All told, the PCIe version of the A100 should deliver about 90% of the performance of the SXM4 version on single-GPU workloads; put another way, the card retains roughly 90% of the performance at 62.5% of the power, which for such a big drop in TDP is not a bad trade-off.
And on this note, I should give NVIDIA credit where credit is due: unlike with the PCIe version of the V100 accelerator, NVIDIA is doing a much better job of documenting these performance differences. This time around NVIDIA is explicitly noting the 90% figure in their specification sheets and related marketing materials, so there should be a lot less confusion about how the PCIe version of the accelerator compares to the SXM version.
Other than the form factor and TDP changes, the only other notable deviation for the PCIe A100 from the SXM version is the number of NVLink-connected GPUs supported. For their PCIe card NVIDIA is once again using NVLink bridges connected across the top of A100 cards, allowing for two (and only two) cards to be linked together. NVIDIA’s product sheet doesn’t list the total bandwidth available, but as the PCIe V100 supported up to 100GB/sec in each direction using two links – 50GB/sec per link – the PCIe A100 and its three NVLink connectors should be able to do 150GB/sec, if not more.
Otherwise the PCIe A100 comes with the usual trimmings of the form factor. The card is entirely passively cooled, designed to be used with servers with powerful chassis fans. And though not pictured in NVIDIA’s official shots, there are sockets for PCIe power connectors. Meanwhile, with the reduced use of NVLink in this version of the card, A100’s native PCIe 4.0 support will undoubtedly be of increased importance here, underscoring the advantage that an AMD Epyc + NVIDIA A100 pairing has right now, since AMD is the only x86 server vendor with PCIe 4.0 support.
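For anyone deploying these cards who wants to confirm the link actually negotiated PCIe 4.0, NVML exposes the current link generation and width. The following is a rough sketch under stated assumptions – it queries GPU 0 only, and the build line and file name are mine, not NVIDIA’s:

```c
// Hedged sketch: query the negotiated PCIe link generation and width via
// NVML, e.g. to confirm an A100 came up at Gen4 x16 on an Epyc host.
// Assumed build line (adjust paths for your CUDA install):
//   gcc pcie_check.c -o pcie_check -lnvidia-ml
#include <stdio.h>
#include <nvml.h>

int main(void) {
    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "NVML init failed\n");
        return 1;
    }
    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        unsigned int gen = 0, width = 0;
        nvmlDeviceGetName(dev, name, NVML_DEVICE_NAME_BUFFER_SIZE);
        nvmlDeviceGetCurrPcieLinkGeneration(dev, &gen);
        nvmlDeviceGetCurrPcieLinkWidth(dev, &width);
        // On a PCIe 4.0 platform this should report Gen4; in a PCIe 3.0
        // server the same card will negotiate down to Gen3.
        printf("%s: PCIe Gen%u x%u\n", name, gen, width);
    }
    nvmlShutdown();
    return 0;
}
```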
Wrapping things up, while NVIDIA isn’t announcing specific pricing or availability information today, the new PCIe A100 cards should be shipping soon. The wider compatibility of the PCIe card has helped NVIDIA to line up over 50 server wins at this point, with 30 of those servers set to ship this summer.
from AnandTech https://ift.tt/3dny3vF