gh200: ARMv8 Neoverse-V2 testing with a Pegatron JIMBO P4352 (00022432 BIOS) and NVIDIA GH200 144G HBM3e 143GB on Ubuntu 24.04 via the Phoronix Test Suite.

Runs a and b used an identical configuration:

  Processor: ARMv8 Neoverse-V2 @ 3.47GHz (72 Cores)
  Motherboard: Pegatron JIMBO P4352 (00022432 BIOS)
  Memory: 1 x 480GB LPDDR5-6400MT/s NVIDIA 699-2G530-0236-RC1
  Disk: 1000GB CT1000T700SSD3
  Graphics: NVIDIA GH200 144G HBM3e 143GB
  Network: 2 x Intel X550
  OS: Ubuntu 24.04
  Kernel: 6.8.0-45-generic-64k (aarch64)
  Display Driver: NVIDIA
  OpenCL: OpenCL 3.0 CUDA 12.6.65
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1200

Results, one line per test. "(lower)" = lower is better, "(higher)" = higher is better.

  Timed Linux Kernel Compilation 6.8 - Build: defconfig - Seconds (lower): a 66.71 / b 73.41
  ONNX Runtime 1.19 - Model: ZFNet-512, Device: CPU, Executor: Parallel - Inferences Per Second (higher): a 45.23 / b 42.29
  GraphicsMagick - Operation: HWB Color Space - Iterations Per Minute (higher): a 430 / b 408
  Mobile Neural Network 2.9.b11b7037d - Model: mobilenet-v1-1.0 - ms (lower): a 1.793 / b 1.883
  XNNPACK 2cd86b - Model: FP32MobileNetV2 - us (lower): a 967 / b 925
  GraphicsMagick 1.3.43 - Operation: Swirl - Iterations Per Minute (higher): a 657 / b 685
  ONNX Runtime 1.19 - Model: T5 Encoder, Device: CPU, Executor: Standard - Inferences Per Second (higher): a 390.07 / b 375.38
  ONNX Runtime 1.19 - Model: yolov4, Device: CPU, Executor: Standard - Inferences Per Second (higher): a 5.14685 / b 4.95662
  PyPerformance 1.11 - Benchmark: asyncio_tcp_ssl - Milliseconds (lower): a 1.49 / b 1.44
  GraphicsMagick 1.3.43 - Operation: Noise-Gaussian - Iterations Per Minute (higher): a 301 / b 291
  Mobile Neural Network 2.9.b11b7037d - Model: mobilenetV3 - ms (lower): a 1.134 / b 1.097
  Build2 0.17 - Time To Compile - Seconds (lower): a 84.79 / b 87.52
  GraphicsMagick - Operation: Noise-Gaussian - Iterations Per Minute (higher): a 217 / b 223
  GraphicsMagick - Operation: Swirl - Iterations Per Minute (higher): a 605 / b 619
  XNNPACK 2cd86b - Model: QU8MobileNetV3Small - us (lower): a 1083 / b 1108
  GraphicsMagick 1.3.43 - Operation: HWB Color Space - Iterations Per Minute (higher): a 656 / b 671
  XNNPACK 2cd86b - Model: FP16MobileNetV3Large - us (lower): a 1226 / b 1199
  x265 - Video Input: Bosphorus 1080p - Frames Per Second (higher): a 12.61 / b 12.35
  XNNPACK 2cd86b - Model: QU8MobileNetV3Large - us (lower): a 1484 / b 1513
  Mobile Neural Network 2.9.b11b7037d - Model: SqueezeNetV1.0 - ms (lower): a 3.396 / b 3.461
  XNNPACK 2cd86b - Model: QU8MobileNetV2 - us (lower): a 945 / b 963
  XNNPACK 2cd86b - Model: FP16MobileNetV3Small - us (lower): a 881 / b 866
  Timed Linux Kernel Compilation 6.8 - Build: allmodconfig - Seconds (lower): a 285.13 / b 289.83
  XNNPACK 2cd86b - Model: FP32MobileNetV3Small - us (lower): a 945 / b 930
  Mobile Neural Network 2.9.b11b7037d - Model: squeezenetv1.1 - ms (lower): a 1.824 / b 1.853
  ONNX Runtime 1.19 - Model: fcn-resnet101-11, Device: CPU, Executor: Parallel - Inferences Per Second (higher): a 0.582902 / b 0.574063
  GraphicsMagick 1.3.43 - Operation: Rotate - Iterations Per Minute (higher): a 331 / b 326
  7-Zip Compression - Test: Compression Rating - MIPS (higher): a 393523 / b 398821
  XNNPACK 2cd86b - Model: FP16MobileNetV2 - us (lower): a 840 / b 829
  ONNX Runtime 1.19 - Model: super-resolution-10, Device: CPU, Executor: Parallel - Inferences Per Second (higher): a 18.65 / b 18.41
  ONNX Runtime 1.19 - Model: CaffeNet 12-int8, Device: CPU, Executor: Parallel - Inferences Per Second (higher): a 321.24 / b 317.28
  ONNX Runtime 1.19 - Model: fcn-resnet101-11, Device: CPU, Executor: Standard - Inferences Per Second (higher): a 0.462943 / b 0.457263
  ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12, Device: CPU, Executor: Standard - Inferences Per Second (higher): a 0.303222 / b 0.306833
  Blender 4.0.2 - Blend File: Fishy Cat, Compute: CPU-Only - Seconds (lower): a 73.02 / b 72.18
  Mobile Neural Network 2.9.b11b7037d - Model: inception-v3 - ms (lower): a 13.69 / b 13.85
  ONNX Runtime 1.19 - Model: yolov4, Device: CPU, Executor: Parallel - Inferences Per Second (higher): a 5.80711 / b 5.87309
  POV-Ray - Trace Time - Seconds (lower): a 7.786 / b 7.869
  Blender 4.0.2 - Blend File: BMW27, Compute: CPU-Only - Seconds (lower): a 38.06 / b 38.43
  PyPerformance 1.11 - Benchmark: gc_collect - Milliseconds (lower): a 1.08 / b 1.07
  GraphicsMagick 1.3.43 - Operation: Resizing - Iterations Per Minute (higher): a 442 / b 438
  simdjson 3.10 - Throughput Test: LargeRandom - GB/s (higher): a 1.15 / b 1.14
  ONNX Runtime 1.19 - Model: CaffeNet 12-int8, Device: CPU, Executor: Standard - Inferences Per Second (higher): a 1262.14 / b 1271.75
  GraphicsMagick 1.3.43 - Operation: Sharpen - Iterations Per Minute (higher): a 411 / b 408
  simdjson 3.10 - Throughput Test: TopTweet - GB/s (higher): a 4.14 / b 4.11
  ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8, Device: CPU, Executor: Parallel - Inferences Per Second (higher): a 161.33 / b 162.49
  GraphicsMagick - Operation: Resizing - Iterations Per Minute (higher): a 282 / b 284
  ONNX Runtime 1.19 - Model: super-resolution-10, Device: CPU, Executor: Standard - Inferences Per Second (higher): a 159.07 / b 160.17
  ONNX Runtime 1.19 - Model: ZFNet-512, Device: CPU, Executor: Standard - Inferences Per Second (higher): a 210.12 / b 211.54
  PyPerformance 1.11 - Benchmark: pathlib - Milliseconds (lower): a 15.5 / b 15.4
  simdjson 3.10 - Throughput Test: Kostya - GB/s (higher): a 3.11 / b 3.13
  PyPerformance 1.11 - Benchmark: nbody - Milliseconds (lower): a 64.5 / b 64.9
  ONNX Runtime 1.19 - Model: T5 Encoder, Device: CPU, Executor: Parallel - Inferences Per Second (higher): a 109.24 / b 108.60
  GraphicsMagick - Operation: Sharpen - Iterations Per Minute (higher): a 171 / b 170
  Mobile Neural Network 2.9.b11b7037d - Model: resnet-v2-50 - ms (lower): a 11.34 / b 11.27
  PyPerformance 1.11 - Benchmark: json_loads - Milliseconds (lower): a 17.5 / b 17.4
  x265 - Video Input: Bosphorus 4K - Frames Per Second (higher): a 8.81 / b 8.86
  7-Zip Compression 24.05 - Test: Decompression Rating - MIPS (higher): a 420524 / b 418162
  GraphicsMagick 1.3.43 - Operation: Enhanced - Iterations Per Minute (higher): a 359 / b 361
  LeelaChessZero 0.31.1 - Backend: Eigen - Nodes Per Second (higher): a 360 / b 362
  PyPerformance 1.11 - Benchmark: python_startup - Milliseconds (lower): a 18.7 / b 18.8
  PyPerformance 1.11 - Benchmark: pickle_pure_python - Milliseconds (lower): a 205 / b 204
  simdjson 3.10 - Throughput Test: DistinctUserID - GB/s (higher): a 4.16 / b 4.18
  Blender 4.0.2 - Blend File: Pabellon Barcelona, Compute: CPU-Only - Seconds (lower): a 154.46 / b 153.73
  ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12, Device: CPU, Executor: Parallel - Inferences Per Second (higher): a 0.360196 / b 0.358509
  PyPerformance 1.11 - Benchmark: raytrace - Milliseconds (lower): a 217 / b 218
  PyPerformance 1.11 - Benchmark: go - Milliseconds (lower): a 98.2 / b 97.8
  PyPerformance 1.11 - Benchmark: asyncio_websockets - Milliseconds (lower): a 510 / b 508
  PyPerformance 1.11 - Benchmark: django_template - Milliseconds (lower): a 26.3 / b 26.2
  PyPerformance 1.11 - Benchmark: crypto_pyaes - Milliseconds (lower): a 54.8 / b 55.0
  Etcpak 2.0 - Benchmark: Multi-Threaded, Configuration: ETC2 - Mpx/s (higher): a 471.19 / b 469.59
  Timed LLVM Compilation 16.0 - Build System: Unix Makefiles - Seconds (lower): a 276.93 / b 277.75
  GraphicsMagick - Operation: Enhanced - Iterations Per Minute (higher): a 351 / b 350
  PyPerformance 1.11 - Benchmark: async_tree_io - Milliseconds (lower): a 748 / b 750
  PyPerformance 1.11 - Benchmark: regex_compile - Milliseconds (lower): a 82.3 / b 82.1
  Mobile Neural Network 2.9.b11b7037d - Model: nasnet - ms (lower): a 5.008 / b 4.996
  C-Ray 2.0 - Resolution: 5K, Rays Per Pixel: 16 - Seconds (lower): a 36.21 / b 36.13
  Epoch 4.19.4 - Epoch3D Deck: Cone - Seconds (lower): a 188.20 / b 187.79
  GROMACS 2024 - Implementation: MPI CPU, Input: water_GMX50_bare - Ns Per Day (higher): a 6.001 / b 5.990
  WarpX 24.10 - Input: Plasma Acceleration - Seconds (lower): a 20.38 / b 20.41
  7-Zip Compression - Test: Decompression Rating - MIPS (higher): a 418819 / b 418365
  7-Zip Compression 24.05 - Test: Compression Rating - MIPS (higher): a 384775 / b 384421
  WarpX 24.10 - Input: Uniform Plasma - Seconds (lower): a 16.90 / b 16.89
  ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8, Device: CPU, Executor: Standard - Inferences Per Second (higher): a 317.87 / b 317.60
  BYTE Unix Benchmark 5.1.3-git - Computational Test: Dhrystone 2 - LPS (higher): a 4998587529.8 / b 4994389993.2
  C-Ray 2.0 - Resolution: 4K, Rays Per Pixel: 16 - Seconds (lower): a 20.36 / b 20.34
  BYTE Unix Benchmark 5.1.3-git - Computational Test: Pipe - LPS (higher): a 202565282.2 / b 202436523.6
  GROMACS - Input: water_GMX50_bare - Ns Per Day (higher): a 7.156 / b 7.159
  C-Ray 2.0 - Resolution: 1080p, Rays Per Pixel: 16 - Seconds (lower): a 5.195 / b 5.197
  Blender 4.0.2 - Blend File: Barbershop, Compute: CPU-Only - Seconds (lower): a 381.45 / b 381.55
  Blender 4.0.2 - Blend File: Classroom, Compute: CPU-Only - Seconds (lower): a 78.37 / b 78.36
  BYTE Unix Benchmark 5.1.3-git - Computational Test: Whetstone Double - MWIPS (higher): a 721978.0 / b 721932.3
  Timed LLVM Compilation 16.0 - Build System: Ninja - Seconds (lower): a 175.03 / b 175.04
  BYTE Unix Benchmark 5.1.3-git - Computational Test: System Call - LPS (higher): a 145868649.3 / b 145872070.3
  PyPerformance 1.11 - Benchmark: xml_etree - Milliseconds (lower): a 45.8 / b 45.8
  PyPerformance 1.11 - Benchmark: float - Milliseconds (lower): a 56.8 / b 56.8
  PyPerformance 1.11 - Benchmark: chaos - Milliseconds (lower): a 47.4 / b 47.4
  XNNPACK 2cd86b - Model: FP32MobileNetV3Large - us (lower): a 1426 / b 1426
  Mobile Neural Network 2.9.b11b7037d - Model: MobileNetV2_224 - ms (lower): a 1.502 / b 1.502
  GraphicsMagick - Operation: Rotate - Iterations Per Minute (higher): a 209 / b 209
  simdjson 3.10 - Throughput Test: PartialTweets - GB/s (higher): a 4.06 / b 4.06
  ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12, Device: CPU, Executor: Standard - Inference Time Cost ms (lower): a 3298.19 / b 3259.10
  ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12, Device: CPU, Executor: Parallel - Inference Time Cost ms (lower): a 2776.33 / b 2789.32
  ONNX Runtime 1.19 - Model: super-resolution-10, Device: CPU, Executor: Standard - Inference Time Cost ms (lower): a 6.28553 / b 6.24111
  ONNX Runtime 1.19 - Model: super-resolution-10, Device: CPU, Executor: Parallel - Inference Time Cost ms (lower): a 53.63 / b 54.31
  ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8, Device: CPU, Executor: Standard - Inference Time Cost ms (lower): a 3.14523 / b 3.14788
  ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8, Device: CPU, Executor: Parallel - Inference Time Cost ms (lower): a 6.19726 / b 6.15294
  ONNX Runtime 1.19 - Model: fcn-resnet101-11, Device: CPU, Executor: Standard - Inference Time Cost ms (lower): a 2160.11 / b 2186.92
  ONNX Runtime 1.19 - Model: fcn-resnet101-11, Device: CPU, Executor: Parallel - Inference Time Cost ms (lower): a 1715.64 / b 1741.97
  ONNX Runtime 1.19 - Model: CaffeNet 12-int8, Device: CPU, Executor: Standard - Inference Time Cost ms (lower): a 0.791624 / b 0.785643
  ONNX Runtime 1.19 - Model: CaffeNet 12-int8, Device: CPU, Executor: Parallel - Inference Time Cost ms (lower): a 3.11158 / b 3.15037
  ONNX Runtime 1.19 - Model: T5 Encoder, Device: CPU, Executor: Standard - Inference Time Cost ms (lower): a 2.56003 / b 2.66013
  ONNX Runtime 1.19 - Model: T5 Encoder, Device: CPU, Executor: Parallel - Inference Time Cost ms (lower): a 9.15193 / b 9.20593
  ONNX Runtime 1.19 - Model: ZFNet-512, Device: CPU, Executor: Standard - Inference Time Cost ms (lower): a 4.75865 / b 4.72548
  ONNX Runtime 1.19 - Model: ZFNet-512, Device: CPU, Executor: Parallel - Inference Time Cost ms (lower): a 22.15 / b 23.64
  ONNX Runtime 1.19 - Model: yolov4, Device: CPU, Executor: Standard - Inference Time Cost ms (lower): a 194.36 / b 201.75
  ONNX Runtime 1.19 - Model: yolov4, Device: CPU, Executor: Parallel - Inference Time Cost ms (lower): a 172.23 / b 170.27
  Stockfish - Chess Benchmark - Nodes Per Second (higher): a 58496753 / b 69473429
  Stockfish 17 - Chess Benchmark - Nodes Per Second (higher): a 168428763 / b 188288587

The following tests were listed but reported no result:

  ONNX Runtime 1.19 - Model: Faster R-CNN R-50-FPN-int8, Device: CPU - Executor: Standard and Parallel
  ONNX Runtime 1.19 - Model: ArcFace ResNet-100, Device: CPU - Executor: Standard and Parallel
  ONNX Runtime 1.19 - Model: bertsquad-12, Device: CPU - Executor: Standard and Parallel
  ONNX Runtime 1.19 - Model: GPT-2, Device: CPU - Executor: Standard and Parallel
  Apache Cassandra 5.0 - Test: Writes
  PostgreSQL 17 - all listed combinations of Scaling Factor (1, 100, 1000), Clients (500, 800, 1000), and Mode (Read Only, Read Write)
  GROMACS 2024 - Implementation: NVIDIA CUDA GPU, Input: water_GMX50_bare
  LeelaChessZero 0.31.1 - Backend: BLAS
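With this many paired results, a signed percent delta is easier to scan than per-test bars. The sketch below (plain Python, no Phoronix Test Suite dependency; the shortened test names and the values are copied from the results above) orients every b-vs-a comparison so that a positive number means run b performed better, regardless of whether the metric is lower-is-better or higher-is-better:

```python
# Express a few of the a/b result pairs above as signed percent deltas.
# Positive output = run b performed better than run a.

# (short test name, run a value, run b value, higher_is_better)
# Values copied from the results above; names shortened for display.
results = [
    ("Kernel defconfig build (s)",        66.71,     73.41,     False),
    ("ONNX ZFNet-512 parallel (inf/s)",   45.23,     42.29,     True),
    ("LLVM Ninja build (s)",              175.03,    175.04,    False),
    ("Stockfish 17 (nodes/s)",            168428763, 188288587, True),
]

def delta_pct(a: float, b: float, higher_is_better: bool) -> float:
    """Percent change of run b relative to run a, sign-flipped for
    lower-is-better metrics so positive always means b did better."""
    change = (b - a) / a * 100.0
    return change if higher_is_better else -change

for name, a, b, hib in results:
    print(f"{name}: {delta_pct(a, b, hib):+.2f}% (b vs a)")
```

Run over the full table, this makes the outliers obvious: most pairs land within about +/-2% of each other, while the kernel defconfig build and the two Stockfish results differ by roughly 10% or more between the two runs.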