september
2 x AMD EPYC 9124 16-Core testing with an AMD Titanite_4G (RTI1007B BIOS) and ASPEED on Ubuntu 24.04 via the Phoronix Test Suite.

a, b, c:
  Processor: 2 x AMD EPYC 9124 16-Core @ 3.00GHz (32 Cores / 64 Threads)
  Motherboard: AMD Titanite_4G (RTI1007B BIOS)
  Chipset: AMD Device 14a4
  Memory: 1520GB
  Disk: 2 x 3201GB KIOXIA KCMYXVUG3T20
  Graphics: ASPEED
  Network: Broadcom NetXtreme BCM5720 PCIe
  OS: Ubuntu 24.04
  Kernel: 6.8.0-22-generic (x86_64)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1200

BYTE Unix Benchmark 5.1.3-git
Computational Test: Pipe
LPS > Higher Is Better
a . 59439547.8 |===============================================================
b . 59454086.9 |===============================================================
c . 59451441.2 |===============================================================

BYTE Unix Benchmark 5.1.3-git
Computational Test: Dhrystone 2
LPS > Higher Is Better
a . 2858800202.7 |=============================================================
b . 2862044532.7 |=============================================================
c . 2868334497.4 |=============================================================

BYTE Unix Benchmark 5.1.3-git
Computational Test: System Call
LPS > Higher Is Better
a . 47402205.6 |===============================================================
b . 47403559.0 |===============================================================
c . 47405630.6 |===============================================================

BYTE Unix Benchmark 5.1.3-git
Computational Test: Whetstone Double
MWIPS > Higher Is Better
a . 497991.1 |=================================================================
b . 497887.7 |=================================================================
c . 498175.6 |=================================================================
SVT-AV1 2.2
Encoder Mode: Preset 3 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 9.183 |====================================================================
b . 9.224 |====================================================================
c . 9.215 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 5 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 31.07 |===================================================================
b . 31.31 |====================================================================
c . 31.07 |===================================================================

SVT-AV1 2.2
Encoder Mode: Preset 8 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 64.14 |===================================================================
b . 64.63 |====================================================================
c . 64.05 |===================================================================

SVT-AV1 2.2
Encoder Mode: Preset 13 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 180.78 |====================================================
b . 231.12 |===================================================================
c . 183.83 |=====================================================

SVT-AV1 2.2
Encoder Mode: Preset 3 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 26.02 |====================================================================
b . 25.98 |====================================================================
c . 25.96 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 5 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 85.27 |====================================================================
b . 84.78 |====================================================================
c . 84.58 |===================================================================

SVT-AV1 2.2
Encoder Mode: Preset 8 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 181.01 |===================================================================
b . 181.86 |===================================================================
c . 181.10 |===================================================================

SVT-AV1 2.2
Encoder Mode: Preset 13 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 535.72 |==================================================================
b . 516.85 |================================================================
c . 542.12 |===================================================================

SVT-AV1 2.2
Encoder Mode: Preset 3 - Input: Beauty 4K 10-bit
Frames Per Second > Higher Is Better
a . 1.362 |====================================================================
b . 1.353 |====================================================================
c . 1.356 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 5 - Input: Beauty 4K 10-bit
Frames Per Second > Higher Is Better
a . 5.557 |====================================================================
b . 5.580 |====================================================================
c . 5.581 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 8 - Input: Beauty 4K 10-bit
Frames Per Second > Higher Is Better
a . 7.688 |===================================================================
b . 7.801 |====================================================================
c . 7.794 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 13 - Input: Beauty 4K 10-bit
Frames Per Second > Higher Is Better
a . 13.56 |====================================================================
b . 13.49 |====================================================================
c . 13.57 |====================================================================
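The preset number in the SVT-AV1 results above trades encoding speed against compression efficiency: higher presets encode faster. As a rough illustration only, one of these runs could be reproduced with the SvtAv1EncApp command-line tool, assuming the binary is on PATH and using "bosphorus_4k.y4m" as a stand-in for the actual test clip:

    import subprocess
    import time

    # Sketch only: time an SVT-AV1 encode at each preset used above.
    # Assumes SvtAv1EncApp (SVT-AV1 2.2) is installed; the input file
    # name is a placeholder, not the exact clip the test profile uses.
    for preset in (3, 5, 8, 13):
        start = time.time()
        subprocess.run(
            ["SvtAv1EncApp", "--preset", str(preset),
             "-i", "bosphorus_4k.y4m", "-b", f"out_p{preset}.ivf"],
            check=True,
        )
        print(f"preset {preset}: {time.time() - start:.1f}s wall time")

Note that at Preset 13 the three 4K runs above spread from 180.78 to 231.12 FPS, a much wider run-to-run variance than the slower presets show.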
Stockfish 17
Chess Benchmark
Nodes Per Second > Higher Is Better
a . 103967998 |=============================================================
b . 109820682 |================================================================
c . 100267108 |==========================================================

Opus Codec Encoding 1.5.2
WAV To Opus Encode
Seconds < Lower Is Better
a . 28.39 |====================================================================
b . 28.36 |====================================================================
c . 28.45 |====================================================================

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 106.11 |===================================================================
b . 105.96 |===================================================================
c . 105.54 |===================================================================

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 9.41348 |==================================================================
b . 9.42661 |==================================================================
c . 9.46428 |==================================================================

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 126.29 |=================================================================
b . 129.61 |===================================================================
c . 125.70 |=================================================================

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 7.91427 |==================================================================
b . 7.70951 |================================================================
c . 7.95180 |==================================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 6.59574 |==================================================================
b . 6.55595 |==================================================================
c . 6.55356 |==================================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 151.61 |===================================================================
b . 152.53 |===================================================================
c . 152.58 |===================================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 8.09700 |===============================================================
b . 8.53450 |==================================================================
c . 7.86624 |=============================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 123.50 |=================================================================
b . 117.17 |==============================================================
c . 127.12 |===================================================================
ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 112.66 |===================================================================
b . 106.32 |===============================================================
c . 109.57 |=================================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 8.87327 |==============================================================
b . 9.40247 |==================================================================
c . 9.12306 |================================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 124.43 |===================================================================
b . 117.76 |===============================================================
c . 123.52 |===================================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 8.03509 |==============================================================
b . 8.49004 |==================================================================
c . 8.09397 |===============================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 241.11 |==================================================================
b . 243.72 |===================================================================
c . 241.77 |==================================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 4.14530 |==================================================================
b . 4.10072 |=================================================================
c . 4.13414 |==================================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 252.92 |=================================================================
b . 248.82 |================================================================
c . 259.40 |===================================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 3.95228 |=================================================================
b . 4.01713 |==================================================================
c . 3.85380 |===============================================================

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 8.15695 |==================================================================
b . 8.02974 |=================================================================
c . 8.07196 |=================================================================

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 122.59 |==================================================================
b . 124.53 |===================================================================
c . 123.88 |===================================================================

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 10.22 |=====================================================
b . 10.24 |=====================================================
c . 13.19 |====================================================================

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 97.88 |====================================================================
b . 97.68 |====================================================================
c . 75.81 |=====================================================
ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 460.47 |============================================================
b . 478.21 |==============================================================
c . 512.85 |===================================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 2.16888 |==================================================================
b . 2.08856 |================================================================
c . 1.94744 |===========================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 503.82 |==========================================================
b . 581.40 |===================================================================
c . 572.80 |==================================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 1.98393 |==================================================================
b . 1.71945 |=========================================================
c . 1.74514 |==========================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 1.90195 |================================================================
b . 1.90447 |================================================================
c . 1.96289 |==================================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 525.77 |===================================================================
b . 525.08 |===================================================================
c . 509.45 |=================================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 4.19845 |==================================================================
b . 4.15428 |=================================================================
c . 4.03398 |===============================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 238.18 |================================================================
b . 240.71 |=================================================================
c . 247.89 |===================================================================
ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 21.62 |===================================================================
b . 21.67 |===================================================================
c . 21.90 |====================================================================

ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 46.26 |====================================================================
b . 46.14 |====================================================================
c . 45.65 |===================================================================

ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 30.61 |==================================================================
b . 27.33 |===========================================================
c . 31.40 |====================================================================

ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 32.66 |=============================================================
b . 36.59 |====================================================================
c . 31.84 |===========================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 191.03 |=================================================================
b . 184.12 |===============================================================
c . 196.43 |===================================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 5.23256 |================================================================
b . 5.42913 |==================================================================
c . 5.08908 |==============================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 248.92 |==================================================================
b . 251.24 |===================================================================
c . 243.18 |=================================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 4.01577 |================================================================
b . 3.97878 |================================================================
c . 4.11046 |==================================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 88.34 |==================================================================
b . 90.56 |====================================================================
c . 88.49 |==================================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 11.32 |====================================================================
b . 11.04 |==================================================================
c . 11.30 |====================================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 96.62 |====================================================================
b . 92.55 |=================================================================
c . 95.57 |===================================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 10.35 |=================================================================
b . 10.80 |====================================================================
c . 10.46 |==================================================================
ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 1.17036 |================================================================
b . 1.19314 |=================================================================
c . 1.21156 |==================================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 854.43 |===================================================================
b . 838.12 |==================================================================
c . 825.37 |=================================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 1.35255 |========================================
b . 1.43669 |===========================================
c . 2.20950 |==================================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 739.34 |===================================================================
b . 696.04 |===============================================================
c . 452.59 |=========================================

ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 24.88 |====================================================================
b . 24.34 |==================================================================
c . 24.90 |====================================================================

ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 40.19 |===================================================================
b . 41.08 |====================================================================
c . 40.16 |==================================================================

ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 40.58 |====================================================================
b . 40.68 |====================================================================
c . 39.97 |===================================================================

ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 24.64 |===================================================================
b . 24.58 |===================================================================
c . 25.02 |====================================================================
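The Parallel/Standard split in the ONNX Runtime results corresponds to the runtime's two execution modes, ORT_PARALLEL and ORT_SEQUENTIAL. A minimal sketch of how they are selected through the Python API, assuming onnxruntime 1.19 is installed and with "model.onnx" as a placeholder for any of the models tested:

    import numpy as np
    import onnxruntime as ort

    # "Parallel" executor above = ORT_PARALLEL; "Standard" = ORT_SEQUENTIAL.
    opts = ort.SessionOptions()
    opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL
    sess = ort.InferenceSession("model.onnx", sess_options=opts,
                                providers=["CPUExecutionProvider"])

    # Feed zero-filled tensors for each declared input; dynamic dimensions
    # are pinned to 1 and float32 is assumed for illustration only.
    feeds = {
        i.name: np.zeros([d if isinstance(d, int) else 1 for d in i.shape],
                         dtype=np.float32)
        for i in sess.get_inputs()
    }
    outputs = sess.run(None, feeds)

Sequential mode runs the graph's operators one at a time (each operator may still use multiple threads internally), while parallel mode also runs independent operators concurrently, which is why the two executors can rank differently from model to model above.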
Whisperfile 20Aug24
Model Size: Tiny
Seconds < Lower Is Better
a . 48.88 |=================================================================
b . 48.84 |=================================================================
c . 51.33 |====================================================================

Whisperfile 20Aug24
Model Size: Small
Seconds < Lower Is Better
a . 153.82 |===================================================================
b . 153.25 |===================================================================
c . 151.73 |==================================================================

Whisperfile 20Aug24
Model Size: Medium
Seconds < Lower Is Better
a . 346.07 |==================================================================
b . 349.98 |===================================================================
c . 340.93 |=================================================================

Valkey 8.0
Test: GET - Parallel Connections: 500
Requests Per Second > Higher Is Better
a . 624061.31 |================================================================

Valkey 8.0
Test: GET - Parallel Connections: 800
Requests Per Second > Higher Is Better
a . 656682.62 |================================================================

Valkey 8.0
Test: SET - Parallel Connections: 500
Requests Per Second > Higher Is Better
a . 180276.22 |================================================================

Valkey 8.0
Test: SET - Parallel Connections: 800
Requests Per Second > Higher Is Better
a . 175215.25 |================================================================

Valkey 8.0
Test: HSET - Parallel Connections: 500
Requests Per Second > Higher Is Better
a . 198757.73 |================================================================

No results were recorded for the remaining Valkey 8.0 tests: GET - Parallel Connections: 1000; SET - Parallel Connections: 1000; HSET - Parallel Connections: 800 / 1000; INCR - Parallel Connections: 500 / 800 / 1000; LPOP - Parallel Connections: 500 / 800 / 1000; SADD - Parallel Connections: 500 / 800 / 1000; SPOP - Parallel Connections: 500 / 800 / 1000.
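"Parallel Connections" in the Valkey results is the number of concurrent client connections the benchmark holds open against the server. Valkey ships a valkey-benchmark utility (derived from redis-benchmark) where this maps to the -c flag; a minimal sketch driving it from Python, assuming a local Valkey server and valkey-benchmark on PATH, with the request count an arbitrary choice for illustration:

    import subprocess

    # Sketch only: run the request-rate tests above via valkey-benchmark.
    # "Parallel Connections" corresponds to -c; -n (total requests) and the
    # test selection are illustrative, not the test profile's exact settings.
    for test in ("get", "set", "hset", "incr", "lpop", "sadd", "spop"):
        for clients in (500, 800, 1000):
            result = subprocess.run(
                ["valkey-benchmark", "-t", test, "-c", str(clients),
                 "-n", "1000000", "--csv"],
                capture_output=True, text=True, check=True,
            )
            print(test, clients, result.stdout.strip())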