new week AMD Ryzen Threadripper PRO 5965WX 24-Cores testing with an ASUS Pro WS WRX80E-SAGE SE WIFI (1201 BIOS) and ASUS NVIDIA NV106 2GB on Ubuntu 23.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2403307-NE-NEWWEEK3884&grr&rdt .
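As a reproduction note (not part of the original export): a public OpenBenchmarking.org result like this one can normally be re-run locally by passing its result ID to the Phoronix Test Suite, e.g. "phoronix-test-suite benchmark 2403307-NE-NEWWEEK3884", which installs the same test profiles and appends a new run for side-by-side comparison. Exact behavior depends on the installed PTS version.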
new week - System Details (shared by runs a and b)

  Processor: AMD Ryzen Threadripper PRO 5965WX 24-Cores @ 3.80GHz (24 Cores / 48 Threads)
  Motherboard: ASUS Pro WS WRX80E-SAGE SE WIFI (1201 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 8 x 16GB DDR4-2133MT/s Corsair CMK32GX4M2E3200C16
  Disk: 2048GB SOLIDIGM SSDPFKKW020X7
  Graphics: ASUS NVIDIA NV106 2GB
  Audio: AMD Starship/Matisse
  Monitor: VA2431
  Network: 2 x Intel X550 + Intel Wi-Fi 6 AX200
  OS: Ubuntu 23.10
  Kernel: 6.5.0-26-generic (x86_64)
  Desktop: GNOME Shell 45.0
  Display Server: X Server + Wayland
  Display Driver: nouveau
  OpenGL: 4.3 Mesa 23.2.1-1ubuntu3
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

  Kernel Details: Transparent Huge Pages: madvise
  Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa008205
  Python Details: Python 3.11.6
  Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Vulnerable: Safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
new week - Result Overview (run a vs run b; units and benchmark versions are given with the individual results below)

  Test | a | b
  mysqlslap: 1024 | 7 | 7
  tensorflow: CPU - 256 - ResNet-50 | 19.48 | 19.43
  mysqlslap: 512 | 79 | 68
  tensorflow: CPU - 256 - GoogLeNet | 57.87 | 57.36
  blender: Barbershop - CPU-Only | 451.14 | 453.43
  tensorflow: CPU - 64 - ResNet-50 | 19.7 | 19.54
  mariadb: oltp_update_non_index - 768 | 50894 | 108525
  mariadb: oltp_update_non_index - 64 | 55705 | 40385
  mariadb: oltp_read_write - 512 | 73844 | 89036
  mariadb: oltp_write_only - 64 | 206145 | 163585
  mariadb: oltp_point_select - 512 | 24260 | 24448
  mariadb: oltp_point_select - 64 | 25236 | 24909
  mariadb: oltp_write_only - 768 | 166357 | 177622
  mariadb: oltp_update_non_index - 512 | 117854 | 110045
  mariadb: oltp_update_index - 768 | 103448 | 96743
  mariadb: oltp_write_only - 512 | 213913 | 153471
  mariadb: oltp_point_select - 768 | 23180 | 23536
  mariadb: oltp_read_write - 64 | 125688 | 120550
  mariadb: oltp_update_index - 64 | 68422 | 54103
  mariadb: oltp_read_only - 64 | 17560 | 17494
  mariadb: oltp_read_only - 768 | 16718 | 16736
  mariadb: oltp_read_only - 512 | 16961 | 16848
  mariadb: oltp_update_index - 512 | 119718 | 98978
  pytorch: CPU - 64 - Efficientnet_v2_l | 6.80 | 6.85
  pytorch: CPU - 16 - Efficientnet_v2_l | 6.82 | 6.96
  pytorch: CPU - 32 - Efficientnet_v2_l | 6.93 | 6.95
  pytorch: CPU - 256 - Efficientnet_v2_l | 6.90 | 6.92
  tensorflow: CPU - 256 - AlexNet | 152.96 | 152.53
  tensorflow: CPU - 32 - ResNet-50 | 20.17 | 19.99
  pytorch: CPU - 256 - ResNet-152 | 12.32 | 12.36
  pytorch: CPU - 16 - ResNet-152 | 12.30 | 12.26
  pytorch: CPU - 64 - ResNet-152 | 12.29 | 12.33
  pytorch: CPU - 32 - ResNet-152 | 12.27 | 12.38
  blender: Pabellon Barcelona - CPU-Only | 144.93 | 144.76
  blender: Classroom - CPU-Only | 122.94 | 123
  tensorflow: CPU - 64 - GoogLeNet | 58.15 | 59.03
  pytorch: CPU - 1 - Efficientnet_v2_l | 9.42 | 9.44
  tensorflow: CPU - 16 - ResNet-50 | 18.91 | 20.09
  pytorch: CPU - 1 - ResNet-152 | 15.64 | 15.63
  blender: Junkshop - CPU-Only | 66.05 | 66.69
  pytorch: CPU - 64 - ResNet-50 | 31.64 | 31.74
  pytorch: CPU - 256 - ResNet-50 | 31.90 | 31.54
  pytorch: CPU - 16 - ResNet-50 | 31.71 | 31.99
  pytorch: CPU - 32 - ResNet-50 | 31.91 | 31.89
  tensorflow: CPU - 32 - GoogLeNet | 61.27 | 60.8
  blender: Fishy Cat - CPU-Only | 56.74 | 57.21
  tensorflow: CPU - 64 - AlexNet | 134.28 | 133.67
  blender: BMW27 - CPU-Only | 46.92 | 47.27
  tensorflow: CPU - 32 - AlexNet | 118.38 | 117.25
  tensorflow: CPU - 16 - GoogLeNet | 60.85 | 60.8
  pytorch: CPU - 1 - ResNet-50 | 40.17 | 39.95
  tensorflow: CPU - 16 - AlexNet | 101.63 | 102.11
  build-mesa: Time To Compile | 16.529 | 16.379
  tensorflow: CPU - 1 - ResNet-50 | 9.6 | 9.66
  tensorflow: CPU - 1 - AlexNet | 10.7 | 10.87
  tensorflow: CPU - 1 - GoogLeNet | 12.73 | 12.29
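Since the overview only lists raw numbers, a quick way to gauge run-to-run drift is the relative delta of b against a per test. The snippet below is a minimal illustration in Python using a few values copied from the table above; the dictionary keys and selection are ad hoc and not part of any Phoronix tooling.

    # Minimal sketch: relative difference of run b vs run a for a few of the
    # results above (values copied from the overview table; selection is ad hoc).
    results = {
        "mariadb oltp_update_non_index, 768 threads (QPS)": (50894, 108525),
        "mariadb oltp_write_only, 512 threads (QPS)": (213913, 153471),
        "tensorflow CPU ResNet-50, batch 256 (images/sec)": (19.48, 19.43),
        "blender Barbershop CPU-Only (seconds)": (451.14, 453.43),
    }

    for name, (a, b) in results.items():
        delta = (b - a) / a * 100.0
        print(f"{name}: a={a}, b={b}, b vs a = {delta:+.1f}%")

On these figures the TensorFlow, PyTorch, and Blender results generally land within a few percent of each other, while several of the MariaDB OLTP results differ by 20% or more between the two runs.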
MariaDB mariadb-slap 11.5 - Clients: 1024 (Queries Per Second, More Is Better): a: 7, b: 7
(All MariaDB / mariadb-slap results built with (CXX) g++ options: -fPIC -pie -fstack-protector -O3 -shared -lrt -lpthread -lz -ldl -lm -lstdc++)
TensorFlow 2.16.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, More Is Better): a: 19.48, b: 19.43
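For context on what the images/sec figures represent, the following is a minimal, self-contained sketch of measuring CPU inference throughput with a stock Keras ResNet-50. It is only an illustration, not the Phoronix Test Suite TensorFlow benchmark used for the numbers above; the batch size and iteration count here are arbitrary choices.

    # Illustrative only: rough CPU inference throughput (images/sec) with a
    # stock Keras ResNet-50. Not the Phoronix Test Suite benchmark code.
    import time
    import numpy as np
    import tensorflow as tf

    batch_size = 256                    # matches the batch size of the result above
    model = tf.keras.applications.ResNet50(weights=None)   # random weights suffice for throughput
    images = np.random.rand(batch_size, 224, 224, 3).astype("float32")

    model.predict(images, verbose=0)    # warm-up / graph build
    iterations = 5
    start = time.time()
    for _ in range(iterations):
        model.predict(images, verbose=0)
    elapsed = time.time() - start
    print(f"{batch_size * iterations / elapsed:.2f} images/sec")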
MariaDB mariadb-slap 11.5 - Clients: 512 (Queries Per Second, More Is Better): a: 79, b: 68
TensorFlow 2.16.1 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, More Is Better): a: 57.87, b: 57.36
Blender 4.1 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better): a: 451.14, b: 453.43
TensorFlow 2.16.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, More Is Better): a: 19.70, b: 19.54
MariaDB 11.5 - Test: oltp_update_non_index - Threads: 768 (Queries Per Second, More Is Better): a: 50894, b: 108525
MariaDB 11.5 - Test: oltp_update_non_index - Threads: 64 (Queries Per Second, More Is Better): a: 55705, b: 40385
MariaDB 11.5 - Test: oltp_read_write - Threads: 512 (Queries Per Second, More Is Better): a: 73844, b: 89036
MariaDB 11.5 - Test: oltp_write_only - Threads: 64 (Queries Per Second, More Is Better): a: 206145, b: 163585
MariaDB 11.5 - Test: oltp_point_select - Threads: 512 (Queries Per Second, More Is Better): a: 24260, b: 24448
MariaDB 11.5 - Test: oltp_point_select - Threads: 64 (Queries Per Second, More Is Better): a: 25236, b: 24909
MariaDB 11.5 - Test: oltp_write_only - Threads: 768 (Queries Per Second, More Is Better): a: 166357, b: 177622
MariaDB 11.5 - Test: oltp_update_non_index - Threads: 512 (Queries Per Second, More Is Better): a: 117854, b: 110045
MariaDB 11.5 - Test: oltp_update_index - Threads: 768 (Queries Per Second, More Is Better): a: 103448, b: 96743
MariaDB 11.5 - Test: oltp_write_only - Threads: 512 (Queries Per Second, More Is Better): a: 213913, b: 153471
MariaDB 11.5 - Test: oltp_point_select - Threads: 768 (Queries Per Second, More Is Better): a: 23180, b: 23536
MariaDB 11.5 - Test: oltp_read_write - Threads: 64 (Queries Per Second, More Is Better): a: 125688, b: 120550
MariaDB 11.5 - Test: oltp_update_index - Threads: 64 (Queries Per Second, More Is Better): a: 68422, b: 54103
MariaDB 11.5 - Test: oltp_read_only - Threads: 64 (Queries Per Second, More Is Better): a: 17560, b: 17494
MariaDB 11.5 - Test: oltp_read_only - Threads: 768 (Queries Per Second, More Is Better): a: 16718, b: 16736
MariaDB 11.5 - Test: oltp_read_only - Threads: 512 (Queries Per Second, More Is Better): a: 16961, b: 16848
MariaDB 11.5 - Test: oltp_update_index - Threads: 512 (Queries Per Second, More Is Better): a: 119718, b: 98978
PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l (batches/sec, More Is Better): a: 6.80 (MIN: 6.7 / MAX: 6.83), b: 6.85 (MIN: 6.75 / MAX: 6.88)
PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec, More Is Better): a: 6.82 (MIN: 6.71 / MAX: 6.86), b: 6.96 (MIN: 6.84 / MAX: 6.98)
PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec, More Is Better): a: 6.93 (MIN: 6.82 / MAX: 6.99), b: 6.95 (MIN: 6.85 / MAX: 6.98)
PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l (batches/sec, More Is Better): a: 6.90 (MIN: 6.79 / MAX: 6.93), b: 6.92 (MIN: 6.82 / MAX: 6.95)
TensorFlow 2.16.1 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, More Is Better): a: 152.96, b: 152.53
TensorFlow 2.16.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, More Is Better): a: 20.17, b: 19.99
PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-152 (batches/sec, More Is Better): a: 12.32 (MIN: 12.01 / MAX: 12.41), b: 12.36 (MIN: 12.02 / MAX: 12.42)
PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec, More Is Better): a: 12.30 (MIN: 12.02 / MAX: 12.39), b: 12.26 (MIN: 11.94 / MAX: 12.34)
PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-152 (batches/sec, More Is Better): a: 12.29 (MIN: 11.98 / MAX: 12.41), b: 12.33 (MIN: 12.03 / MAX: 12.4)
PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec, More Is Better): a: 12.27 (MIN: 11.51 / MAX: 12.33), b: 12.38 (MIN: 11.95 / MAX: 12.45)
Blender 4.1 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better): a: 144.93, b: 144.76
Blender 4.1 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better): a: 122.94, b: 123.00
TensorFlow 2.16.1 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, More Is Better): a: 58.15, b: 59.03
PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec, More Is Better): a: 9.42 (MIN: 9.22 / MAX: 9.48), b: 9.44 (MIN: 9.25 / MAX: 9.49)
TensorFlow 2.16.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better): a: 18.91, b: 20.09
PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, More Is Better): a: 15.64 (MIN: 15.32 / MAX: 15.71), b: 15.63 (MIN: 15.1 / MAX: 15.7)
Blender 4.1 - Blend File: Junkshop - Compute: CPU-Only (Seconds, Fewer Is Better): a: 66.05, b: 66.69
PyTorch 2.2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (batches/sec, More Is Better): a: 31.64 (MIN: 29.82 / MAX: 31.92), b: 31.74 (MIN: 29.9 / MAX: 31.94)
PyTorch 2.2.1 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (batches/sec, More Is Better): a: 31.90 (MIN: 30.01 / MAX: 32.16), b: 31.54 (MIN: 29.77 / MAX: 31.77)
PyTorch 2.2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec, More Is Better): a: 31.71 (MIN: 30.77 / MAX: 31.92), b: 31.99 (MIN: 29.99 / MAX: 32.28)
PyTorch 2.2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec, More Is Better): a: 31.91 (MIN: 30.13 / MAX: 32.28), b: 31.89 (MIN: 29.92 / MAX: 32.2)
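As with the TensorFlow figures, the batches/sec numbers above come from the Phoronix Test Suite's PyTorch test. The snippet below is only a minimal sketch, under the assumption that torchvision is installed alongside PyTorch, of how a CPU batches/sec figure for ResNet-50 can be approximated; it is not the benchmark implementation behind these results.

    # Illustrative only: approximate CPU inference throughput (batches/sec)
    # for torchvision's ResNet-50. Not the Phoronix Test Suite benchmark code.
    import time
    import torch
    import torchvision.models as models

    batch_size = 32                        # one of the batch sizes tested above
    model = models.resnet50(weights=None).eval()
    inputs = torch.randn(batch_size, 3, 224, 224)

    with torch.no_grad():
        model(inputs)                      # warm-up
        iterations = 20
        start = time.time()
        for _ in range(iterations):
            model(inputs)
        elapsed = time.time() - start

    print(f"{iterations / elapsed:.2f} batches/sec at batch size {batch_size}")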
TensorFlow 2.16.1 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, More Is Better): a: 61.27, b: 60.80
Blender 4.1 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better): a: 56.74, b: 57.21
TensorFlow 2.16.1 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, More Is Better): a: 134.28, b: 133.67
Blender 4.1 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better): a: 46.92, b: 47.27
TensorFlow 2.16.1 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, More Is Better): a: 118.38, b: 117.25
TensorFlow 2.16.1 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better): a: 60.85, b: 60.80
PyTorch 2.2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, More Is Better): a: 40.17 (MIN: 38.51 / MAX: 40.42), b: 39.95 (MIN: 37.01 / MAX: 40.27)
TensorFlow 2.16.1 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, More Is Better): a: 101.63, b: 102.11
Timed Mesa Compilation 24.0 - Time To Compile (Seconds, Fewer Is Better): a: 16.53, b: 16.38
TensorFlow 2.16.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (images/sec, More Is Better): a: 9.60, b: 9.66
TensorFlow 2.16.1 - Device: CPU - Batch Size: 1 - Model: AlexNet (images/sec, More Is Better): a: 10.70, b: 10.87
TensorFlow 2.16.1 - Device: CPU - Batch Size: 1 - Model: GoogLeNet (images/sec, More Is Better): a: 12.73, b: 12.29
Phoronix Test Suite v10.8.5