new rn tr

AMD Ryzen Threadripper 7980X 64-Cores testing with a System76 Thelio Major (FA Z5 BIOS) and AMD Radeon RX 6700 XT 12GB on Ubuntu 24.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2410161-PTS-NEWRNTR258&sro&gru.
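The result identifier embedded in the URL above can also be handed to the Phoronix Test Suite CLI to re-run the same test selection locally and compare against these numbers (a sketch, assuming `phoronix-test-suite` is installed and the public result ID is still reachable on OpenBenchmarking.org):

```shell
# Fetch the public result file and run the same benchmarks locally
# for a side-by-side comparison (result ID taken from the URL above).
phoronix-test-suite benchmark 2410161-PTS-NEWRNTR258
```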

System configuration (identical across runs a, b, c, and d):

  Processor:         AMD Ryzen Threadripper 7980X 64-Cores @ 5.37GHz (64 Cores / 128 Threads)
  Motherboard:       System76 Thelio Major (FA Z5 BIOS)
  Chipset:           AMD Device 14a4
  Memory:            4 x 32GB DDR5-4800MT/s Micron MTC20F1045S1RC48BA2
  Disk:              1000GB CT1000T700SSD5
  Graphics:          AMD Radeon RX 6700 XT 12GB
  Audio:             AMD Device 14cc
  Monitor:           DELL P2415Q
  Network:           Aquantia AQC113C NBase-T/IEEE + Realtek RTL8125 2.5GbE + Intel Wi-Fi 6E
  OS:                Ubuntu 24.10
  Kernel:            6.11.0-8-generic (x86_64)
  Desktop:           GNOME Shell 47.0
  Display Server:    X Server + Wayland
  OpenGL:            4.6 Mesa 24.2.3-1ubuntu1 (LLVM 19.1.0 DRM 3.58)
  Compiler:          GCC 14.2.0
  File-System:       ext4
  Screen Resolution: 1920x1200

Kernel Details:
  Transparent Huge Pages: madvise

Compiler Details:
  --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2,rust --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details:
  Scaling Governor: amd-pstate-epp powersave (Boost: Enabled, EPP: balance_performance)
  CPU Microcode: 0xa108105

Security Details:
  gather_data_sampling: Not affected
  itlb_multihit: Not affected
  l1tf: Not affected
  mds: Not affected
  meltdown: Not affected
  mmio_stale_data: Not affected
  reg_file_data_sampling: Not affected
  retbleed: Not affected
  spec_rstack_overflow: Mitigation of Safe RET
  spec_store_bypass: Mitigation of SSB disabled via prctl
  spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected
  srbds: Not affected
  tsx_async_abort: Not affected
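The per-vulnerability status lines in the Security Details come from the kernel's standard sysfs interface, so the same report can be reproduced on any recent Linux system without PTS (a minimal sketch; the `/sys/devices/system/cpu/vulnerabilities` directory is assumed present, as it is on kernels since 4.14):

```shell
# Print each known CPU vulnerability and the kernel's reported
# mitigation status, mirroring the "Security Details" lines above.
for f in /sys/devices/system/cpu/vulnerabilities/*; do
    printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
done
```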

Result summary (all values: lower is better):

  Test                                          a          b          c          d
  LiteRT 2024-10-15 (Microseconds):
    DeepLab V3                              13513.8    12117.3    11896.5    12363.4
    SqueezeNet                              3266.42    3303.96    3264.06    3282.50
    Inception V4                            26612.8    26574.6    26433.2    26390.1
    NASNet Mobile                            173455     156703     142541     151275
    Mobilenet Float                         2102.99    2141.96    2112.35    2132.45
    Mobilenet Quant                         15789.8    16068.8    16501.2    15994.0
    Inception ResNet V2                     34923.9    33651.9    34821.7    34742.1
    Quantized COCO SSD MobileNet v1         7924.86    7935.03    7672.13    7531.43
  oneDNN 3.6 (ms):
    IP Shapes 1D - CPU                     0.582029   0.573270   0.577303   0.572739
    IP Shapes 3D - CPU                     0.335141   0.337890   0.333022   0.337384
    Convolution Batch Shapes Auto - CPU    0.557676   0.554351   0.560390   0.554405
    Deconvolution Batch shapes_1d - CPU     7.46041    7.42610    7.50405    7.48971
    Deconvolution Batch shapes_3d - CPU     1.02976    1.02672    1.02743    1.02498
    RNN Training - CPU                      546.398    548.732    548.888    550.540
    RNN Inference - CPU                     326.236    326.788    325.448    325.778
  XNNPACK b7b048 (us):
    FP32MobileNetV1                            2119       2156       2156       2159
    FP32MobileNetV2                            4032       4053       4055       4007
    FP32MobileNetV3Large                       5962       6005       6047       5932
    FP32MobileNetV3Small                       4238       4271       4269       4285
    FP16MobileNetV1                            2031       2052       2070       2052
    FP16MobileNetV2                            3743       3804       3825       3762
    FP16MobileNetV3Large                       5624       5748       5827       5806
    FP16MobileNetV3Small                       4164       4301       4255       4294
    QS8MobileNetV2                             4031       3988       4025       3985

LiteRT

Model: DeepLab V3

LiteRT 2024-10-15 (Microseconds, fewer is better):
  a: 13513.8   b: 12117.3   c: 11896.5   d: 12363.4
  SE +/- 244.20, N = 15; SE +/- 118.42, N = 3; SE +/- 50.22, N = 3

LiteRT

Model: SqueezeNet

LiteRT 2024-10-15 (Microseconds, fewer is better):
  a: 3266.42   b: 3303.96   c: 3264.06   d: 3282.50
  SE +/- 24.67, N = 3; SE +/- 34.80, N = 4; SE +/- 2.50, N = 3

LiteRT

Model: Inception V4

LiteRT 2024-10-15 (Microseconds, fewer is better):
  a: 26612.8   b: 26574.6   c: 26433.2   d: 26390.1
  SE +/- 227.17, N = 14; SE +/- 209.28, N = 3; SE +/- 113.50, N = 3

LiteRT

Model: NASNet Mobile

LiteRT 2024-10-15 (Microseconds, fewer is better):
  a: 173455   b: 156703   c: 142541   d: 151275
  SE +/- 1880.66, N = 3; SE +/- 1575.26, N = 3; SE +/- 2554.81, N = 12

LiteRT

Model: Mobilenet Float

LiteRT 2024-10-15 (Microseconds, fewer is better):
  a: 2102.99   b: 2141.96   c: 2112.35   d: 2132.45
  SE +/- 20.11, N = 3; SE +/- 12.05, N = 3; SE +/- 14.67, N = 3

LiteRT

Model: Mobilenet Quant

LiteRT 2024-10-15 (Microseconds, fewer is better):
  a: 15789.8   b: 16068.8   c: 16501.2   d: 15994.0
  SE +/- 321.23, N = 12; SE +/- 268.08, N = 13; SE +/- 187.77, N = 3

LiteRT

Model: Inception ResNet V2

LiteRT 2024-10-15 (Microseconds, fewer is better):
  a: 34923.9   b: 33651.9   c: 34821.7   d: 34742.1
  SE +/- 242.28, N = 3; SE +/- 330.78, N = 15; SE +/- 375.95, N = 3

LiteRT

Model: Quantized COCO SSD MobileNet v1

LiteRT 2024-10-15 (Microseconds, fewer is better):
  a: 7924.86   b: 7935.03   c: 7672.13   d: 7531.43
  SE +/- 199.12, N = 12; SE +/- 83.25, N = 15; SE +/- 184.40, N = 12

oneDNN

Harness: IP Shapes 1D - Engine: CPU

oneDNN 3.6 (ms, fewer is better):
  a: 0.582029   b: 0.573270   c: 0.577303   d: 0.572739
  MIN: 0.55 (a), 0.54 (b), 0.54 (c), 0.54 (d)
  SE +/- 0.003697, N = 3; SE +/- 0.000734, N = 3; SE +/- 0.004087, N = 3
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: IP Shapes 3D - Engine: CPU

oneDNN 3.6 (ms, fewer is better):
  a: 0.335141   b: 0.337890   c: 0.333022   d: 0.337384
  MIN: 0.31 (a), 0.31 (b), 0.31 (c), 0.31 (d)
  SE +/- 0.001917, N = 3; SE +/- 0.002114, N = 3; SE +/- 0.001618, N = 3
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Engine: CPU

oneDNN 3.6 (ms, fewer is better):
  a: 0.557676   b: 0.554351   c: 0.560390   d: 0.554405
  MIN: 0.52 (a), 0.51 (b), 0.52 (c), 0.52 (d)
  SE +/- 0.004992, N = 3; SE +/- 0.000955, N = 3; SE +/- 0.003803, N = 3
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Engine: CPU

oneDNN 3.6 (ms, fewer is better):
  a: 7.46041   b: 7.42610   c: 7.50405   d: 7.48971
  MIN: 6.52 (a), 6.55 (b), 4.63 (c), 4.6 (d)
  SE +/- 0.00222, N = 3; SE +/- 0.04488, N = 3; SE +/- 0.07017, N = 3
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Engine: CPU

oneDNN 3.6 (ms, fewer is better):
  a: 1.02976   b: 1.02672   c: 1.02743   d: 1.02498
  MIN: 0.96 (a), 0.97 (b), 0.96 (c), 0.96 (d)
  SE +/- 0.00042, N = 3; SE +/- 0.00238, N = 3; SE +/- 0.00347, N = 3
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Engine: CPU

oneDNN 3.6 (ms, fewer is better):
  a: 546.40   b: 548.73   c: 548.89   d: 550.54
  MIN: 540.95 (a), 542.36 (b), 542.07 (c), 542.26 (d)
  SE +/- 0.35, N = 3; SE +/- 0.64, N = 3; SE +/- 1.00, N = 3
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Engine: CPU

oneDNN 3.6 (ms, fewer is better):
  a: 326.24   b: 326.79   c: 325.45   d: 325.78
  MIN: 322.1 (a), 319.81 (b), 321.39 (c), 320.56 (d)
  SE +/- 1.83, N = 3; SE +/- 0.28, N = 3; SE +/- 0.91, N = 3
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

XNNPACK

Model: FP32MobileNetV1

XNNPACK b7b048 (us, fewer is better):
  a: 2119   b: 2156   c: 2156   d: 2159
  SE +/- 28.29, N = 3; SE +/- 15.37, N = 3; SE +/- 9.40, N = 3
  (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP32MobileNetV2

XNNPACK b7b048 (us, fewer is better):
  a: 4032   b: 4053   c: 4055   d: 4007
  SE +/- 40.70, N = 3; SE +/- 22.36, N = 3; SE +/- 4.98, N = 3
  (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP32MobileNetV3Large

XNNPACK b7b048 (us, fewer is better):
  a: 5962   b: 6005   c: 6047   d: 5932
  SE +/- 43.59, N = 3; SE +/- 3.71, N = 3; SE +/- 38.74, N = 3
  (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP32MobileNetV3Small

XNNPACK b7b048 (us, fewer is better):
  a: 4238   b: 4271   c: 4269   d: 4285
  SE +/- 28.17, N = 3; SE +/- 20.51, N = 3; SE +/- 34.96, N = 3
  (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP16MobileNetV1

XNNPACK b7b048 (us, fewer is better):
  a: 2031   b: 2052   c: 2070   d: 2052
  SE +/- 11.84, N = 3; SE +/- 19.34, N = 3; SE +/- 17.89, N = 3
  (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP16MobileNetV2

XNNPACK b7b048 (us, fewer is better):
  a: 3743   b: 3804   c: 3825   d: 3762
  SE +/- 37.65, N = 3; SE +/- 48.96, N = 3; SE +/- 23.68, N = 3
  (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP16MobileNetV3Large

XNNPACK b7b048 (us, fewer is better):
  a: 5624   b: 5748   c: 5827   d: 5806
  SE +/- 67.87, N = 3; SE +/- 63.97, N = 3; SE +/- 39.03, N = 3
  (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: FP16MobileNetV3Small

XNNPACK b7b048 (us, fewer is better):
  a: 4164   b: 4301   c: 4255   d: 4294
  SE +/- 51.08, N = 3; SE +/- 19.50, N = 3; SE +/- 72.75, N = 3
  (CXX) g++ options: -O3 -lrt -lm

XNNPACK

Model: QS8MobileNetV2

XNNPACK b7b048 (us, fewer is better):
  a: 4031   b: 3988   c: 4025   d: 3985
  SE +/- 32.71, N = 3; SE +/- 32.13, N = 3; SE +/- 27.10, N = 3
  (CXX) g++ options: -O3 -lrt -lm


Phoronix Test Suite v10.8.5