Intel Core i7-10510U testing with a LENOVO 20U9CTO1WW (N2WET24W 1.14 BIOS) and Intel UHD 3GB on Fedora 33 via the Phoronix Test Suite.
XPS 13 Tiger Lake Ubuntu 20.04 Processor: Intel Core i5-1135G7 @ 4.20GHz (4 Cores / 8 Threads), Motherboard: Dell 0THX8P (1.1.1 BIOS), Chipset: Intel Device a0ef, Memory: 16GB, Disk: Micron 2300 NVMe 512GB, Graphics: Intel Xe 3GB (1300MHz), Audio: Realtek ALC289, Network: Intel Device a0f0
OS: Ubuntu 20.04, Kernel: 5.6.0-1036-oem (x86_64), Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Display Driver: modesetting 1.20.8, OpenGL: 4.6 Mesa 20.0.8, Vulkan: 1.2.131, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1200
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x60 - Thermald 1.9.1
Python Notes: Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
ThinkPad X1 Fedora Comet Lake Processor: Intel Core i7-10510U @ 4.90GHz (4 Cores / 8 Threads), Motherboard: LENOVO 20U9CTO1WW (N2WET24W 1.14 BIOS), Chipset: Intel Comet Lake PCH-LP, Memory: 2 x 8 GB LPDDR3-2133MT/s Samsung, Disk: 256GB Western Digital PC SN730 SDBQNTY-256G-1001, Graphics: Intel UHD 3GB (1150MHz), Audio: Realtek ALC285, Network: Intel + Intel Comet Lake PCH-LP CNVi WiFi
OS: Fedora 33, Kernel: 5.9.16-200.fc33.x86_64 (x86_64), Desktop: KDE Plasma 5.20.4, Display Server: X Server 1.20.10, Display Driver: modesetting 1.20.10, OpenGL: 4.6 Mesa 20.2.6, Compiler: GCC 10.2.1 20201125 + Clang 11.0.0, File-System: btrfs, Screen Resolution: 2560x1440
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-isl --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
Processor Notes: Scaling Governor: intel_pstate powersave
Python Notes: Python 3.9.1
Security Notes: SELinux + itlb_multihit: KVM: Mitigation of VMX unsupported + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Mitigation of TSX disabled + tsx_async_abort: Not affected
Coder Radio XPS 13 ML Ubuntu Benchmark
XPS 13 Tiger Lake Ubuntu 20.04 vs. ThinkPad X1 Fedora Comet Lake Comparison (Phoronix Test Suite): per-test percentage-difference chart, ranging from a few percent up to +643.6% (oneDNN Deconvolution Batch shapes_1d - u8s8f32 - CPU) and +456.7% (oneDNN Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU). Individual results follow below.
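The percentage deltas in the comparison chart appear to be the slower system's result relative to the faster one's; for a "fewer is better" metric that is the slower time divided by the faster time, minus one. A minimal sketch, using the oneDNN Recurrent Neural Network Inference f32 values from this report (the formula itself is an assumption about how OpenBenchmarking.org derives the chart):

```python
def pct_slower(slower: float, faster: float) -> float:
    """Percentage by which `slower` exceeds `faster` (fewer-is-better metrics)."""
    return (slower / faster - 1.0) * 100.0

thinkpad_ms = 8215.08  # ThinkPad X1 Fedora Comet Lake, R.N.N. Inference f32
xps13_ms = 5736.43     # XPS 13 Tiger Lake Ubuntu 20.04, same test
print(f"{pct_slower(thinkpad_ms, xps13_ms):.1f}%")  # matches the 43.2% shown in the chart
```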
Full results table: each test identifier followed by the raw values for XPS 13 Tiger Lake Ubuntu 20.04 and ThinkPad X1 Fedora Comet Lake (flattened in this export); the individual results are reproduced per test below.
Numenta Anomaly Benchmark
The Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
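Each result below carries an "SE +/- x, N = y" figure. A minimal sketch of the conventional estimator (sample standard deviation of the N runs divided by the square root of N; the exact estimator used by the test suite is an assumption here, and the run times are hypothetical):

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample stdev / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

runs = [1136.9, 1139.2, 1141.6]  # hypothetical N = 3 runs, in seconds
print(f"mean={statistics.mean(runs):.2f} SE=+/-{standard_error(runs):.2f} N={len(runs)}")
```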
Numenta Anomaly Benchmark 1.1, Detector: EXPoSE (Seconds, fewer is better):
  XPS 13 Tiger Lake Ubuntu 20.04: 1139.24 (SE +/- 2.32, N = 3)
PlaidML, FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU (FPS, more is better):
  ThinkPad X1 Fedora Comet Lake: 2.57 (SE +/- 0.01, N = 3)
  XPS 13 Tiger Lake Ubuntu 20.04: 3.27 (SE +/- 0.01, N = 3)
NCNN
NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
NCNN 20201218, Target: CPUv2-yolov3v2-yolov3 - Model: mobilenetv2-yolov3 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 45.37 (SE +/- 0.96, N = 12), MIN: 36.97 / MAX: 552.34
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
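The "total perf time" idea behind a benchdnn-style harness can be sketched as: run the kernel under test repeatedly and report the total and best (MIN) wall time in milliseconds. This is a stand-in illustration with a pure-Python matrix multiply, not oneDNN itself:

```python
import time

def matmul(a, b):
    """Naive dense matrix multiply, standing in for the kernel under test."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def bench(fn, reps=5):
    """Time `fn` for `reps` repetitions; return (total_ms, min_ms)."""
    times = []
    for _ in range(reps):
        t0 = time.perf_counter()
        fn()
        times.append((time.perf_counter() - t0) * 1000.0)
    return sum(times), min(times)

a = [[float(i + j) for j in range(32)] for i in range(32)]
total_ms, min_ms = bench(lambda: matmul(a, a))
print(f"total: {total_ms:.3f} ms  MIN: {min_ms:.3f} ms")
```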
oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 8215.08 (SE +/- 175.62, N = 12), -O2, MIN: 6630.85
  XPS 13 Tiger Lake Ubuntu 20.04: 5736.43 (SE +/- 1216.13, N = 12), MIN: 4504.31
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
Mobile Neural Network
MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 2020-09-17, Model: inception-v3 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 97.71 (SE +/- 4.98, N = 3), MIN: 80.89 / MAX: 306.21
  XPS 13 Tiger Lake Ubuntu 20.04: 68.52 (SE +/- 0.25, N = 11), MIN: 66.62 / MAX: 223.45
Mobile Neural Network 2020-09-17, Model: mobilenet-v1-1.0 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 13.150 (SE +/- 1.044, N = 3), MIN: 10.33 / MAX: 65.74
  XPS 13 Tiger Lake Ubuntu 20.04: 8.320 (SE +/- 0.171, N = 11), MIN: 6.48 / MAX: 30.71
Mobile Neural Network 2020-09-17, Model: MobileNetV2_224 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 9.620 (SE +/- 1.171, N = 3), MIN: 5.03 / MAX: 90.69
  XPS 13 Tiger Lake Ubuntu 20.04: 6.238 (SE +/- 0.012, N = 11), MIN: 6.15 / MAX: 28.03
Mobile Neural Network 2020-09-17, Model: resnet-v2-50 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 88.35 (SE +/- 1.68, N = 3), MIN: 67.94 / MAX: 196.35
  XPS 13 Tiger Lake Ubuntu 20.04: 54.69 (SE +/- 0.17, N = 11), MIN: 53.32 / MAX: 132.87
Mobile Neural Network 2020-09-17, Model: SqueezeNetV1.0 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 18.04 (SE +/- 0.10, N = 3), MIN: 16.55 / MAX: 73.33
  XPS 13 Tiger Lake Ubuntu 20.04: 11.26 (SE +/- 0.24, N = 11), MIN: 8.69 / MAX: 35.37
1. (CXX) g++ options (all Mobile Neural Network results): -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
oneDNN
oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 15837.41 (SE +/- 318.55, N = 12), -O2, MIN: 13562.8
  XPS 13 Tiger Lake Ubuntu 20.04: 8859.05 (SE +/- 1.56, N = 3), MIN: 8815.8
oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 15828.66 (SE +/- 292.22, N = 12), -O2, MIN: 13821.5
  XPS 13 Tiger Lake Ubuntu 20.04: 8875.33 (SE +/- 18.79, N = 3), MIN: 8833.23
oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 15729.96 (SE +/- 284.25, N = 12), -O2, MIN: 13106.7
  XPS 13 Tiger Lake Ubuntu 20.04: 8862.65 (SE +/- 5.47, N = 3), MIN: 8818.95
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
Numenta Anomaly Benchmark
Numenta Anomaly Benchmark 1.1, Detector: Earthgecko Skyline (Seconds, fewer is better):
  XPS 13 Tiger Lake Ubuntu 20.04: 333.64 (SE +/- 0.78, N = 3)
NCNN
NCNN 20201218, Target: CPU - Model: regnety_400m (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 22.34 (SE +/- 0.62, N = 12), -O2, MIN: 18.96 / MAX: 70.8
  XPS 13 Tiger Lake Ubuntu 20.04: 21.11 (SE +/- 0.17, N = 3), -O3, MIN: 18.89 / MAX: 41.4
NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 45.61 (SE +/- 1.00, N = 12), -O2, MIN: 37.69 / MAX: 158.08
  XPS 13 Tiger Lake Ubuntu 20.04: 39.61 (SE +/- 0.23, N = 3), -O3, MIN: 39.09 / MAX: 57.3
NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 57.10 (SE +/- 1.17, N = 12), -O2, MIN: 48.72 / MAX: 192.16
  XPS 13 Tiger Lake Ubuntu 20.04: 44.65 (SE +/- 0.55, N = 3), -O3, MIN: 43.73 / MAX: 74
NCNN 20201218, Target: CPU - Model: resnet50 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 67.20 (SE +/- 2.13, N = 12), -O2, MIN: 48.46 / MAX: 167.35
  XPS 13 Tiger Lake Ubuntu 20.04: 51.01 (SE +/- 0.12, N = 3), -O3, MIN: 50.55 / MAX: 116.37
NCNN 20201218, Target: CPU - Model: alexnet (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 26.55 (SE +/- 0.44, N = 12), -O2, MIN: 22.72 / MAX: 58.37
  XPS 13 Tiger Lake Ubuntu 20.04: 19.06 (SE +/- 0.02, N = 3), -O3, MIN: 18.7 / MAX: 27.76
NCNN 20201218, Target: CPU - Model: resnet18 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 30.23 (SE +/- 0.80, N = 12), -O2, MIN: 24.75 / MAX: 110.11
  XPS 13 Tiger Lake Ubuntu 20.04: 22.07 (SE +/- 0.02, N = 3), -O3, MIN: 21.29 / MAX: 25.98
NCNN 20201218, Target: CPU - Model: vgg16 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 108.71 (SE +/- 1.36, N = 12), -O2, MIN: 101.54 / MAX: 1158.67
  XPS 13 Tiger Lake Ubuntu 20.04: 68.50 (SE +/- 0.25, N = 3), -O3, MIN: 67.14 / MAX: 86.96
NCNN 20201218, Target: CPU - Model: googlenet (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 31.39 (SE +/- 0.89, N = 12), -O2, MIN: 26.34 / MAX: 90.63
  XPS 13 Tiger Lake Ubuntu 20.04: 25.01 (SE +/- 0.11, N = 3), -O3, MIN: 23.82 / MAX: 51.99
NCNN 20201218, Target: CPU - Model: blazeface (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 3.80 (SE +/- 0.11, N = 12), -O2, MIN: 3.13 / MAX: 13.74
  XPS 13 Tiger Lake Ubuntu 20.04: 2.85 (SE +/- 0.02, N = 3), -O3, MIN: 2.56 / MAX: 3.12
NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 15.22 (SE +/- 0.51, N = 12), -O2, MIN: 10.96 / MAX: 26.84
  XPS 13 Tiger Lake Ubuntu 20.04: 12.53 (SE +/- 0.04, N = 3), -O3, MIN: 12.32 / MAX: 31.52
NCNN 20201218, Target: CPU - Model: mnasnet (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 9.76 (SE +/- 0.39, N = 12), -O2, MIN: 6.72 / MAX: 21.36
  XPS 13 Tiger Lake Ubuntu 20.04: 8.10 (SE +/- 0.50, N = 3), -O3, MIN: 7 / MAX: 12.21
NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 14.58 (SE +/- 0.67, N = 12), -O2, MIN: 8.91 / MAX: 27.37
  XPS 13 Tiger Lake Ubuntu 20.04: 9.99 (SE +/- 0.59, N = 3), -O3, MIN: 8.78 / MAX: 13.35
NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 9.10 (SE +/- 0.34, N = 12), -O2, MIN: 6.6 / MAX: 12.73
  XPS 13 Tiger Lake Ubuntu 20.04: 6.71 (SE +/- 0.02, N = 3), -O3, MIN: 6.52 / MAX: 10.19
NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 10.67 (SE +/- 0.58, N = 12), -O2, MIN: 7.99 / MAX: 44.88
  XPS 13 Tiger Lake Ubuntu 20.04: 7.78 (SE +/- 0.05, N = 3), -O3, MIN: 7.55 / MAX: 25.73
NCNN 20201218, Target: CPU - Model: mobilenet (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 45.37 (SE +/- 0.96, N = 12), -O2, MIN: 36.97 / MAX: 552.34
  XPS 13 Tiger Lake Ubuntu 20.04: 35.03 (SE +/- 0.56, N = 3), -O3, MIN: 34.13 / MAX: 54.59
1. (CXX) g++ options (all NCNN results above): -rdynamic -lgomp -lpthread
oneDNN
oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 8004.41 (SE +/- 200.39, N = 15), -O2, MIN: 6681.8
  XPS 13 Tiger Lake Ubuntu 20.04: 4528.11 (SE +/- 10.91, N = 3), MIN: 4494.22
oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 7969.03 (SE +/- 202.13, N = 15), -O2, MIN: 6667.75
  XPS 13 Tiger Lake Ubuntu 20.04: 4516.60 (SE +/- 5.17, N = 3), MIN: 4500.62
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
TensorFlow Lite 2020-08-23, Model: NASNet Mobile (Microseconds, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 597569 (SE +/- 10287.96, N = 15)
  XPS 13 Tiger Lake Ubuntu 20.04: 455525 (SE +/- 4542.22, N = 3)
TensorFlow Lite 2020-08-23, Model: Mobilenet Quant (Microseconds, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 535897 (SE +/- 10547.34, N = 15)
  XPS 13 Tiger Lake Ubuntu 20.04: 419506 (SE +/- 4741.44, N = 3)
TensorFlow Lite 2020-08-23, Model: Mobilenet Float (Microseconds, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 525312 (SE +/- 10164.17, N = 15)
  XPS 13 Tiger Lake Ubuntu 20.04: 424309 (SE +/- 2876.33, N = 3)
TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 10177933 (SE +/- 63902.46, N = 3)
  XPS 13 Tiger Lake Ubuntu 20.04: 8329360 (SE +/- 33855.35, N = 3)
NCNN
NCNN 20201218, Target: Vulkan GPUv2-yolov3v2-yolov3 - Model: mobilenetv2-yolov3 (ms, fewer is better):
  ThinkPad X1 Fedora Comet Lake: 46.41 (SE +/- 0.13, N = 3), MIN: 44.19 / MAX: 58.77
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: Vulkan GPU - Model: regnety_400m ThinkPad X1 Fedora Comet Lake XPS 13 Tiger Lake Ubuntu 20.04 5 10 15 20 25 SE +/- 2.18, N = 3 SE +/- 0.90, N = 3 20.14 21.01 -O2 - MIN: 15.81 / MAX: 33.22 -O3 - MIN: 18.85 / MAX: 41.97 1. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: Vulkan GPU - Model: squeezenet_ssd ThinkPad X1 Fedora Comet Lake XPS 13 Tiger Lake Ubuntu 20.04 10 20 30 40 50 SE +/- 2.49, N = 3 SE +/- 0.23, N = 3 44.71 39.53 -O2 - MIN: 38.9 / MAX: 62.29 -O3 - MIN: 39 / MAX: 49.44 1. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: Vulkan GPU - Model: yolov4-tiny ThinkPad X1 Fedora Comet Lake XPS 13 Tiger Lake Ubuntu 20.04 13 26 39 52 65 SE +/- 1.59, N = 3 SE +/- 0.57, N = 3 58.16 44.68 -O2 - MIN: 48.75 / MAX: 86.05 -O3 - MIN: 43.68 / MAX: 64.97 1. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: Vulkan GPU - Model: resnet50 ThinkPad X1 Fedora Comet Lake XPS 13 Tiger Lake Ubuntu 20.04 15 30 45 60 75 SE +/- 5.84, N = 3 SE +/- 0.13, N = 3 65.59 51.13 -O2 - MIN: 48.53 / MAX: 137.66 -O3 - MIN: 50.6 / MAX: 85.33 1. (CXX) g++ options: -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: Vulkan GPU - Model: alexnet ThinkPad X1 Fedora Comet Lake XPS 13 Tiger Lake Ubuntu 20.04 6 12 18 24 30 SE +/- 0.92, N = 3 SE +/- 0.20, N = 3 25.75 19.23 -O2 - MIN: 23.88 / MAX: 46.02 -O3 - MIN: 18.73 / MAX: 22.46 1. (CXX) g++ options: -rdynamic -lgomp -lpthread
NCNN 20201218 (ms, fewer is better)
Target: Vulkan GPU - Model: resnet18
  ThinkPad X1 Fedora Comet Lake:  28.33 (SE +/- 1.78, N = 3; -O2; MIN: 24.76 / MAX: 38.34)
  XPS 13 Tiger Lake Ubuntu 20.04: 22.25 (SE +/- 0.17, N = 3; -O3; MIN: 21.22 / MAX: 66.35)
Target: Vulkan GPU - Model: vgg16
  ThinkPad X1 Fedora Comet Lake:  107.70 (SE +/- 1.75, N = 3; -O2; MIN: 103.24 / MAX: 139.5)
  XPS 13 Tiger Lake Ubuntu 20.04: 69.16 (SE +/- 0.54, N = 3; -O3; MIN: 67.22 / MAX: 89.29)
Target: Vulkan GPU - Model: googlenet
  ThinkPad X1 Fedora Comet Lake:  28.85 (SE +/- 2.51, N = 3; -O2; MIN: 22.37 / MAX: 44.42)
  XPS 13 Tiger Lake Ubuntu 20.04: 24.58 (SE +/- 0.60, N = 3; -O3; MIN: 20.79 / MAX: 125.42)
Target: Vulkan GPU - Model: blazeface
  ThinkPad X1 Fedora Comet Lake:  3.24 (SE +/- 0.52, N = 3; -O2; MIN: 2.22 / MAX: 5.15)
  XPS 13 Tiger Lake Ubuntu 20.04: 2.80 (SE +/- 0.06, N = 3; -O3; MIN: 2.58 / MAX: 5.23)
Target: Vulkan GPU - Model: efficientnet-b0
  ThinkPad X1 Fedora Comet Lake:  13.74 (SE +/- 1.57, N = 3; -O2; MIN: 10.93 / MAX: 18.12)
  XPS 13 Tiger Lake Ubuntu 20.04: 11.83 (SE +/- 0.66, N = 3; -O3; MIN: 10.35 / MAX: 14.92)
Target: Vulkan GPU - Model: mnasnet
  ThinkPad X1 Fedora Comet Lake:  8.77 (SE +/- 1.15, N = 3; -O2; MIN: 6.73 / MAX: 11.94)
  XPS 13 Tiger Lake Ubuntu 20.04: 8.13 (SE +/- 0.54, N = 3; -O3; MIN: 6.99 / MAX: 27.68)
Target: Vulkan GPU - Model: shufflenet-v2
  ThinkPad X1 Fedora Comet Lake:  12.80 (SE +/- 2.09, N = 3; -O2; MIN: 8.9 / MAX: 18.09)
  XPS 13 Tiger Lake Ubuntu 20.04: 10.30 (SE +/- 0.67, N = 3; -O3; MIN: 8.76 / MAX: 26.22)
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
  ThinkPad X1 Fedora Comet Lake:  8.28 (SE +/- 0.97, N = 3; -O2; MIN: 6.56 / MAX: 11.29)
  XPS 13 Tiger Lake Ubuntu 20.04: 6.70 (SE +/- 0.01, N = 3; -O3; MIN: 6.52 / MAX: 9.63)
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
  ThinkPad X1 Fedora Comet Lake:  9.64 (SE +/- 1.35, N = 3; -O2; MIN: 7.97 / MAX: 19.4)
  XPS 13 Tiger Lake Ubuntu 20.04: 7.71 (SE +/- 0.03, N = 3; -O3; MIN: 7.52 / MAX: 10.75)
Target: Vulkan GPU - Model: mobilenet
  ThinkPad X1 Fedora Comet Lake:  46.41 (SE +/- 0.13, N = 3; -O2; MIN: 44.19 / MAX: 58.77)
  XPS 13 Tiger Lake Ubuntu 20.04: 35.11 (SE +/- 0.48, N = 3; -O3; MIN: 34.07 / MAX: 62.97)
1. (CXX) g++ options: -rdynamic -lgomp -lpthread (applies to all NCNN results above)
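Each result above is a mean over N runs reported with its standard error, i.e. the sample standard deviation divided by the square root of N. A minimal sketch of how such a figure is derived (the run times below are illustrative, not the actual measurements):

```python
import math

def mean_and_se(samples):
    """Return (mean, standard error) for a list of run times."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance with Bessel's correction (n - 1 denominator).
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var / n)

# Three hypothetical benchmark runs, in milliseconds.
runs = [10.0, 10.2, 10.4]
m, se = mean_and_se(runs)
print(f"{m:.2f} ms, SE +/- {se:.2f}, N = {len(runs)}")
```

A large SE relative to the mean (as in several of the ThinkPad NCNN results) indicates noisy runs, often from thermal throttling on thin laptops.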
Numenta Anomaly Benchmark
The Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating anomaly-detection algorithms in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
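One of the simpler detectors timed below is the "Windowed Gaussian" approach: fit a normal distribution to a sliding window of recent values and score each new point by how far into the tail it falls. The sketch below is an illustration of that idea, not NAB's actual implementation:

```python
import math
from collections import deque

def windowed_gaussian_scores(series, window=64):
    """Per-point anomaly score in [0, 1]: probability mass of the fitted
    Gaussian closer to the mean than the new point. Near 1 = anomalous.
    Illustrative sketch only, not NAB's exact detector code."""
    history = deque(maxlen=window)
    scores = []
    for x in series:
        if len(history) < 2:
            scores.append(0.0)  # not enough data to fit a Gaussian yet
        else:
            mu = sum(history) / len(history)
            var = sum((v - mu) ** 2 for v in history) / (len(history) - 1)
            sigma = math.sqrt(var) or 1e-9  # guard against zero variance
            z = abs(x - mu) / sigma
            # P(|Z| <= z) for a standard normal, via the error function.
            scores.append(math.erf(z / math.sqrt(2)))
        history.append(x)
    return scores

# A flat series with one spike: the spike scores near 1.
scores = windowed_gaussian_scores([1.0] * 50 + [10.0])
```

The timing results depend on how expensive each detector's per-point update is; the windowed Gaussian is among the cheapest, which is consistent with its low runtime relative to Bayesian Changepoint below.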
Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds, fewer is better)
  XPS 13 Tiger Lake Ubuntu 20.04: 81.06 (SE +/- 1.06, N = 4)
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  ThinkPad X1 Fedora Comet Lake:  20.71 (SE +/- 0.74, N = 15; -O2; MIN: 15.71)
  XPS 13 Tiger Lake Ubuntu 20.04: 14.63 (SE +/- 0.12, N = 13; MIN: 13.05)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
Numenta Anomaly Benchmark
Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy (Seconds, fewer is better)
  XPS 13 Tiger Lake Ubuntu 20.04: 48.61 (SE +/- 0.59, N = 6)
oneDNN
oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  ThinkPad X1 Fedora Comet Lake:  21.88 (SE +/- 0.09223, N = 3; -O2; MIN: 19.11)
  XPS 13 Tiger Lake Ubuntu 20.04: 2.94262 (SE +/- 0.02862, N = 12)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
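The "u8s8f32" harness name encodes the data types involved: unsigned 8-bit activations, signed 8-bit weights, and 32-bit float output. Quantizing to int8 lets the CPU use fast integer SIMD paths, which is why the u8s8f32 results can be so much faster than pure f32. A toy sketch of the convention (an illustration of int8 quantization, not oneDNN's API):

```python
def u8s8f32_dot(activations, weights, a_scale, w_scale):
    """Dot product computed in int8, dequantized to f32 at the end.
    Toy illustration of the u8s8f32 naming convention, not oneDNN code."""
    # u8: unsigned 8-bit activations, range [0, 255].
    a_q = [max(0, min(255, round(a / a_scale))) for a in activations]
    # s8: signed 8-bit weights, range [-128, 127].
    w_q = [max(-128, min(127, round(w / w_scale))) for w in weights]
    # Integer accumulation (would be s32 in hardware).
    acc = sum(a * w for a, w in zip(a_q, w_q))
    # f32: dequantize the accumulator back to float.
    return acc * a_scale * w_scale

# With scales that divide the inputs exactly, this matches the f32 result.
f32_ref = sum(a * w for a, w in zip([0.5, 1.0, 1.5], [0.25, -0.5, 0.75]))
q = u8s8f32_dot([0.5, 1.0, 1.5], [0.25, -0.5, 0.75],
                a_scale=1 / 64, w_scale=1 / 128)
```

In general the quantized result only approximates f32 to within rounding error of the chosen scales; the example scales above are picked so the match is exact.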
oneDNN 2.0 (ms, fewer is better)
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
  XPS 13 Tiger Lake Ubuntu 20.04: 11.81 (SE +/- 0.12, N = 12; MIN: 10.58)
Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
  ThinkPad X1 Fedora Comet Lake:  15.77 (SE +/- 0.62, N = 15; -O2; MIN: 9.11)
  XPS 13 Tiger Lake Ubuntu 20.04: 10.28 (SE +/- 0.06, N = 3; MIN: 9.23)
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
  XPS 13 Tiger Lake Ubuntu 20.04: 57.05 (SE +/- 0.64, N = 6; MIN: 54.12)
Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
  ThinkPad X1 Fedora Comet Lake:  5.88 (SE +/- 0.02743, N = 3; -O2; MIN: 5.15)
  XPS 13 Tiger Lake Ubuntu 20.04: 2.43051 (SE +/- 0.02334, N = 14; MIN: 1.9)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread (applies to all oneDNN results above)
oneDNN
oneDNN 2.0 (ms, fewer is better)
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
  ThinkPad X1 Fedora Comet Lake:  6.73 (SE +/- 0.16653, N = 15; -O2; MIN: 5.42)
  XPS 13 Tiger Lake Ubuntu 20.04: 3.93298 (SE +/- 0.01624, N = 3; MIN: 3.57)
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
  ThinkPad X1 Fedora Comet Lake:  11.82 (SE +/- 0.36808, N = 15; -O2; MIN: 9.18)
  XPS 13 Tiger Lake Ubuntu 20.04: 2.12307 (SE +/- 0.00736, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread (applies to both oneDNN results above)
RNNoise
RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute-long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
RNNoise 2020-06-28 (Seconds, fewer is better)
  XPS 13 Tiger Lake Ubuntu 20.04: 31.95 (SE +/- 0.01, N = 3)
1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden
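For context on the 31.95-second result: RNNoise processes audio in 480-sample (10 ms) frames at 48 kHz, so, assuming the 26-minute sample is 48 kHz mono, the run works out to roughly 156,000 frames and about 49x real-time on a single thread. A quick sanity check of that arithmetic:

```python
# Back-of-the-envelope throughput from the reported RNNoise result.
# Assumes 48 kHz mono input and RNNoise's 480-sample (10 ms) frames.
audio_seconds = 26 * 60          # length of the 26-minute sample
elapsed = 31.95                  # measured denoise time, in seconds
frames = audio_seconds * 100     # 100 frames per second of audio
realtime_factor = audio_seconds / elapsed
us_per_frame = elapsed / frames * 1e6
print(f"{frames} frames, {realtime_factor:.1f}x real-time, "
      f"{us_per_frame:.0f} us/frame")
```

Roughly 205 microseconds of compute per 10 ms frame leaves comfortable headroom for live use, which is RNNoise's intended deployment.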
Numenta Anomaly Benchmark
Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (Seconds, fewer is better)
  XPS 13 Tiger Lake Ubuntu 20.04: 27.70 (SE +/- 0.41, N = 3)
oneDNN
oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  ThinkPad X1 Fedora Comet Lake:  22.02 (SE +/- 0.02, N = 3; -O2; MIN: 21.81)
  XPS 13 Tiger Lake Ubuntu 20.04: 11.56 (SE +/- 0.13, N = 15; MIN: 10.07)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenCV
This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
OpenCV 4.4 - Test: DNN - Deep Neural Network (ms, fewer is better)
  ThinkPad X1 Fedora Comet Lake:  7968 (SE +/- 973.85, N = 12; -O2)
  XPS 13 Tiger Lake Ubuntu 20.04: 5351 (SE +/- 87.51, N = 3; -O3)
1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -ldl -lm -lpthread -lrt
oneDNN
oneDNN 2.0 (ms, fewer is better)
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
  XPS 13 Tiger Lake Ubuntu 20.04: 25.83 (SE +/- 0.10, N = 3; MIN: 25.21)
Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
  ThinkPad X1 Fedora Comet Lake:  3.95 (SE +/- 0.05749, N = 3; -O2; MIN: 3.46)
  XPS 13 Tiger Lake Ubuntu 20.04: 2.62981 (SE +/- 0.00152, N = 3; MIN: 2.56)
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
  XPS 13 Tiger Lake Ubuntu 20.04: 52.43 (SE +/- 0.02, N = 3; MIN: 52.31)
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
  XPS 13 Tiger Lake Ubuntu 20.04: 6.39979 (SE +/- 0.06267, N = 3; MIN: 5.73)
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
  ThinkPad X1 Fedora Comet Lake:  17.95 (SE +/- 0.01158, N = 3; -O2; MIN: 13.82)
  XPS 13 Tiger Lake Ubuntu 20.04: 6.59350 (SE +/- 0.01444, N = 3; MIN: 6.09)
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
  ThinkPad X1 Fedora Comet Lake:  21.14 (SE +/- 0.16335, N = 3; -O2; MIN: 20.41)
  XPS 13 Tiger Lake Ubuntu 20.04: 7.93450 (SE +/- 0.02049, N = 3; MIN: 7.85)
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
  XPS 13 Tiger Lake Ubuntu 20.04: 52.68 (SE +/- 0.07, N = 3; MIN: 52.49)
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
  ThinkPad X1 Fedora Comet Lake:  8.18 (SE +/- 0.08942, N = 3; -O2; MIN: 7.76)
  XPS 13 Tiger Lake Ubuntu 20.04: 3.13684 (SE +/- 0.00316, N = 3; MIN: 3.11)
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
  ThinkPad X1 Fedora Comet Lake:  15.85 (SE +/- 0.27, N = 3; -O2; MIN: 14.87)
  XPS 13 Tiger Lake Ubuntu 20.04: 13.41 (SE +/- 0.05, N = 3; MIN: 13.24)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread (applies to all oneDNN results above)
XPS 13 Tiger Lake Ubuntu 20.04 Processor: Intel Core i5-1135G7 @ 4.20GHz (4 Cores / 8 Threads), Motherboard: Dell 0THX8P (1.1.1 BIOS), Chipset: Intel Device a0ef, Memory: 16GB, Disk: Micron 2300 NVMe 512GB, Graphics: Intel Xe 3GB (1300MHz), Audio: Realtek ALC289, Network: Intel Device a0f0
OS: Ubuntu 20.04, Kernel: 5.6.0-1036-oem (x86_64), Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Display Driver: modesetting 1.20.8, OpenGL: 4.6 Mesa 20.0.8, Vulkan: 1.2.131, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1200
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x60 - Thermald 1.9.1
Python Notes: Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 31 December 2020 13:39 by user studio.
ThinkPad X1 Fedora Comet Lake Processor: Intel Core i7-10510U @ 4.90GHz (4 Cores / 8 Threads), Motherboard: LENOVO 20U9CTO1WW (N2WET24W 1.14 BIOS), Chipset: Intel Comet Lake PCH-LP, Memory: 2 x 8 GB LPDDR3-2133MT/s Samsung, Disk: 256GB Western Digital PC SN730 SDBQNTY-256G-1001, Graphics: Intel UHD 3GB (1150MHz), Audio: Realtek ALC285, Network: Intel + Intel Comet Lake PCH-LP CNVi WiFi
OS: Fedora 33, Kernel: 5.9.16-200.fc33.x86_64 (x86_64), Desktop: KDE Plasma 5.20.4, Display Server: X Server 1.20.10, Display Driver: modesetting 1.20.10, OpenGL: 4.6 Mesa 20.2.6, Compiler: GCC 10.2.1 20201125 + Clang 11.0.0, File-System: btrfs, Screen Resolution: 2560x1440
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-isl --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
Processor Notes: Scaling Governor: intel_pstate powersave
Python Notes: Python 3.9.1
Security Notes: SELinux + itlb_multihit: KVM: Mitigation of VMX unsupported + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Mitigation of TSX disabled + tsx_async_abort: Not affected
Testing initiated at 1 January 2021 10:14 by user root.