AMD EPYC 7F32 8-Core testing with a ASRockRack EPYCD8 (P2.40 BIOS) and ASPEED on Debian 12 via the Phoronix Test Suite.
a Kernel Notes: Transparent Huge Pages: alwaysCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301034Java Notes: OpenJDK Runtime Environment (build 17.0.9+9-Debian-1deb12u1)Python Notes: Python 3.11.2Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
b Processor: AMD EPYC 7F32 8-Core @ 3.70GHz (8 Cores / 16 Threads), Motherboard: ASRockRack EPYCD8 (P2.40 BIOS), Chipset: AMD Starship/Matisse, Memory: 7 x 4 GB DDR4-2666MT/s Micron 9ASF51272PZ-2G6B1, Disk: Samsung SSD 970 EVO Plus 250GB, Graphics: ASPEED, Network: 2 x Intel I350
OS: Debian 12, Kernel: 6.1.0-11-amd64 (x86_64), Display Server: X Server, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1024x768
rpyc jan OpenBenchmarking.org Phoronix Test Suite AMD EPYC 7F32 8-Core @ 3.70GHz (8 Cores / 16 Threads) ASRockRack EPYCD8 (P2.40 BIOS) AMD Starship/Matisse 7 x 4 GB DDR4-2666MT/s Micron 9ASF51272PZ-2G6B1 Samsung SSD 970 EVO Plus 250GB ASPEED 2 x Intel I350 Debian 12 6.1.0-11-amd64 (x86_64) X Server GCC 12.2.0 ext4 1024x768 Processor Motherboard Chipset Memory Disk Graphics Network OS Kernel Display Server Compiler File-System Screen Resolution Rpyc Jan Benchmarks System Logs - Transparent Huge Pages: always - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301034 - OpenJDK Runtime Environment (build 17.0.9+9-Debian-1deb12u1) - Python 3.11.2 - gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
a vs. b Comparison Phoronix Test Suite Baseline +6.5% +6.5% +13% +13% +19.5% +19.5% 25.9% 22.9% 20.6% 17.5% 12.7% 12.2% 11.9% 11.9% 11.4% 11.3% 11% 10.7% 10.5% 10.3% 10.3% 9.2% 9.1% 7.4% 7% 7% 6.9% 6.7% 6.7% 6.4% 6.3% 6.3% 6.2% 6.2% 6.2% 6.2% 6.1% 5.8% 5.6% 5.5% 5.4% 5.3% 5% 4.9% 4.9% 4.8% 4.7% 4.6% 4.5% 4.5% 4.4% 4.4% 4.4% 4.3% 4% 4% 3.9% 3.8% 3.6% 3.4% 3.4% 3.2% 3.2% 3.2% 3.1% 3.1% 3% 2.9% 2.9% 2.9% 2.6% 2.4% 2.4% 2.4% 2.4% 2.4% 2.4% 2.3% 2.3% 2.3% 2.1% 2.1% 2.1% 2.1% 2% 2% 2% 2% 2% 10 - Q90 llama-2-13b.Q4_0.gguf CPU - 1 - AlexNet 22.8% CPU - 1 - VGG-16 21.6% 10 - Q33 1 - Q39 20.2% 10 - Q46 17.8% 1 - Q16 17.8% 10 - Q48 1 - Q33 15.6% CPU - 16 - AlexNet 15.4% CPU - 16 - ResNet-50 14.7% 10 - Q38 1 - Q79 12.4% 10 - Q65 1 - Q73 llama-2-7b.Q4_0.gguf 1 - Q40 11.8% 1 - Q65 11.5% 10 - Q97 10 - Q26 10 - Q99 10 - Q27 10.9% CPU - 16 - VGG-16 10.9% 1 - Q75 1 - Q24 10.6% 1 - Q82 CPU - 16 - ResNet-152 10 - Q31 10 - Q03 9.9% 10 - Q73 9.7% 10 - Q36 9.5% 1 - Q45 9.4% 1 - Q56 10 - Q76 1 - Q88 9.1% 1 - Q18 9% 10 - Q02 8.9% 10 - Q92 8.9% 10 - Q42 8.1% 1 - Q53 8% 10 - Q55 7.7% 1 - Q42 7.6% 10 - Q79 Read While Writing 7.3% CPU - 16 - GoogLeNet 7.2% 1 - Q55 7.2% 1 - Q28 10 - Q14 CPU - 16 - Efficientnet_v2_l 10 - Q63 6.8% 10 - Q77 6.8% 10 - Q06 6.8% 1 - Q09 10 - Q60 10 - Q68 6.5% 10 - Q09 6.4% 10 - Q13 10 - Q32 6.3% 1 - Q63 10 - Q96 1 - Q60 10 - Q34 1 - Q78 CPU - 1 - ResNet-152 10 - Q57 1 - Q99 6% 10 - Q88 10 - Q15 5.6% 10 - Q40 1 - Q08 1 - Q80 1 - Q04 5.4% 1 - Q98 1 - Q31 5.1% CPU - 1 - ResNet-50 10 - Q44 1 - Q52 4.9% 10 - Q85 1 - Q51 4.8% CPU - 1 - GoogLeNet 4.8% Preset 12 - Bosphorus 1080p Preset 13 - Bosphorus 1080p 1 - Q25 1 - Q69 4.6% 10 - Q21 4.6% 10 - Q01 10 - Q98 1 - Q87 4.4% 10 - Q89 10 - Q87 10 - Q28 1 - Q76 1 - Q06 4.3% 1 - Q26 4.3% 1 - Q21 4.3% 10 - Q69 4.2% 1 - Q13 4.1% 10 - Q56 4% 10 - Q67 4% 10 - Q93 10 - Q11 10 - Q10 1 - Q59 3.9% 10 - Q66 3.8% 10 - Q53 3.8% 10 - Q95 10 - Q37 3.6% 1 - Q94 3.6% 10 - Q07 3.6% 10 - Q51 1 - Q74 3.4% R.R.W.R 3.4% CPU - 1 - Efficientnet_v2_l 10 - Q86 10 - Q82 3.3% Seq Fill 3.2% 10 - Q47 10 - Q41 1 - Q44 Preset 13 - Bosphorus 4K 10 - Q49 1 - Q86 3% 1 - Q66 CPU - 16 - ResNet-50 2.9% 1 - Q17 1 - Q36 2.9% 10 - Q54 2.9% 10 - Q71 1 - Q85 1 - Q58 2.9% 1 - Q96 2.8% 1 - Q50 2.8% 1 - Q48 2.8% 10 - Q18 2.8% 1 - Q68 2.7% 1 - Q54 2.7% 1 - Q64 2.6% Rand Fill 10 - Q39 2.5% 1 - Q05 2.5% 10 - Q62 1 - Q07 2.4% Eigen 10 - Q35 1 - Q90 10 - Q23 10 - Q45 10 - Q52 2.4% 1 - Q20 1 - Q97 2.3% BLAS 10 - Q83 10 - Q75 2.3% 1 - Q46 2.2% 10 - Q22 2.2% 10 - Q19 10 - Q59 1 - Q38 Preset 12 - Bosphorus 4K 1 - Q27 2% N.T.C.B.b.u.S.S.I - A.M.S 1 - Q29 N.T.C.B.b.u.S.S.I - A.M.S 1 - Q35 1 - Q47 Apache Spark TPC-DS Llama.cpp TensorFlow TensorFlow Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS TensorFlow TensorFlow Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Llama.cpp Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS TensorFlow Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS PyTorch Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Speedb TensorFlow Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS PyTorch Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS PyTorch Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS PyTorch Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS TensorFlow SVT-AV1 SVT-AV1 Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Speedb PyTorch Apache Spark TPC-DS Apache Spark TPC-DS Speedb Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS SVT-AV1 Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS PyTorch Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Speedb Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS LeelaChessZero Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS LeelaChessZero Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS Apache Spark TPC-DS SVT-AV1 Apache Spark TPC-DS Neural Magic DeepSparse Apache Spark TPC-DS Neural Magic DeepSparse Apache Spark TPC-DS Apache Spark TPC-DS a b
rpyc jan spark-tpcds: 10 - Q99 spark-tpcds: 10 - Q98 spark-tpcds: 10 - Q97 spark-tpcds: 10 - Q96 spark-tpcds: 10 - Q95 spark-tpcds: 10 - Q94 spark-tpcds: 10 - Q93 spark-tpcds: 10 - Q92 spark-tpcds: 10 - Q91 spark-tpcds: 10 - Q90 spark-tpcds: 10 - Q89 spark-tpcds: 10 - Q88 spark-tpcds: 10 - Q87 spark-tpcds: 10 - Q86 spark-tpcds: 10 - Q85 spark-tpcds: 10 - Q84 spark-tpcds: 10 - Q83 spark-tpcds: 10 - Q82 spark-tpcds: 10 - Q81 spark-tpcds: 10 - Q80 spark-tpcds: 10 - Q79 spark-tpcds: 10 - Q78 spark-tpcds: 10 - Q77 spark-tpcds: 10 - Q76 spark-tpcds: 10 - Q75 spark-tpcds: 10 - Q74 spark-tpcds: 10 - Q73 spark-tpcds: 10 - Q72 spark-tpcds: 10 - Q71 spark-tpcds: 10 - Q70 spark-tpcds: 10 - Q69 spark-tpcds: 10 - Q68 spark-tpcds: 10 - Q67 spark-tpcds: 10 - Q66 spark-tpcds: 10 - Q65 spark-tpcds: 10 - Q64 spark-tpcds: 10 - Q63 spark-tpcds: 10 - Q62 spark-tpcds: 10 - Q61 spark-tpcds: 10 - Q60 spark-tpcds: 10 - Q59 spark-tpcds: 10 - Q58 spark-tpcds: 10 - Q57 spark-tpcds: 10 - Q56 spark-tpcds: 10 - Q55 spark-tpcds: 10 - Q54 spark-tpcds: 10 - Q53 spark-tpcds: 10 - Q52 spark-tpcds: 10 - Q51 spark-tpcds: 10 - Q50 spark-tpcds: 10 - Q49 spark-tpcds: 10 - Q48 spark-tpcds: 10 - Q47 spark-tpcds: 10 - Q46 spark-tpcds: 10 - Q45 spark-tpcds: 10 - Q44 spark-tpcds: 10 - Q43 spark-tpcds: 10 - Q42 spark-tpcds: 10 - Q41 spark-tpcds: 10 - Q40 spark-tpcds: 10 - Q39 spark-tpcds: 10 - Q38 spark-tpcds: 10 - Q37 spark-tpcds: 10 - Q36 spark-tpcds: 10 - Q35 spark-tpcds: 10 - Q34 spark-tpcds: 10 - Q33 spark-tpcds: 10 - Q32 spark-tpcds: 10 - Q31 spark-tpcds: 10 - Q30 spark-tpcds: 10 - Q29 spark-tpcds: 10 - Q28 spark-tpcds: 10 - Q27 spark-tpcds: 10 - Q26 spark-tpcds: 10 - Q25 spark-tpcds: 10 - Q24 spark-tpcds: 10 - Q23 spark-tpcds: 10 - Q22 spark-tpcds: 10 - Q21 spark-tpcds: 10 - Q20 spark-tpcds: 10 - Q19 spark-tpcds: 10 - Q18 spark-tpcds: 10 - Q17 spark-tpcds: 10 - Q15 spark-tpcds: 10 - Q14 spark-tpcds: 10 - Q13 spark-tpcds: 10 - Q12 spark-tpcds: 10 - Q11 spark-tpcds: 10 - Q10 spark-tpcds: 10 - Q09 spark-tpcds: 10 - Q08 spark-tpcds: 10 - Q07 spark-tpcds: 10 - Q06 spark-tpcds: 10 - Q05 spark-tpcds: 10 - Q04 spark-tpcds: 10 - Q03 spark-tpcds: 10 - Q02 spark-tpcds: 10 - Q01 spark-tpcds: 1 - Q99 spark-tpcds: 1 - Q98 spark-tpcds: 1 - Q97 spark-tpcds: 1 - Q96 spark-tpcds: 1 - Q95 spark-tpcds: 1 - Q94 spark-tpcds: 1 - Q93 spark-tpcds: 1 - Q92 spark-tpcds: 1 - Q91 spark-tpcds: 1 - Q90 spark-tpcds: 1 - Q89 spark-tpcds: 1 - Q88 spark-tpcds: 1 - Q87 spark-tpcds: 1 - Q86 spark-tpcds: 1 - Q85 spark-tpcds: 1 - Q84 spark-tpcds: 1 - Q83 spark-tpcds: 1 - Q82 spark-tpcds: 1 - Q81 spark-tpcds: 1 - Q80 spark-tpcds: 1 - Q79 spark-tpcds: 1 - Q78 spark-tpcds: 1 - Q77 spark-tpcds: 1 - Q76 spark-tpcds: 1 - Q75 spark-tpcds: 1 - Q74 spark-tpcds: 1 - Q73 spark-tpcds: 1 - Q72 spark-tpcds: 1 - Q71 spark-tpcds: 1 - Q70 spark-tpcds: 1 - Q69 spark-tpcds: 1 - Q68 spark-tpcds: 1 - Q67 spark-tpcds: 1 - Q66 spark-tpcds: 1 - Q65 spark-tpcds: 1 - Q64 spark-tpcds: 1 - Q63 spark-tpcds: 1 - Q62 spark-tpcds: 1 - Q61 spark-tpcds: 1 - Q60 spark-tpcds: 1 - Q59 spark-tpcds: 1 - Q58 spark-tpcds: 1 - Q57 spark-tpcds: 1 - Q56 spark-tpcds: 1 - Q55 spark-tpcds: 1 - Q54 spark-tpcds: 1 - Q53 spark-tpcds: 1 - Q52 spark-tpcds: 1 - Q51 spark-tpcds: 1 - Q50 spark-tpcds: 1 - Q49 spark-tpcds: 1 - Q48 spark-tpcds: 1 - Q47 spark-tpcds: 1 - Q46 spark-tpcds: 1 - Q45 spark-tpcds: 1 - Q44 spark-tpcds: 1 - Q43 spark-tpcds: 1 - Q42 spark-tpcds: 1 - Q41 spark-tpcds: 1 - Q40 spark-tpcds: 1 - Q39 spark-tpcds: 1 - Q38 spark-tpcds: 1 - Q37 spark-tpcds: 1 - Q36 spark-tpcds: 1 - Q35 spark-tpcds: 1 - Q34 spark-tpcds: 1 - Q33 spark-tpcds: 1 - Q32 spark-tpcds: 1 - Q31 spark-tpcds: 1 - Q30 spark-tpcds: 1 - Q29 spark-tpcds: 1 - Q28 spark-tpcds: 1 - Q27 spark-tpcds: 1 - Q26 spark-tpcds: 1 - Q25 spark-tpcds: 1 - Q24 spark-tpcds: 1 - Q23 spark-tpcds: 1 - Q22 spark-tpcds: 1 - Q21 spark-tpcds: 1 - Q20 spark-tpcds: 1 - Q19 spark-tpcds: 1 - Q18 spark-tpcds: 1 - Q17 spark-tpcds: 1 - Q16 spark-tpcds: 1 - Q15 spark-tpcds: 1 - Q14 spark-tpcds: 1 - Q13 spark-tpcds: 1 - Q12 spark-tpcds: 1 - Q11 spark-tpcds: 1 - Q10 spark-tpcds: 1 - Q09 spark-tpcds: 1 - Q08 spark-tpcds: 1 - Q07 spark-tpcds: 1 - Q06 spark-tpcds: 1 - Q05 spark-tpcds: 1 - Q04 spark-tpcds: 1 - Q03 spark-tpcds: 1 - Q02 spark-tpcds: 1 - Q01 quicksilver: CTS2 quicksilver: CORAL2 P2 pytorch: CPU - 16 - Efficientnet_v2_l tensorflow: CPU - 16 - VGG-16 lczero: BLAS lczero: Eigen pytorch: CPU - 16 - ResNet-152 pytorch: CPU - 1 - Efficientnet_v2_l tensorflow: CPU - 16 - ResNet-50 quicksilver: CORAL2 P1 cachebench: Read / Modify / Write cachebench: Write cachebench: Read pytorch: CPU - 1 - ResNet-152 pytorch: CPU - 16 - ResNet-50 llama-cpp: llama-2-13b.Q4_0.gguf deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream speedb: Rand Fill Sync speedb: Update Rand speedb: Rand Fill speedb: Read Rand Write Rand speedb: Read While Writing speedb: Rand Read speedb: Seq Fill svt-av1: Preset 4 - Bosphorus 4K tensorflow: CPU - 16 - GoogLeNet tensorflow: CPU - 1 - VGG-16 y-cruncher: 1B deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream llama-cpp: llama-2-7b.Q4_0.gguf deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream pytorch: CPU - 1 - ResNet-50 deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream tensorflow: CPU - 16 - AlexNet svt-av1: Preset 8 - Bosphorus 4K y-cruncher: 500M tensorflow: CPU - 1 - ResNet-50 svt-av1: Preset 4 - Bosphorus 1080p tensorflow: CPU - 1 - AlexNet tensorflow: CPU - 1 - GoogLeNet svt-av1: Preset 8 - Bosphorus 1080p svt-av1: Preset 12 - Bosphorus 4K svt-av1: Preset 13 - Bosphorus 4K svt-av1: Preset 12 - Bosphorus 1080p svt-av1: Preset 13 - Bosphorus 1080p a b 4.43 2.34 8.04 3.23 4.68 4.49 11.76 4.16 4.94 4.03 4.99 8.63 8.37 4 8.18 5.16 5.39 6.45 5.63 17.99 5.38 19.55 7.08 6.1 13.69 9.42 5.49 39.02 5.73 5.29 6.63 5.89 12.52 5.99 8 21.11 4.85 4.2 2.96 7.47 5.86 6.22 7.71 5.99 3.78 7.25 4.21 4.22 11.32 9.77 8.4 4.91 8.67 5.34 5.6 2.78 4.26 3.85 3.23 8.34 3.18 8.95 5.8 4.93 7.68 6.49 7.13 4.28 10.19 5.8 14.25 8.38 5.85 5.63 14.15 6.34 15.49 11.93 5.25 2.26 5.85 6.46 14.2 5.2 8.92 5.85 2.36 14.91 7.22 4.97 5.6 5.59 7.98 9.47 19.18 3.93 5.03 5.6 3.66 2.2 5.18 2.81 3.6 2.78 3.75 3.9 3.52 3.87 4.05 6.61 4.99 3.7 5.73 3.32 3.94 5.05 3.72 7.42 3.95 7.69 5.13 4.57 7.88 5.8 4.31 8.48 4.19 4.6 4.78 4.4 5.79 5.89 4.62 7.63 3.71 3.85 2.77 5.28 4.41 4.89 5.04 5.09 2.91 5.15 3.5 3.28 6.63 4.25 5.36 4.29 6.26 4.12 3.83 4.2 3.34 3.01 2.71 3.98 1.19 4.94 2.5 4.13 5.13 4.29 4.74 3.82 6.63 3.81 6.1 6.76 4.42 4.18 6.09 0.94 4.11 4.27 3.75 2.2 4.19 5.66 6.36 2.59 4.28 4.4 4.66 2.16 6.58 4.96 4.29 4.39 4.52 4.16 5.7 7.47 3.17 4.52 4.1 5986000 7679000 4.19 4.69 130 83 7.44 5.91 11.52 7364000 57602.719674 55386.26412 9891.085916 10.83 20.05 5.16 434.3769 9.1492 3082 215940 233324 1011750 2257890 34248146 275849 2.949 35.56 2.59 47.648 32.0561 124.668 71.2145 56.1324 512.1858 7.7676 11.32 505.1583 7.866 480.2254 8.304 26.58 67.6345 59.0697 52.7328 75.7554 9.9773 399.7162 100.7805 39.6374 99.5285 40.1489 52.245 76.4568 59.35 23.542 22.303 6.38 9.956 8.88 9.89 63.991 78.31 78.877 255.926 316.493 3.99 2.24 7.22 3.04 4.51 4.48 11.31 4.53 4.93 3.2 4.78 8.16 8.02 3.87 7.8 5.09 5.27 6.66 5.65 17.99 5.01 19.54 7.56 5.59 14 9.48 6.02 39.13 5.57 5.21 6.91 6.27 13.02 6.22 7.13 20.79 5.18 4.1 3.01 7 5.74 6.25 7.27 6.23 4.07 7.46 4.37 4.32 10.93 9.78 8.15 4.18 8.4 6.29 5.47 2.65 4.28 4.16 3.13 7.9 3.26 7.94 6.01 5.4 7.5 6.11 5.91 4.55 9.24 5.81 14.43 8.03 6.49 5.06 14.41 6.26 15.13 12.19 5.49 2.26 5.73 6.64 14.18 5.49 8.34 5.5 2.36 14.34 6.95 5.29 5.62 5.79 8.52 9.43 19.24 4.32 5.48 5.36 3.88 2.09 5.3 2.89 3.67 2.88 3.73 3.92 3.57 3.78 3.98 7.21 5.21 3.81 5.57 3.33 4.01 4.57 3.78 7.04 4.44 7.24 5.1 4.38 7.12 6 3.85 8.35 4.18 4.56 5 4.52 5.73 5.72 5.15 7.83 3.49 3.86 2.82 4.97 4.58 5.03 5.07 4.66 3.12 5.29 3.78 3.44 6.95 4.37 5.45 4.41 6.14 4.21 4.19 4.07 3.39 3.24 2.74 4.45 1.43 4.84 2.53 4.25 5.03 4.21 5.48 3.85 6.97 3.81 5.98 6.32 4.51 4.36 5.82 1.04 4.14 4.23 3.91 2.15 4.21 6.17 6.18 3.05 4.35 4.38 4.85 2.17 6.46 4.99 4.02 4.16 4.63 4.34 5.84 7.87 3.17 4.6 4.03 5982000 7662000 4.48 4.23 133 85 8.21 6.11 10.04 7308000 57618.902797 55484.050343 9892.738544 11.50 19.48 6.34 433.0207 9.2258 3026 216433 239360 978572 2104380 34244004 267223 2.931 33.16 2.13 47.2 31.4254 127.168 70.5121 56.6743 515.2201 7.7424 12.67 509.3372 7.8507 478.9249 8.3172 27.92 67.8367 58.897 53.6692 74.4881 10.0658 396.2172 101.3381 39.4407 99.7263 40.0737 52.0753 76.7658 51.44 23.345 22.048 6.47 9.768 7.23 9.44 63.704 79.927 81.313 268.084 331.495 OpenBenchmarking.org
Apache Spark TPC-DS This is a benchmark of Apache Spark using the TPC-DS data-set. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks the Apache Spark in a single-system configuration and leverages the https://github.com/databricks/tpcds-kit and https://github.com/IBM/spark-tpc-ds-performance-test/ projects for testing. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-DS 3.5 Scale Factor: 10 - Q99 b a 0.9968 1.9936 2.9904 3.9872 4.984 3.99 4.43 1. (CC) gcc options: -O3 -fcommon -lm
Quicksilver Quicksilver is a proxy application that represents some elements of the Mercury workload by solving a simplified dynamic Monte Carlo particle transport problem. Quicksilver is developed by Lawrence Livermore National Laboratory (LLNL) and this test profile currently makes use of the OpenMP CPU threaded code path. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Figure Of Merit, More Is Better Quicksilver 20230818 Input: CTS2 b a 1.3M 2.6M 3.9M 5.2M 6.5M 5982000 5986000 1. (CXX) g++ options: -fopenmp -O3 -march=native
PyTorch This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Currently this test profile is catered to CPU-based testing. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l b a 1.008 2.016 3.024 4.032 5.04 4.48 4.19 MIN: 3.47 / MAX: 4.56 MIN: 3.69 / MAX: 4.34
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 16 - Model: VGG-16 b a 1.0553 2.1106 3.1659 4.2212 5.2765 4.23 4.69
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l b a 2 4 6 8 10 6.11 5.91 MIN: 5.41 / MAX: 6.29 MIN: 5.31 / MAX: 6
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 16 - Model: ResNet-50 b a 3 6 9 12 15 10.04 11.52
Quicksilver Quicksilver is a proxy application that represents some elements of the Mercury workload by solving a simplified dynamic Monte Carlo particle transport problem. Quicksilver is developed by Lawrence Livermore National Laboratory (LLNL) and this test profile currently makes use of the OpenMP CPU threaded code path. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Figure Of Merit, More Is Better Quicksilver 20230818 Input: CORAL2 P1 b a 1.6M 3.2M 4.8M 6.4M 8M 7308000 7364000 1. (CXX) g++ options: -fopenmp -O3 -march=native
CacheBench This is a performance test of CacheBench, which is part of LLCbench. CacheBench is designed to test the memory and cache bandwidth performance Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MB/s, More Is Better CacheBench Test: Read / Modify / Write b a 12K 24K 36K 48K 60K 57618.90 57602.72 MIN: 54501.37 / MAX: 59219.51 MIN: 54488.08 / MAX: 59203.02 1. (CC) gcc options: -O3 -lrt
OpenBenchmarking.org MB/s, More Is Better CacheBench Test: Write b a 12K 24K 36K 48K 60K 55484.05 55386.26 MIN: 43456.57 / MAX: 59035.76 MIN: 42949.81 / MAX: 59033.61 1. (CC) gcc options: -O3 -lrt
OpenBenchmarking.org MB/s, More Is Better CacheBench Test: Read b a 2K 4K 6K 8K 10K 9892.74 9891.09 MIN: 9875.91 / MAX: 9897.99 MIN: 9838 / MAX: 9897.29 1. (CC) gcc options: -O3 -lrt
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 16 - Model: ResNet-50 b a 5 10 15 20 25 19.48 20.05 MIN: 18.54 / MAX: 19.9 MIN: 15.96 / MAX: 20.76
Llama.cpp Llama.cpp is a port of Facebook's LLaMA model in C/C++ developed by Georgi Gerganov. Llama.cpp allows the inference of LLaMA and other supported models in C/C++. For CPU inference Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b1808 Model: llama-2-13b.Q4_0.gguf b a 2 4 6 8 10 6.34 5.16 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas
OpenBenchmarking.org Op/s, More Is Better Speedb 2.7 Test: Update Random b a 50K 100K 150K 200K 250K 216433 215940 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Speedb 2.7 Test: Random Fill b a 50K 100K 150K 200K 250K 239360 233324 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Speedb 2.7 Test: Read Random Write Random b a 200K 400K 600K 800K 1000K 978572 1011750 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Speedb 2.7 Test: Read While Writing b a 500K 1000K 1500K 2000K 2500K 2104380 2257890 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Speedb 2.7 Test: Random Read b a 7M 14M 21M 28M 35M 34244004 34248146 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Speedb 2.7 Test: Sequential Fill b a 60K 120K 180K 240K 300K 267223 275849 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 4 - Input: Bosphorus 4K b a 0.6635 1.327 1.9905 2.654 3.3175 2.931 2.949 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 16 - Model: GoogLeNet b a 8 16 24 32 40 33.16 35.56
Llama.cpp Llama.cpp is a port of Facebook's LLaMA model in C/C++ developed by Georgi Gerganov. Llama.cpp allows the inference of LLaMA and other supported models in C/C++. For CPU inference Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Tokens Per Second, More Is Better Llama.cpp b1808 Model: llama-2-7b.Q4_0.gguf b a 3 6 9 12 15 12.67 11.32 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas
PyTorch This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Currently this test profile is catered to CPU-based testing. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 1 - Model: ResNet-50 b a 7 14 21 28 35 27.92 26.58 MIN: 22.12 / MAX: 29.19 MIN: 25.18 / MAX: 27.28
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 16 - Model: AlexNet b a 13 26 39 52 65 51.44 59.35
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 8 - Input: Bosphorus 4K b a 6 12 18 24 30 23.35 23.54 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 1 - Model: ResNet-50 b a 2 4 6 8 10 6.47 6.38
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 4 - Input: Bosphorus 1080p b a 3 6 9 12 15 9.768 9.956 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 1 - Model: AlexNet b a 2 4 6 8 10 7.23 8.88
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 8 - Input: Bosphorus 1080p b a 14 28 42 56 70 63.70 63.99 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 12 - Input: Bosphorus 4K b a 20 40 60 80 100 79.93 78.31 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 13 - Input: Bosphorus 4K b a 20 40 60 80 100 81.31 78.88 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 12 - Input: Bosphorus 1080p b a 60 120 180 240 300 268.08 255.93 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 13 - Input: Bosphorus 1080p b a 70 140 210 280 350 331.50 316.49 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
a Kernel Notes: Transparent Huge Pages: alwaysCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301034Java Notes: OpenJDK Runtime Environment (build 17.0.9+9-Debian-1deb12u1)Python Notes: Python 3.11.2Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 14 January 2024 19:17 by user phoronix.
b Processor: AMD EPYC 7F32 8-Core @ 3.70GHz (8 Cores / 16 Threads), Motherboard: ASRockRack EPYCD8 (P2.40 BIOS), Chipset: AMD Starship/Matisse, Memory: 7 x 4 GB DDR4-2666MT/s Micron 9ASF51272PZ-2G6B1, Disk: Samsung SSD 970 EVO Plus 250GB, Graphics: ASPEED, Network: 2 x Intel I350
OS: Debian 12, Kernel: 6.1.0-11-amd64 (x86_64), Display Server: X Server, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1024x768
Kernel Notes: Transparent Huge Pages: alwaysCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301034Java Notes: OpenJDK Runtime Environment (build 17.0.9+9-Debian-1deb12u1)Python Notes: Python 3.11.2Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 14 January 2024 23:02 by user phoronix.