Amazon AWS Graviton3 benchmarks by Michael Larabel.
a1.4xlarge Graviton Processor: ARMv8 Cortex-A72 (16 Cores), Motherboard: Amazon EC2 a1.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Not affected + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Branch predictor hardening BHB + srbds: Not affected + tsx_async_abort: Not affected
c6g.4xlarge Graviton2: Changed Processor to ARMv8 Neoverse-N1 (16 Cores).
Changed Motherboard to Amazon EC2 c6g.4xlarge (1.0 BIOS).
Security Change: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
c7g.4xlarge Graviton3: Changed Processor to ARMv8 Neoverse-V1 (16 Cores).
Changed Motherboard to Amazon EC2 c7g.4xlarge (1.0 BIOS).
c6a.4xlarge EPYC Processor: AMD EPYC 7R13 (8 Cores / 16 Threads), Motherboard: Amazon EC2 c6a.4xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (x86_64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: CPU Microcode: 0xa001144
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
c6i.4xlarge Xeon: Changed Processor to Intel Xeon Platinum 8375C (8 Cores / 16 Threads).
Changed Motherboard to Amazon EC2 c6i.4xlarge (1.0 BIOS).
Processor Change: CPU Microcode: 0xd000331
Security Change: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
[Chart: Logarithmic Result Overview (Phoronix Test Suite) comparing a1.4xlarge Graviton, c6g.4xlarge Graviton2, c7g.4xlarge Graviton3, c6a.4xlarge EPYC, and c6i.4xlarge Xeon across the full suite, spanning HPC workloads (HPCG, NAS Parallel Benchmarks, AMG, LULESH, GROMACS, LAMMPS), machine learning (ONNX Runtime, TensorFlow Lite, LeelaChessZero), compilation tests, compression, web servers, and more.]
[Table: complete side-by-side Amazon EC2 Graviton3 benchmark comparison results for all five instance types; the individual results are charted below.]
High Performance Conjugate Gradient HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
High Performance Conjugate Gradient 3.1 - GFLOP/s, more is better
  a1.4xlarge Graviton:    3.77834 (SE +/- 0.00065, N = 3)
  c6a.4xlarge EPYC:       5.06042 (SE +/- 0.00225, N = 3)
  c6g.4xlarge Graviton2: 19.72180 (SE +/- 0.01639, N = 3)
  c6i.4xlarge Xeon:       8.66031 (SE +/- 0.04033, N = 3)
  c7g.4xlarge Graviton3: 26.30580 (SE +/- 0.03738, N = 3)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi
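The conjugate gradient iteration at the core of HPCG can be sketched in a few lines. This is a minimal, unpreconditioned sketch on a tiny dense system for illustration only; the actual benchmark runs a preconditioned CG on a large sparse 3D problem with MPI/OpenMP parallelism.

```python
# Minimal unpreconditioned conjugate gradient sketch (illustrative only;
# HPCG itself runs preconditioned CG on a large sparse 3D problem).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]              # residual r = b - A*0 = b
    p = r[:]              # initial search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)          # step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:
            break
        # new direction is residual plus a multiple of the old direction
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Small symmetric positive-definite test system; exact solution [1/11, 7/11]
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

For an n-dimensional SPD system, CG converges in at most n iterations in exact arithmetic, which is why it is a standard building block for large sparse solvers.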
NAS Parallel Benchmarks NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a choice among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 - Total Mop/s, more is better

Test / Class: BT.C
  a1.4xlarge Graviton:    3148.18 (SE +/- 3.44, N = 3)
  c6a.4xlarge EPYC:      13134.46 (SE +/- 98.45, N = 3)
  c6g.4xlarge Graviton2:  6449.11 (SE +/- 3.20, N = 3)
  c6i.4xlarge Xeon:      13888.40 (SE +/- 22.04, N = 3)
  c7g.4xlarge Graviton3: 10339.53 (SE +/- 7.36, N = 3)

Test / Class: CG.C
  a1.4xlarge Graviton:   1213.15 (SE +/- 11.79, N = 6)
  c6a.4xlarge EPYC:      6169.22 (SE +/- 81.25, N = 3)
  c6g.4xlarge Graviton2: 3520.86 (SE +/- 9.95, N = 3)
  c6i.4xlarge Xeon:      9522.82 (SE +/- 66.44, N = 3)
  c7g.4xlarge Graviton3: 6571.95 (SE +/- 17.12, N = 3)

Test / Class: EP.D
  a1.4xlarge Graviton:    339.20 (SE +/- 0.24, N = 3)
  c6a.4xlarge EPYC:       466.21 (SE +/- 0.06, N = 3)
  c6g.4xlarge Graviton2:  558.88 (SE +/- 0.23, N = 3)
  c6i.4xlarge Xeon:      1103.22 (SE +/- 19.93, N = 9)
  c7g.4xlarge Graviton3:  934.72 (SE +/- 0.39, N = 3)

Test / Class: FT.C
  a1.4xlarge Graviton:    2927.16 (SE +/- 1.73, N = 3)
  c6a.4xlarge EPYC:      18299.96 (SE +/- 45.90, N = 3)
  c6g.4xlarge Graviton2:  6244.48 (SE +/- 1.10, N = 3)
  c6i.4xlarge Xeon:      20423.57 (SE +/- 40.24, N = 3)
  c7g.4xlarge Graviton3: 11791.77 (SE +/- 1.17, N = 3)

Test / Class: IS.D
  a1.4xlarge Graviton:    197.57 (SE +/- 0.31, N = 3)
  c6a.4xlarge EPYC:       541.35 (SE +/- 0.47, N = 3)
  c6g.4xlarge Graviton2:  372.76 (SE +/- 0.20, N = 3)
  c6i.4xlarge Xeon:       861.57 (SE +/- 2.14, N = 3)
  c7g.4xlarge Graviton3: 1041.90 (SE +/- 2.29, N = 3)

Test / Class: LU.C
  a1.4xlarge Graviton:    2558.12 (SE +/- 0.15, N = 3)
  c6a.4xlarge EPYC:      25140.55 (SE +/- 18.06, N = 3)
  c6g.4xlarge Graviton2:  5133.89 (SE +/- 0.90, N = 3)
  c6i.4xlarge Xeon:      38136.77 (SE +/- 160.86, N = 3)
  c7g.4xlarge Graviton3:  7730.41 (SE +/- 1.96, N = 3)

Test / Class: MG.C
  a1.4xlarge Graviton:    3266.36 (SE +/- 1.64, N = 3)
  c6a.4xlarge EPYC:      16826.43 (SE +/- 30.62, N = 3)
  c6g.4xlarge Graviton2:  6720.68 (SE +/- 1.39, N = 3)
  c6i.4xlarge Xeon:      26298.81 (SE +/- 184.24, N = 3)
  c7g.4xlarge Graviton3: 13481.61 (SE +/- 4.69, N = 3)

Test / Class: SP.C
  a1.4xlarge Graviton:   1293.80 (SE +/- 2.51, N = 3)
  c6a.4xlarge EPYC:      8094.79 (SE +/- 24.63, N = 3)
  c6g.4xlarge Graviton2: 2356.16 (SE +/- 0.57, N = 3)
  c6i.4xlarge Xeon:      9563.22 (SE +/- 73.65, N = 3)
  c7g.4xlarge Graviton3: 4467.19 (SE +/- 9.61, N = 3)

All NPB runs: 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
LeelaChessZero LeelaChessZero (lc0 / lczero) is a chess engine powered by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
LeelaChessZero 0.28 - Nodes Per Second, more is better

Backend: BLAS
  a1.4xlarge Graviton:    135 (SE +/- 0.88, N = 3)
  c6a.4xlarge EPYC:      1091 (SE +/- 12.82, N = 9)
  c6g.4xlarge Graviton2:  864 (SE +/- 10.22, N = 4)
  c6i.4xlarge Xeon:      1397 (SE +/- 12.41, N = 9)
  c7g.4xlarge Graviton3: 1103 (SE +/- 6.44, N = 3)

Backend: Eigen
  a1.4xlarge Graviton:    128 (SE +/- 0.67, N = 3)
  c6a.4xlarge EPYC:      1001 (SE +/- 11.74, N = 9)
  c6g.4xlarge Graviton2:  834 (SE +/- 12.00, N = 3)
  c6i.4xlarge Xeon:      1466 (SE +/- 13.37, N = 3)
  c7g.4xlarge Graviton3: 1189 (SE +/- 9.70, N = 3)

1. (CXX) g++ options: -flto -pthread
Rodinia Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
Rodinia 3.1 - Seconds, fewer is better

Test: OpenMP LavaMD
  a1.4xlarge Graviton:   360.30 (SE +/- 0.07, N = 3)
  c6a.4xlarge EPYC:      224.33 (SE +/- 0.03, N = 3)
  c6g.4xlarge Graviton2: 215.67 (SE +/- 0.01, N = 3)
  c6i.4xlarge Xeon:      281.39 (SE +/- 0.14, N = 3)
  c7g.4xlarge Graviton3: 143.33 (SE +/- 0.15, N = 3)

Test: OpenMP CFD Solver
  a1.4xlarge Graviton:   41.45 (SE +/- 0.08, N = 3)
  c6a.4xlarge EPYC:      21.79 (SE +/- 0.08, N = 3)
  c6g.4xlarge Graviton2: 17.04 (SE +/- 0.05, N = 3)
  c6i.4xlarge Xeon:      20.45 (SE +/- 0.02, N = 3)
  c7g.4xlarge Graviton3: 10.48 (SE +/- 0.02, N = 3)

Test: OpenMP Streamcluster
  a1.4xlarge Graviton:   47.43 (SE +/- 0.02, N = 3)
  c6a.4xlarge EPYC:      18.38 (SE +/- 0.05, N = 3)
  c6g.4xlarge Graviton2: 15.48 (SE +/- 0.26, N = 15)
  c6i.4xlarge Xeon:      23.51 (SE +/- 0.07, N = 3)
  c7g.4xlarge Graviton3: 13.30 (SE +/- 0.33, N = 12)

1. (CXX) g++ options: -O2 -lOpenCL
Algebraic Multi-Grid Benchmark AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.
Algebraic Multi-Grid Benchmark 1.2 - Figure Of Merit, more is better
  a1.4xlarge Graviton:    186716933 (SE +/- 176548.39, N = 3)
  c6a.4xlarge EPYC:       267670700 (SE +/- 103921.81, N = 3)
  c6g.4xlarge Graviton2:  932652900 (SE +/- 3420043.89, N = 3)
  c6i.4xlarge Xeon:       661364767 (SE +/- 5114517.12, N = 3)
  c7g.4xlarge Graviton3: 1258807333 (SE +/- 952437.28, N = 3)
  1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi
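Multigrid solvers like AMG pair coarse-grid correction with a cheap local smoother. A weighted Jacobi sweep, one classic smoother choice, can be sketched as follows; this is an illustrative toy, not the benchmark's actual hypre-based parallel kernel.

```python
# Weighted Jacobi iteration, a classic smoother used inside multigrid
# solvers (illustrative sketch only; the AMG benchmark builds its grid
# hierarchy algebraically from a distributed sparse matrix via hypre).

def jacobi_smooth(A, b, x, weight=2.0 / 3.0, sweeps=50):
    n = len(b)
    for _ in range(sweeps):
        x_new = []
        for i in range(n):
            # off-diagonal contribution of the current iterate
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            # blend the old value with the Jacobi update
            x_new.append((1 - weight) * x[i] + weight * (b[i] - sigma) / A[i][i])
        x = x_new
    return x

# 1D Poisson-like tridiagonal system [-1, 2, -1]; exact solution [1, 1, 1]
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = jacobi_smooth(A, b, [0.0, 0.0, 0.0], sweeps=200)
```

In a real multigrid cycle only a few sweeps are used per level: the smoother damps high-frequency error quickly, and the remaining smooth error is corrected on coarser grids.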
Timed MrBayes Analysis This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis - Seconds, fewer is better
  a1.4xlarge Graviton:   644.79 (SE +/- 0.49, N = 3)
  c6a.4xlarge EPYC:      120.64 (SE +/- 0.35, N = 3)
  c6g.4xlarge Graviton2: 384.75 (SE +/- 0.11, N = 3)
  c6i.4xlarge Xeon:      134.92 (SE +/- 1.43, N = 3)
  c7g.4xlarge Graviton3: 251.40 (SE +/- 0.24, N = 3)
  c6a.4xlarge EPYC built with: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm
  c6i.4xlarge Xeon built with: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mabm
  1. (CC) gcc options: -O3 -std=c99 -pedantic -lm
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
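In standard form, the incompressible Navier-Stokes system and a generic scalar transport equation of the kind Xcompact3d advances can be written as:

```latex
% Incompressible Navier-Stokes momentum and continuity equations,
% plus a generic advected-diffused scalar phi:
\begin{align}
  \frac{\partial \mathbf{u}}{\partial t}
    + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
    &= -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u}, \\
  \nabla \cdot \mathbf{u} &= 0, \\
  \frac{\partial \phi}{\partial t}
    + \mathbf{u} \cdot \nabla \phi
    &= \kappa\,\nabla^{2}\phi .
\end{align}
```

Here u is velocity, p pressure, nu the kinematic viscosity, and kappa the scalar diffusivity; the benchmark's "cells per direction" input sets the resolution of the 3D finite-difference grid on which these equations are discretized.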
Xcompact3d Incompact3d 2021-03-11 - Seconds, fewer is better

Input: input.i3d 129 Cells Per Direction
  a1.4xlarge Graviton:   53.77062740 (SE +/- 0.02862870, N = 3)
  c6a.4xlarge EPYC:      28.27976610 (SE +/- 0.03718674, N = 3)
  c6g.4xlarge Graviton2: 11.57335470 (SE +/- 0.01351889, N = 3)
  c6i.4xlarge Xeon:      17.86827720 (SE +/- 0.09619197, N = 3)
  c7g.4xlarge Graviton3:  8.01671425 (SE +/- 0.01401446, N = 3)

Input: input.i3d 193 Cells Per Direction
  a1.4xlarge Graviton:   182.58 (SE +/- 0.15, N = 3)
  c6a.4xlarge EPYC:      110.77 (SE +/- 0.12, N = 3)
  c6g.4xlarge Graviton2:  41.02 (SE +/- 0.01, N = 3)
  c6i.4xlarge Xeon:       69.22 (SE +/- 0.14, N = 3)
  c7g.4xlarge Graviton3:  29.13 (SE +/- 0.03, N = 3)

1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
LULESH LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.
LULESH 2.0.3 - z/s, more is better
  a1.4xlarge Graviton:    2328.27 (SE +/- 6.27, N = 3)
  c6a.4xlarge EPYC:       5452.11 (SE +/- 5.52, N = 3)
  c6g.4xlarge Graviton2:  6016.16 (SE +/- 4.88, N = 3)
  c6i.4xlarge Xeon:       8112.37 (SE +/- 14.20, N = 3)
  c7g.4xlarge Graviton3: 10940.94 (SE +/- 76.73, N = 3)
  1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1 - Encode Time in Seconds, fewer is better

Encode Settings: Quality 100, Lossless
  a1.4xlarge Graviton:   61.80 (SE +/- 0.06, N = 3)
  c6a.4xlarge EPYC:      26.71 (SE +/- 0.17, N = 15)
  c6g.4xlarge Graviton2: 31.08 (SE +/- 0.02, N = 3)
  c6i.4xlarge Xeon:      21.12 (SE +/- 0.03, N = 3)
  c7g.4xlarge Graviton3: 22.77 (SE +/- 0.09, N = 3)

Encode Settings: Quality 100, Lossless, Highest Compression
  a1.4xlarge Graviton:   124.71 (SE +/- 0.09, N = 3)
  c6a.4xlarge EPYC:       48.68 (SE +/- 0.06, N = 3)
  c6g.4xlarge Graviton2:  66.15 (SE +/- 0.01, N = 3)
  c6i.4xlarge Xeon:       41.81 (SE +/- 0.34, N = 3)
  c7g.4xlarge Graviton3:  48.21 (SE +/- 0.01, N = 3)

1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 (four of the five configurations additionally linked -ltiff)
simdjson This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GB/s, More Is Better simdjson 1.0 Throughput Test: Kostya a1.4xlarge Graviton c6a.4xlarge EPYC c6g.4xlarge Graviton2 c6i.4xlarge Xeon c7g.4xlarge Graviton3 0.63 1.26 1.89 2.52 3.15 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 0.63 2.80 1.19 2.46 1.94 1. (CXX) g++ options: -O3
OpenBenchmarking.org GB/s, More Is Better simdjson 1.0 Throughput Test: LargeRandom a1.4xlarge Graviton c6a.4xlarge EPYC c6g.4xlarge Graviton2 c6i.4xlarge Xeon c7g.4xlarge Graviton3 0.2138 0.4276 0.6414 0.8552 1.069 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 0.30 0.95 0.49 0.86 0.70 1. (CXX) g++ options: -O3
OpenBenchmarking.org GB/s, More Is Better simdjson 1.0 Throughput Test: PartialTweets a1.4xlarge Graviton c6a.4xlarge EPYC c6g.4xlarge Graviton2 c6i.4xlarge Xeon c7g.4xlarge Graviton3 0.8348 1.6696 2.5044 3.3392 4.174 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 0.78 3.64 1.51 3.71 2.62 1. (CXX) g++ options: -O3
OpenBenchmarking.org GB/s, More Is Better simdjson 1.0 Throughput Test: DistinctUserID a1.4xlarge Graviton c6a.4xlarge EPYC c6g.4xlarge Graviton2 c6i.4xlarge Xeon c7g.4xlarge Graviton3 0.9675 1.935 2.9025 3.87 4.8375 SE +/- 0.00, N = 3 SE +/- 0.01, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 0.80 4.30 1.53 4.30 2.69 1. (CXX) g++ options: -O3
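The GB/s figure simdjson reports is simply bytes of input divided by parse time. A minimal sketch of the same measurement, using the stdlib json module as a stand-in for simdjson (the synthetic "coordinates" document below is made up for illustration, not simdjson's actual Kostya corpus):

```python
import json
import time

# Build a synthetic document loosely in the spirit of simdjson's "Kostya"
# input (an array of small numeric objects); the exact corpus is assumed.
doc = json.dumps({"coordinates": [
    {"x": i * 0.5, "y": i * 0.25, "z": i * 0.125, "name": f"pt{i}"}
    for i in range(50_000)
]})
payload = doc.encode("utf-8")

t0 = time.perf_counter()
parsed = json.loads(payload)
elapsed = time.perf_counter() - t0

# Throughput = input bytes / parse time, as on the graphs above.
gb_per_s = len(payload) / elapsed / 1e9
print(f"parsed {len(payload)} bytes in {elapsed:.4f}s -> {gb_per_s:.3f} GB/s")
```

A pure-Python parser will land orders of magnitude below simdjson's SIMD-accelerated numbers; the point is only how the metric is derived.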
DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec; fewer is better):
  a1.4xlarge Graviton     12997    SE +/- 48.38, N = 4
  c6a.4xlarge EPYC         4616    SE +/- 23.33, N = 4
  c6g.4xlarge Graviton2    5626    SE +/- 23.29, N = 4
  c6i.4xlarge Xeon         4013    SE +/- 24.07, N = 4
  c7g.4xlarge Graviton3    3940    SE +/- 6.99, N = 4
DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec; fewer is better):
  a1.4xlarge Graviton     11182    SE +/- 71.92, N = 4
  c6a.4xlarge EPYC         4052    SE +/- 16.15, N = 4
  c6g.4xlarge Graviton2    4506    SE +/- 27.95, N = 4
  c6i.4xlarge Xeon         3815    SE +/- 24.39, N = 4
  c7g.4xlarge Graviton3    3524    SE +/- 14.95, N = 4
DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec; fewer is better):
  a1.4xlarge Graviton      9045    SE +/- 44.35, N = 4
  c6a.4xlarge EPYC         3167    SE +/- 23.53, N = 11
  c6g.4xlarge Graviton2    4344    SE +/- 40.13, N = 4
  c6i.4xlarge Xeon         2928    SE +/- 19.24, N = 20
  c7g.4xlarge Graviton3    3203    SE +/- 26.73, N = 4
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Zstd Compression 1.5.0 - Compression Level: 3 - Compression Speed (MB/s; more is better):
  a1.4xlarge Graviton     633.9    SE +/- 4.47, N = 3
  c6a.4xlarge EPYC       2768.7    SE +/- 18.93, N = 3
  c6g.4xlarge Graviton2  2878.8    SE +/- 3.74, N = 3
  c6i.4xlarge Xeon       3440.6    SE +/- 29.53, N = 3
  c7g.4xlarge Graviton3  4639.1    SE +/- 9.57, N = 3
Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s; more is better):
  a1.4xlarge Graviton      16.9    SE +/- 0.03, N = 3
  c6a.4xlarge EPYC         30.0    SE +/- 0.21, N = 3
  c6g.4xlarge Graviton2    34.6    SE +/- 0.06, N = 3
  c6i.4xlarge Xeon         38.1    SE +/- 0.40, N = 3
  c7g.4xlarge Graviton3    41.2    SE +/- 0.00, N = 3
Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s; more is better):
  a1.4xlarge Graviton    1121.7    SE +/- 4.74, N = 3
  c6a.4xlarge EPYC       2907.5    SE +/- 3.25, N = 3
  c6g.4xlarge Graviton2  2051.6    SE +/- 12.10, N = 3
  c6i.4xlarge Xeon       2582.0    SE +/- 24.18, N = 3
  c7g.4xlarge Graviton3  3050.3    SE +/- 7.75, N = 3
Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s; more is better):
  a1.4xlarge Graviton      16.0    SE +/- 0.00, N = 3
  c6a.4xlarge EPYC         25.9    SE +/- 0.27, N = 3
  c6g.4xlarge Graviton2    31.0    SE +/- 0.03, N = 3
  c6i.4xlarge Xeon         33.8    SE +/- 0.10, N = 3
  c7g.4xlarge Graviton3    39.5    SE +/- 0.23, N = 3
Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s; more is better):
  a1.4xlarge Graviton    1213.9    SE +/- 15.28, N = 3
  c6a.4xlarge EPYC       2826.0    SE +/- 6.53, N = 3
  c6g.4xlarge Graviton2  2196.3    SE +/- 2.93, N = 3
  c6i.4xlarge Xeon       2666.1    SE +/- 7.82, N = 3
  c7g.4xlarge Graviton3  3240.6    SE +/- 6.93, N = 3
1. (CC) gcc options: -O3 -pthread -lz (-llzma on four of the configurations)
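The level-3 versus level-19 spread above is the usual speed/ratio tradeoff of dictionary compressors. Zstd itself is not in the Python standard library of the Python 3.10 used here, so this sketch uses stdlib zlib as a stand-in to show the same tradeoff shape (zlib levels 1/6/9 play the role of zstd's 3 and 19; the repeated-text payload is an assumed substitute for the FreeBSD disk image):

```python
import time
import zlib

# A highly compressible sample payload standing in for the disk image.
data = b"The quick brown fox jumps over the lazy dog. " * 20_000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    comp = zlib.compress(data, level)
    dt = time.perf_counter() - t0
    assert zlib.decompress(comp) == data  # round-trip must be lossless
    print(f"level {level}: {len(data)/max(dt, 1e-9)/1e6:8.1f} MB/s, "
          f"ratio {len(data)/len(comp):.1f}x")
```

Higher levels spend more CPU searching for matches, which is why level-19 compression speed sits around 40 MB/s while level-3 exceeds 4 GB/s on Graviton3.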
TSCP This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.
TSCP 1.81 - AI Chess Performance (Nodes Per Second; more is better):
  a1.4xlarge Graviton     538500    SE +/- 196.86, N = 5
  c6a.4xlarge EPYC       1442631    SE +/- 4180.17, N = 5
  c6g.4xlarge Graviton2   872313    SE +/- 338.27, N = 5
  c6i.4xlarge Xeon       1272596    SE +/- 1099.67, N = 5
  c7g.4xlarge Graviton3  1370094    SE +/- 0.00, N = 5
1. (CC) gcc options: -O3 -march=native
ACES DGEMM This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.
ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s; more is better):
  a1.4xlarge Graviton    0.891391    SE +/- 0.002370, N = 3
  c6a.4xlarge EPYC       2.432432    SE +/- 0.023324, N = 6
  c6g.4xlarge Graviton2  4.785123    SE +/- 0.007139, N = 3
  c6i.4xlarge Xeon       2.230545    SE +/- 0.003819, N = 3
  c7g.4xlarge Graviton3  5.853864    SE +/- 0.016350, N = 3
1. (CC) gcc options: -O3 -march=native -fopenmp
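A GFLOP/s rate for DGEMM follows from the operation count: multiplying two n x n matrices performs 2*n^3 floating-point operations (one multiply and one add per inner-loop step). A minimal pure-Python sketch of that accounting (a teaching stand-in, not the ACES kernel, which is OpenMP-threaded C):

```python
import time

def dgemm(a, b, n):
    """Naive triple-loop C = A * B for n x n row-major lists of floats."""
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    return c

n = 64
a = [[float(i + j) for j in range(n)] for i in range(n)]
b = [[float(i - j) for j in range(n)] for i in range(n)]

t0 = time.perf_counter()
c = dgemm(a, b, n)
elapsed = time.perf_counter() - t0

# DGEMM performs 2*n^3 FLOPs; divide by wall time for the sustained rate.
gflops = 2 * n**3 / elapsed / 1e9
print(f"{gflops:.4f} GFLOP/s (pure Python, n={n})")
```

Even the real benchmark's numbers are modest here because it measures a sustained, non-tuned rate rather than peak BLAS throughput.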
Coremark This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.
Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (more is better):
  a1.4xlarge Graviton    203869.40    SE +/- 116.54, N = 3
  c6a.4xlarge EPYC       345133.44    SE +/- 2163.00, N = 3
  c6g.4xlarge Graviton2  315464.34    SE +/- 49.84, N = 3
  c6i.4xlarge Xeon       285378.84    SE +/- 80.93, N = 3
  c7g.4xlarge Graviton3  405413.86    SE +/- 3211.91, N = 3
1. (CC) gcc options: -O2 -lrt
7-Zip Compression This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
7-Zip Compression 21.06 - Test: Compression Rating (MIPS; more is better):
  a1.4xlarge Graviton     32498    SE +/- 91.00, N = 3
  c6a.4xlarge EPYC        62562    SE +/- 16.02, N = 3
  c6g.4xlarge Graviton2   71285    SE +/- 44.77, N = 3
  c6i.4xlarge Xeon        66631    SE +/- 174.34, N = 3
  c7g.4xlarge Graviton3   97824    SE +/- 159.36, N = 3
7-Zip Compression 21.06 - Test: Decompression Rating (MIPS; more is better):
  a1.4xlarge Graviton     40891    SE +/- 31.21, N = 3
  c6a.4xlarge EPYC        57318    SE +/- 142.56, N = 3
  c6g.4xlarge Graviton2   59445    SE +/- 239.68, N = 3
  c6i.4xlarge Xeon        45653    SE +/- 35.00, N = 3
  c7g.4xlarge Graviton3   73054    SE +/- 12.88, N = 3
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Stockfish This is a test of Stockfish, an advanced open-source C++11 chess engine whose built-in benchmark can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
Stockfish 13 - Total Time (Nodes Per Second; more is better):
  a1.4xlarge Graviton    10980430    SE +/- 123749.22, N = 3
  c6a.4xlarge EPYC       23857623    SE +/- 149731.77, N = 3
  c6g.4xlarge Graviton2  21679245    SE +/- 292329.99, N = 3
  c6i.4xlarge Xeon       22081961    SE +/- 242448.39, N = 3
  c7g.4xlarge Graviton3  27608891    SE +/- 153578.64, N = 3
The two x86 builds additionally used -m64 -msse -msse3 -mpopcnt -mavx2 -msse4.1 -mssse3 -msse2 -mbmi2, with one of them also enabling -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl.
1. (CXX) g++ options: -lgcov -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -flto -flto=jobserver
asmFish This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.
asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second; more is better):
  a1.4xlarge Graviton    15331550    SE +/- 106812.26, N = 3
  c6a.4xlarge EPYC       26187688    SE +/- 303648.79, N = 3
  c6g.4xlarge Graviton2  26540482    SE +/- 359309.26, N = 3
  c6i.4xlarge Xeon       23746200    SE +/- 325631.00, N = 3
  c7g.4xlarge Graviton3  32134123    SE +/- 104795.40, N = 3
libavif avifenc This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
libavif avifenc 0.10 - Encoder Speed: 0 (Seconds; fewer is better):
  a1.4xlarge Graviton    768.30    SE +/- 0.58, N = 3
  c6a.4xlarge EPYC       195.53    SE +/- 0.62, N = 3
  c6g.4xlarge Graviton2  406.94    SE +/- 0.13, N = 3
  c6i.4xlarge Xeon       204.99    SE +/- 0.33, N = 3
  c7g.4xlarge Graviton3  256.84    SE +/- 0.18, N = 3
libavif avifenc 0.10 - Encoder Speed: 2 (Seconds; fewer is better):
  a1.4xlarge Graviton    449.02    SE +/- 0.29, N = 3
  c6a.4xlarge EPYC        93.95    SE +/- 0.44, N = 3
  c6g.4xlarge Graviton2  238.21    SE +/- 0.12, N = 3
  c6i.4xlarge Xeon        97.74    SE +/- 0.26, N = 3
  c7g.4xlarge Graviton3  141.70    SE +/- 0.11, N = 3
libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds; fewer is better):
  a1.4xlarge Graviton     33.99    SE +/- 0.31, N = 3
  c6a.4xlarge EPYC        16.39    SE +/- 0.12, N = 3
  c6g.4xlarge Graviton2   16.52    SE +/- 0.17, N = 3
  c6i.4xlarge Xeon        17.53    SE +/- 0.03, N = 3
  c7g.4xlarge Graviton3   11.91    SE +/- 0.01, N = 3
1. (CXX) g++ options: -O3 -fPIC -lm
Timed Gem5 Compilation This test times how long it takes to compile Gem5, a simulator for computer system architecture research that is widely used in both industry and academia. Learn more via the OpenBenchmarking.org test page.
Timed Gem5 Compilation 21.2 - Time To Compile (Seconds; fewer is better):
  a1.4xlarge Graviton    1155.62    SE +/- 0.78, N = 3
  c6a.4xlarge EPYC        515.20    SE +/- 0.79, N = 3
  c6g.4xlarge Graviton2   488.81    SE +/- 0.53, N = 3
  c6i.4xlarge Xeon        469.94    SE +/- 0.59, N = 3
  c7g.4xlarge Graviton3   391.17    SE +/- 1.33, N = 3
Timed Node.js Compilation This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.
Timed Node.js Compilation 17.3 - Time To Compile (Seconds; fewer is better):
  a1.4xlarge Graviton    1765.91    SE +/- 1.80, N = 3
  c6a.4xlarge EPYC        664.35    SE +/- 0.26, N = 3
  c6g.4xlarge Graviton2   628.40    SE +/- 0.37, N = 3
  c6i.4xlarge Xeon        604.62    SE +/- 0.42, N = 3
  c7g.4xlarge Graviton3   497.58    SE +/- 2.06, N = 3
Build2 This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.
Build2 0.13 - Time To Compile (Seconds; fewer is better):
  a1.4xlarge Graviton    353.91    SE +/- 1.89, N = 3
  c6a.4xlarge EPYC       150.99    SE +/- 0.87, N = 3
  c6g.4xlarge Graviton2  142.28    SE +/- 0.70, N = 3
  c6i.4xlarge Xeon       136.80    SE +/- 0.69, N = 3
  c7g.4xlarge Graviton3  115.02    SE +/- 0.64, N = 3
C-Ray This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. The test is multi-threaded (16 threads per core) and shoots multiple rays per pixel for anti-aliasing; this run uses the 4K resolution, 16 rays per pixel configuration. Learn more via the OpenBenchmarking.org test page.
C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds; fewer is better):
  a1.4xlarge Graviton    104.76    SE +/- 2.00, N = 15
  c6a.4xlarge EPYC        69.35    SE +/- 0.77, N = 5
  c6g.4xlarge Graviton2   62.32    SE +/- 0.03, N = 3
  c6i.4xlarge Xeon        92.55    SE +/- 0.04, N = 3
  c7g.4xlarge Graviton3   38.52    SE +/- 0.02, N = 3
1. (CC) gcc options: -lm -lpthread -O3
POV-Ray This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.
POV-Ray 3.7.0.7 - Trace Time (Seconds; fewer is better):
  a1.4xlarge Graviton    93.80    SE +/- 0.94, N = 15
  c6a.4xlarge EPYC       49.44    SE +/- 0.18, N = 3
  c6g.4xlarge Graviton2  51.05    SE +/- 0.00, N = 3
  c6i.4xlarge Xeon       52.78    SE +/- 0.12, N = 3
  c7g.4xlarge Graviton3  37.86    SE +/- 0.01, N = 3
1. (CXX) g++ options: -pipe -O3 -ffast-math -R/usr/lib -lXpm -lSM -lICE -lX11 -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system (-march=native on two of the configurations)
m-queens A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
m-queens 1.2 - Time To Solve (Seconds; fewer is better):
  a1.4xlarge Graviton    110.37    SE +/- 0.01, N = 3
  c6a.4xlarge EPYC        72.33    SE +/- 0.02, N = 3
  c6g.4xlarge Graviton2   75.22    SE +/- 0.00, N = 3
  c6i.4xlarge Xeon        91.23    SE +/- 0.06, N = 3
  c7g.4xlarge Graviton3   66.82    SE +/- 0.00, N = 3
1. (CXX) g++ options: -fopenmp -O2 -march=native
N-Queens This is the OpenMP version of a solver for the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.
N-Queens 1.0 - Elapsed Time (Seconds; fewer is better):
  a1.4xlarge Graviton    32.29    SE +/- 0.00, N = 3
  c6a.4xlarge EPYC       16.38    SE +/- 0.00, N = 3
  c6g.4xlarge Graviton2  23.14    SE +/- 0.00, N = 3
  c6i.4xlarge Xeon       18.84    SE +/- 0.00, N = 3
  c7g.4xlarge Graviton3  21.54    SE +/- 0.00, N = 3
1. (CC) gcc options: -static -fopenmp -O3 -march=native
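The N-queens workload driving both of the tests above is a backtracking search: place one queen per row so that no two share a column or diagonal. A minimal single-threaded Python sketch using the standard bitmask formulation (the benchmarked codes are OpenMP C/C++, but the algorithm is the same):

```python
def count_nqueens(n: int) -> int:
    """Count solutions to the n-queens problem via bitmask backtracking."""
    full = (1 << n) - 1  # one bit per column

    def solve(cols: int, diag1: int, diag2: int) -> int:
        if cols == full:          # a queen in every column: one solution
            return 1
        total = 0
        free = full & ~(cols | diag1 | diag2)
        while free:
            bit = free & -free    # lowest free square in this row
            free -= bit
            # Diagonal masks shift as the search moves down one row.
            total += solve(cols | bit,
                           (diag1 | bit) << 1 & full,
                           (diag2 | bit) >> 1)
        return total

    return solve(0, 0, 0)

print(count_nqueens(8))  # the classic 8x8 board has 92 solutions
```

At the benchmark's board size of 18 the search tree is large enough that thread-level parallelism over the first-row placements (what the OpenMP versions do) pays off.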
Ngspice Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
Ngspice 34 - Circuit: C2670 (Seconds; fewer is better):
  a1.4xlarge Graviton    473.90    SE +/- 3.48, N = 3
  c6a.4xlarge EPYC       245.89    SE +/- 1.17, N = 3
  c6g.4xlarge Graviton2  263.72    SE +/- 0.91, N = 3
  c6i.4xlarge Xeon       147.89    SE +/- 1.80, N = 4
  c7g.4xlarge Graviton3  198.22    SE +/- 0.86, N = 3
Ngspice 34 - Circuit: C7552 (Seconds; fewer is better):
  a1.4xlarge Graviton    480.79    SE +/- 1.19, N = 3
  c6a.4xlarge EPYC       180.36    SE +/- 0.66, N = 3
  c6g.4xlarge Graviton2  255.21    SE +/- 2.40, N = 7
  c6i.4xlarge Xeon       161.08    SE +/- 0.33, N = 3
  c7g.4xlarge Graviton3  191.29    SE +/- 1.94, N = 3
1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE
Google SynthMark SynthMark is a cross-platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter, and computational throughput. Learn more via the OpenBenchmarking.org test page.
Google SynthMark 20201109 - Test: VoiceMark_100 (Voices; more is better):
  a1.4xlarge Graviton    331.07    SE +/- 0.00, N = 3
  c6a.4xlarge EPYC       663.07    SE +/- 7.09, N = 3
  c6g.4xlarge Graviton2  470.39    SE +/- 0.33, N = 3
  c6i.4xlarge Xeon       565.69    SE +/- 2.00, N = 3
  c7g.4xlarge Graviton3  675.64    SE +/- 0.32, N = 3
1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast
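VoiceMark reports how many synthesizer voices a CPU can render in real time. The core workload shape, many oscillators mixed into one buffer, can be sketched in a few lines (a simplified sine-oscillator stand-in, not SynthMark's actual voice model; the 48 kHz rate and per-voice detune are assumptions):

```python
import math

SAMPLE_RATE = 48_000  # Hz; assumed, a common real-time audio configuration

def render_voices(n_voices: int, n_samples: int, base_hz: float = 220.0):
    """Mix n_voices slightly detuned sine oscillators into one buffer."""
    out = [0.0] * n_samples
    for v in range(n_voices):
        freq = base_hz * (1.0 + 0.01 * v)            # small detune per voice
        step = 2.0 * math.pi * freq / SAMPLE_RATE
        for i in range(n_samples):
            out[i] += math.sin(step * i) / n_voices  # scale to avoid clipping
    return out

buf = render_voices(n_voices=16, n_samples=4_800)    # 100 ms of audio
print(f"peak amplitude: {max(abs(s) for s in buf):.3f}")
```

The benchmark's "Voices" score is essentially the largest n_voices for which a buffer of audio renders faster than it plays back.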
SecureMark SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.
SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks; more is better):
  a1.4xlarge Graviton     74356    SE +/- 59.40, N = 3
  c6a.4xlarge EPYC       213288    SE +/- 3310.19, N = 9
  c6g.4xlarge Graviton2  120301    SE +/- 23.07, N = 3
  c6i.4xlarge Xeon       230549    SE +/- 864.34, N = 3
  c7g.4xlarge Graviton3  183708    SE +/- 773.26, N = 3
1. (CC) gcc options: -pedantic -O3
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
OpenSSL 3.0 - Algorithm: SHA256 (byte/s; more is better):
  a1.4xlarge Graviton     6785689517    SE +/- 12563225.46, N = 3
  c6a.4xlarge EPYC       11691403353    SE +/- 8616254.20, N = 3
  c6g.4xlarge Graviton2  10723184083    SE +/- 47755430.47, N = 3
  c6i.4xlarge Xeon        7096993937    SE +/- 606684.16, N = 3
  c7g.4xlarge Graviton3  13722045973    SE +/- 7739237.92, N = 3
OpenSSL 3.0 - Algorithm: RSA4096 (sign/s; more is better):
  a1.4xlarge Graviton     588.3    SE +/- 0.12, N = 3
  c6a.4xlarge EPYC       2088.9    SE +/- 1.40, N = 3
  c6g.4xlarge Graviton2   660.6    SE +/- 0.03, N = 3
  c6i.4xlarge Xeon       2161.3    SE +/- 4.47, N = 3
  c7g.4xlarge Graviton3  2546.4    SE +/- 0.23, N = 3
OpenSSL 3.0 - Algorithm: RSA4096 (verify/s; more is better):
  a1.4xlarge Graviton     45328.6    SE +/- 63.75, N = 3
  c6a.4xlarge EPYC       136784.2    SE +/- 74.60, N = 3
  c6g.4xlarge Graviton2   53951.5    SE +/- 3.30, N = 3
  c6i.4xlarge Xeon       140964.4    SE +/- 47.94, N = 3
  c7g.4xlarge Graviton3  178460.4    SE +/- 82.61, N = 3
1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl (-m64 on two of the configurations)
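The SHA256 byte/s metric is hash throughput over a large buffer, which is straightforward to reproduce in miniature with Python's hashlib (stdlib, backed by OpenSSL on most builds; the 64 MiB all-zeroes message is an arbitrary choice for illustration):

```python
import hashlib
import time

data = b"\x00" * (64 * 1024 * 1024)  # 64 MiB message, contents arbitrary

t0 = time.perf_counter()
digest = hashlib.sha256(data).hexdigest()
elapsed = time.perf_counter() - t0

print(f"SHA256: {len(data)/elapsed/1e6:.1f} MB/s, digest {digest[:16]}...")
```

The large gap between Graviton2/3 and the Ice Lake Xeon on SHA256 above comes down to the Arm cores having dedicated SHA hashing instructions that OpenSSL's assembly paths exploit.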
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better):
  a1.4xlarge Graviton    165513333    SE +/- 8819.17, N = 3
  c6a.4xlarge EPYC       509746667    SE +/- 489364.67, N = 3
  c6g.4xlarge Graviton2  262890000    SE +/- 35118.85, N = 3
  c6i.4xlarge Xeon       373100000    SE +/- 41633.32, N = 3
  c7g.4xlarge Graviton3  383606667    SE +/- 400097.21, N = 3
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
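The "Filter Length: 57" in this test refers to a 57-tap FIR filter, the bread-and-butter SDR operation of convolving each input sample with a window of coefficients. A minimal direct-form sketch in Python (a 5-tap moving average is used as a trivial example filter; the real benchmark uses liquid-dsp's optimized C kernels):

```python
import math

def fir_filter(signal, taps):
    """Direct-form FIR: y[n] = sum_k taps[k] * x[n-k]."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:          # ignore samples before the signal start
                acc += h * signal[n - k]
        out.append(acc)
    return out

# A 5-tap moving average as a trivial low-pass; the benchmark uses 57 taps.
taps = [1 / 5] * 5
signal = [math.sin(2 * math.pi * 0.05 * n) for n in range(64)]
filtered = fir_filter(signal, taps)
print(f"filtered {len(filtered)} samples with {len(taps)} taps")
```

Per output sample the work is one multiply-accumulate per tap, so samples/s scales directly with how fast the cores can stream those MACs, which is why the SIMD-wide EPYC leads here.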
GROMACS The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day; more is better):
  a1.4xlarge Graviton    0.316    SE +/- 0.000, N = 3
  c6a.4xlarge EPYC       1.004    SE +/- 0.002, N = 3
  c6g.4xlarge Graviton2  0.781    SE +/- 0.001, N = 3
  c6i.4xlarge Xeon       1.452    SE +/- 0.001, N = 3
  c7g.4xlarge Graviton3  1.128    SE +/- 0.002, N = 3
1. (CXX) g++ options: -O3
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18 - Model: SqueezeNet (Microseconds; fewer is better):
  a1.4xlarge Graviton    12014.70    SE +/- 46.48, N = 3
  c6a.4xlarge EPYC        3103.12    SE +/- 1.37, N = 3
  c6g.4xlarge Graviton2   3969.35    SE +/- 37.23, N = 3
  c6i.4xlarge Xeon        2983.93    SE +/- 3.54, N = 3
  c7g.4xlarge Graviton3   3257.94    SE +/- 22.07, N = 3
TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds; fewer is better):
  a1.4xlarge Graviton    188910.0    SE +/- 1746.17, N = 3
  c6a.4xlarge EPYC        44920.6    SE +/- 53.95, N = 3
  c6g.4xlarge Graviton2   46793.9    SE +/- 197.89, N = 3
  c6i.4xlarge Xeon        41185.7    SE +/- 75.14, N = 3
  c7g.4xlarge Graviton3   41855.1    SE +/- 210.27, N = 3
TensorFlow Lite 2022-05-18 - Model: NASNet Mobile (Microseconds; fewer is better):
  a1.4xlarge Graviton    30986.70    SE +/- 49.84, N = 3
  c6a.4xlarge EPYC        9266.86    SE +/- 23.44, N = 3
  c6g.4xlarge Graviton2  14985.40    SE +/- 203.15, N = 15
  c6i.4xlarge Xeon       10900.60    SE +/- 166.62, N = 14
  c7g.4xlarge Graviton3  11591.90    SE +/- 121.56, N = 15
TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds; fewer is better):
  a1.4xlarge Graviton    9990.15    SE +/- 113.94, N = 3
  c6a.4xlarge EPYC       2159.72    SE +/- 1.03, N = 3
  c6g.4xlarge Graviton2  2500.87    SE +/- 28.63, N = 3
  c6i.4xlarge Xeon       1965.07    SE +/- 1.81, N = 3
  c7g.4xlarge Graviton3  2156.60    SE +/- 19.61, N = 3
TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds; fewer is better):
  a1.4xlarge Graviton    5724.66    SE +/- 20.90, N = 3
  c6a.4xlarge EPYC       3847.96    SE +/- 53.31, N = 15
  c6g.4xlarge Graviton2  1980.24    SE +/- 14.44, N = 3
  c6i.4xlarge Xeon       3967.39    SE +/- 80.05, N = 12
  c7g.4xlarge Graviton3  1502.95    SE +/- 17.76, N = 3
TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2 (Microseconds; fewer is better):
  a1.4xlarge Graviton    171169.0    SE +/- 825.35, N = 3
  c6a.4xlarge EPYC        41366.6    SE +/- 27.66, N = 3
  c6g.4xlarge Graviton2   45955.7    SE +/- 336.95, N = 3
  c6i.4xlarge Xeon        41179.7    SE +/- 110.01, N = 3
  c7g.4xlarge Graviton3   40051.3    SE +/- 305.31, N = 3
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 3.2 - Preset: Thorough (Seconds; fewer is better):
  a1.4xlarge Graviton    33.5198    SE +/- 0.0061, N = 3
  c6a.4xlarge EPYC        7.9818    SE +/- 0.0154, N = 3
  c6g.4xlarge Graviton2  16.5222    SE +/- 0.0064, N = 3
  c6i.4xlarge Xeon        7.2625    SE +/- 0.0001, N = 3
  c7g.4xlarge Graviton3  13.9248    SE +/- 0.0011, N = 3
ASTC Encoder 3.2 - Preset: Exhaustive (Seconds; fewer is better):
  a1.4xlarge Graviton    277.77    SE +/- 0.07, N = 3
  c6a.4xlarge EPYC        72.39    SE +/- 0.03, N = 3
  c6g.4xlarge Graviton2  159.20    SE +/- 0.00, N = 3
  c6i.4xlarge Xeon        69.64    SE +/- 0.04, N = 3
  c7g.4xlarge Graviton3  139.38    SE +/- 0.01, N = 3
1. (CXX) g++ options: -O3 -flto -pthread
Stress-NG Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Stress-NG 0.14 - Test: Crypto (Bogo Ops/s; more is better):
  a1.4xlarge Graviton    11985.38    SE +/- 6.29, N = 3
  c6a.4xlarge EPYC       13556.06    SE +/- 3.93, N = 3
  c6g.4xlarge Graviton2  17924.18    SE +/- 92.83, N = 3
  c6i.4xlarge Xeon       10210.34    SE +/- 5.89, N = 3
  c7g.4xlarge Graviton3  23181.81    SE +/- 32.01, N = 3
Stress-NG 0.14 - Test: IO_uring (Bogo Ops/s; more is better):
  a1.4xlarge Graviton     918172.37    SE +/- 3840.04, N = 3
  c6a.4xlarge EPYC        768723.46    SE +/- 713.03, N = 3
  c6g.4xlarge Graviton2   770521.81    SE +/- 2395.13, N = 3
  c6i.4xlarge Xeon       1037943.37    SE +/- 405.56, N = 3
  c7g.4xlarge Graviton3   843015.78    SE +/- 614.16, N = 3
Stress-NG 0.14 - Test: CPU Stress (Bogo Ops/s; more is better):
  a1.4xlarge Graviton     2366.00    SE +/- 0.16, N = 3
  c6a.4xlarge EPYC       13304.50    SE +/- 37.60, N = 3
  c6g.4xlarge Graviton2   3404.94    SE +/- 0.54, N = 3
  c6i.4xlarge Xeon       12527.16    SE +/- 155.66, N = 3
  c7g.4xlarge Graviton3   5029.71    SE +/- 0.41, N = 3
Stress-NG 0.14 - Test: Vector Math (Bogo Ops/s; more is better):
  a1.4xlarge Graviton    27341.47    SE +/- 0.49, N = 3
  c6a.4xlarge EPYC       53787.61    SE +/- 2.46, N = 3
  c6g.4xlarge Graviton2  37753.89    SE +/- 15.72, N = 3
  c6i.4xlarge Xeon       40140.30    SE +/- 28.50, N = 3
  c7g.4xlarge Graviton3  55258.17    SE +/- 17.05, N = 3
Stress-NG 0.14 - Test: Memory Copying (Bogo Ops/s; more is better):
  a1.4xlarge Graviton     798.24    SE +/- 0.91, N = 3
  c6a.4xlarge EPYC       3551.80    SE +/- 11.57, N = 3
  c6g.4xlarge Graviton2  2903.00    SE +/- 3.75, N = 3
  c6i.4xlarge Xeon       3150.49    SE +/- 0.94, N = 3
  c7g.4xlarge Graviton3  6693.32    SE +/- 3.52, N = 3
1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
GPAW GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
GPAW 22.1 - Input: Carbon Nanotube (Seconds; fewer is better):
  a1.4xlarge Graviton    769.35    SE +/- 5.37, N = 3
  c6a.4xlarge EPYC       302.96    SE +/- 0.17, N = 3
  c6g.4xlarge Graviton2  215.53    SE +/- 0.13, N = 3
  c6i.4xlarge Xeon       202.11    SE +/- 0.24, N = 3
  c7g.4xlarge Graviton3  155.18    SE +/- 0.08, N = 3
1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi
PyBench This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
PyBench 2018-02-16 - Total For Average Test Times (Milliseconds; fewer is better):
  a1.4xlarge Graviton    3452    SE +/- 18.15, N = 3
  c6a.4xlarge EPYC       1961    SE +/- 1.53, N = 3
  c6g.4xlarge Graviton2  1741    SE +/- 1.67, N = 3
  c6i.4xlarge Xeon        997    SE +/- 3.84, N = 3
  c7g.4xlarge Graviton3  1185    SE +/- 0.33, N = 3
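PyBench's per-function timings can be approximated at home with the stdlib timeit module; this sketch times two of the operation classes PyBench names, builtin function calls and nested for loops (the statements and repeat counts here are illustrative choices, not PyBench's own):

```python
import timeit

# A handful of builtin calls, repeated many times.
builtin_calls = timeit.timeit("len('abc'); abs(-1); min(1, 2)",
                              number=100_000)

# A small nested loop; timeit accepts multi-line statements.
nested_loops = timeit.timeit(
    "for i in range(10):\n for j in range(10):\n  pass",
    number=10_000)

print(f"builtin calls: {builtin_calls * 1e3:.1f} ms, "
      f"nested loops: {nested_loops * 1e3:.1f} ms")
```

Since PyBench is dominated by interpreter dispatch rather than memory bandwidth, single-core speed decides the ranking, hence the Xeon's lead over every Graviton generation here.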
nginx This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
nginx 1.21.1 - Concurrent Requests: 100 (Requests Per Second; more is better):
  a1.4xlarge Graviton    143155.48    SE +/- 22.67, N = 3
  c6a.4xlarge EPYC       388010.76    SE +/- 436.72, N = 3
  c6g.4xlarge Graviton2  307349.36    SE +/- 3992.58, N = 3
  c6i.4xlarge Xeon       356302.84    SE +/- 1727.81, N = 3
  c7g.4xlarge Graviton3  345710.87    SE +/- 2009.97, N = 3
nginx 1.21.1 - Concurrent Requests: 200 (Requests Per Second; more is better):
  a1.4xlarge Graviton    141436.20    SE +/- 133.96, N = 3
  c6a.4xlarge EPYC       390932.79    SE +/- 1242.81, N = 3
  c6g.4xlarge Graviton2  308938.67    SE +/- 1347.28, N = 3
  c6i.4xlarge Xeon       356829.93    SE +/- 1582.66, N = 3
  c7g.4xlarge Graviton3  352380.98    SE +/- 3986.77, N = 3
nginx 1.21.1 - Concurrent Requests: 500 (Requests Per Second; more is better):
  a1.4xlarge Graviton    139414.84    SE +/- 141.15, N = 3
  c6a.4xlarge EPYC       389030.11    SE +/- 771.95, N = 3
  c6g.4xlarge Graviton2  310596.58    SE +/- 3783.68, N = 3
  c6i.4xlarge Xeon       351672.92    SE +/- 1620.39, N = 3
  c7g.4xlarge Graviton3  346613.34    SE +/- 1017.52, N = 3
nginx 1.21.1 - Concurrent Requests: 1000 (Requests Per Second; more is better):
  a1.4xlarge Graviton    138205.11    SE +/- 66.96, N = 3
  c6a.4xlarge EPYC       388657.76    SE +/- 781.49, N = 3
  c6g.4xlarge Graviton2  308213.13    SE +/- 1677.89, N = 3
  c6i.4xlarge Xeon       347345.49    SE +/- 2637.25, N = 3
  c7g.4xlarge Graviton3  346814.75    SE +/- 1410.11, N = 3
1. (CC) gcc options: -lcrypt -lz -O3 -march=native
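The requests-per-second measurement pattern used by Bombardier, many concurrent clients hammering one server and dividing completed requests by elapsed time, can be sketched with the Python stdlib (a toy single-process stand-in, nothing like nginx's or Bombardier's scale; the handler, port choice, and request count are all arbitrary for illustration):

```python
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in a background thread.
srv = ThreadingHTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=srv.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{srv.server_address[1]}/"

def fetch(_):
    with urllib.request.urlopen(url) as resp:
        return resp.status

n = 200
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=32) as ex:  # 32 concurrent clients
    statuses = list(ex.map(fetch, range(n)))
elapsed = time.perf_counter() - t0
srv.shutdown()

rps = n / elapsed
print(f"{n} requests in {elapsed:.2f}s -> {rps:.0f} req/s")
```

Note how flat nginx's numbers are above from 100 to 1000 concurrent clients: the server saturates the 16 cores early, so added concurrency only deepens the queue.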
ONNX Runtime ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton    2312    SE +/- 2.20, N = 3
  c6a.4xlarge EPYC       5617    SE +/- 75.29, N = 12
  c6g.4xlarge Graviton2  6948    SE +/- 3.50, N = 3
  c6i.4xlarge Xeon       7944    SE +/- 322.41, N = 12
  c7g.4xlarge Graviton3  7990    SE +/- 2.40, N = 3
ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton    115    SE +/- 0.88, N = 3
  c6a.4xlarge EPYC       488    SE +/- 0.58, N = 3
  c6g.4xlarge Graviton2  322    SE +/- 0.17, N = 3
  c6i.4xlarge Xeon       773    SE +/- 50.92, N = 12
  c7g.4xlarge Graviton3  407    SE +/- 0.17, N = 3
ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton     10    SE +/- 0.00, N = 3
  c6a.4xlarge EPYC        65    SE +/- 5.55, N = 12
  c6g.4xlarge Graviton2   28    SE +/- 0.00, N = 3
  c6i.4xlarge Xeon       139    SE +/- 0.60, N = 3
  c7g.4xlarge Graviton3   38    SE +/- 0.00, N = 3
ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton     165    SE +/- 0.50, N = 3
  c6a.4xlarge EPYC       1192    SE +/- 82.60, N = 12
  c6g.4xlarge Graviton2   334    SE +/- 0.17, N = 3
  c6i.4xlarge Xeon       1374    SE +/- 91.51, N = 12
  c7g.4xlarge Graviton3   609    SE +/- 0.00, N = 3
ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton     757    SE +/- 0.50, N = 3
  c6a.4xlarge EPYC       3696    SE +/- 234.97, N = 12
  c6g.4xlarge Graviton2  2072    SE +/- 1.74, N = 3
  c6i.4xlarge Xeon       3450    SE +/- 1.61, N = 3
  c7g.4xlarge Graviton3  2817    SE +/- 1.86, N = 3
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
Apache HTTP Server This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Requests Per Second, More Is Better Apache HTTP Server 2.4.48 Concurrent Requests: 100 a1.4xlarge Graviton c6a.4xlarge EPYC c6g.4xlarge Graviton2 c6i.4xlarge Xeon c7g.4xlarge Graviton3 20K 40K 60K 80K 100K SE +/- 28.97, N = 3 SE +/- 211.56, N = 3 SE +/- 93.03, N = 3 SE +/- 389.13, N = 3 SE +/- 38.09, N = 3 18636.43 77567.69 46995.35 86545.57 67231.88 1. (CC) gcc options: -shared -fPIC -O2
OpenBenchmarking.org Requests Per Second, More Is Better Apache HTTP Server 2.4.48 Concurrent Requests: 200 a1.4xlarge Graviton c6a.4xlarge EPYC c6g.4xlarge Graviton2 c6i.4xlarge Xeon c7g.4xlarge Graviton3 20K 40K 60K 80K 100K SE +/- 59.55, N = 3 SE +/- 644.29, N = 3 SE +/- 112.65, N = 3 SE +/- 615.05, N = 3 SE +/- 649.31, N = 3 20887.58 83070.00 50059.97 94458.22 73676.95 1. (CC) gcc options: -shared -fPIC -O2
OpenBenchmarking.org Requests Per Second, More Is Better
Apache HTTP Server 2.4.48 - Concurrent Requests: 500

  a1.4xlarge Graviton    20133.49   (SE +/- 93.64,  N = 3)
  c6a.4xlarge EPYC       81995.64   (SE +/- 636.46, N = 13)
  c6g.4xlarge Graviton2  50077.81   (SE +/- 578.32, N = 3)
  c6i.4xlarge Xeon       91746.57   (SE +/- 833.50, N = 7)
  c7g.4xlarge Graviton3  73546.32   (SE +/- 89.82,  N = 3)

1. (CC) gcc options: -shared -fPIC -O2
OpenBenchmarking.org Requests Per Second, More Is Better
Apache HTTP Server 2.4.48 - Concurrent Requests: 1000

  a1.4xlarge Graviton    19278.68   (SE +/- 98.61,  N = 3)
  c6a.4xlarge EPYC       71537.11   (SE +/- 397.88, N = 3)
  c6g.4xlarge Graviton2  46629.45   (SE +/- 276.10, N = 3)
  c6i.4xlarge Xeon       79830.96   (SE +/- 335.63, N = 3)
  c7g.4xlarge Graviton3  72719.33   (SE +/- 83.83,  N = 3)

1. (CC) gcc options: -shared -fPIC -O2
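The load-generation pattern used by this test profile (a fixed test window, a configurable number of concurrent clients, throughput reported as requests per second) can be sketched in Python. This is a minimal illustration of the bombardier-style approach, not the benchmark's actual harness; the local demo server, client count, and duration are assumptions for the example.

```python
# Sketch of a bombardier-style load generator: N concurrent clients
# hammer a URL for a fixed duration, then throughput is reported as
# requests per second.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def run_load(url, clients=8, duration=1.0):
    """Issue requests to `url` from `clients` workers for `duration` seconds."""
    deadline = time.monotonic() + duration
    counts = [0] * clients  # per-worker completed-request counters

    def worker(i):
        while time.monotonic() < deadline:
            with urllib.request.urlopen(url) as resp:
                resp.read()
            counts[i] += 1

    # The executor's context manager waits for all workers to hit the deadline.
    with ThreadPoolExecutor(max_workers=clients) as pool:
        for i in range(clients):
            pool.submit(worker, i)
    return sum(counts) / duration  # requests per second

if __name__ == "__main__":
    # Stand up a throwaway local HTTP server on an ephemeral port for the demo.
    server = http.server.ThreadingHTTPServer(
        ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    rps = run_load(f"http://127.0.0.1:{server.server_port}/",
                   clients=4, duration=0.5)
    server.shutdown()
    print(f"{rps:.0f} requests/sec")
```

In the real test profile the target is an Apache HTTPD instance rather than a throwaway Python server, and the client counts sweep 100 through 1000 as in the results above.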
PHPBench PHPBench is a benchmark suite for PHP. It performs a large number of simple tests to benchmark various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Score, More Is Better
PHPBench 0.8.1 - PHP Benchmark Suite

  a1.4xlarge Graviton    241259   (SE +/- 816.27,  N = 3)
  c6a.4xlarge EPYC       480741   (SE +/- 2681.41, N = 3)
  c6g.4xlarge Graviton2  449855   (SE +/- 743.13,  N = 3)
  c6i.4xlarge Xeon       828186   (SE +/- 983.65,  N = 3)
  c7g.4xlarge Graviton3  666484   (SE +/- 525.83,  N = 3)
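PHPBench's approach of timing many small interpreter operations and rolling them up into a single score can be sketched with Python's `timeit` module. The micro-tests and the averaging formula below are illustrative assumptions for the sketch, not PHPBench's actual test list or scoring method.

```python
# Micro-benchmark sketch in the spirit of PHPBench: time several tiny
# interpreter operations, then combine them into one higher-is-better score.
import timeit

MICRO_TESTS = {
    "int_arith":  "x = 0\nfor i in range(100): x += i * 3",
    "str_concat": "s = ''\nfor i in range(100): s += 'a'",
    "func_call":  "f = abs\nfor i in range(100): f(-i)",
}

def run_suite(number=200):
    """Return per-test throughput (iterations/sec) and an overall score."""
    results = {}
    for name, stmt in MICRO_TESTS.items():
        elapsed = timeit.timeit(stmt, number=number)
        results[name] = number / elapsed
    # Simple mean of per-test throughputs; illustrative, not PHPBench's formula.
    score = sum(results.values()) / len(results)
    return results, score

if __name__ == "__main__":
    results, score = run_suite()
    for name, ops in results.items():
        print(f"{name:12s} {ops:12.0f} iterations/sec")
    print(f"score: {score:.0f}")
```

Because every test is a tight interpreter loop, a suite like this is dominated by single-thread integer and branch performance, which is why the 8-core/16-thread x86 instances and the Graviton3 pull ahead in the scores above.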
a1.4xlarge Graviton Processor: ARMv8 Cortex-A72 (16 Cores), Motherboard: Amazon EC2 a1.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Not affected + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Branch predictor hardening BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 25 May 2022 00:29 by user ubuntu.
c6g.4xlarge Graviton2 Processor: ARMv8 Neoverse-N1 (16 Cores), Motherboard: Amazon EC2 c6g.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 25 May 2022 13:01 by user ubuntu.
c7g.4xlarge Graviton3 Processor: ARMv8 Neoverse-V1 (16 Cores), Motherboard: Amazon EC2 c7g.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 24 May 2022 11:30 by user ubuntu.
c6a.4xlarge EPYC Processor: AMD EPYC 7R13 (8 Cores / 16 Threads), Motherboard: Amazon EC2 c6a.4xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (x86_64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: CPU Microcode: 0xa001144
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 May 2022 00:31 by user ubuntu.
c6i.4xlarge Xeon Processor: Intel Xeon Platinum 8375C (8 Cores / 16 Threads), Motherboard: Amazon EC2 c6i.4xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (x86_64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: CPU Microcode: 0xd000331
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 May 2022 00:32 by user ubuntu.