Amazon AWS Graviton3 benchmarks by Michael Larabel.
a1.4xlarge Graviton Processor: ARMv8 Cortex-A72 (16 Cores), Motherboard: Amazon EC2 a1.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Not affected + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Branch predictor hardening BHB + srbds: Not affected + tsx_async_abort: Not affected
c6g.4xlarge Graviton2: Changed Processor to ARMv8 Neoverse-N1 (16 Cores).
Changed Motherboard to Amazon EC2 c6g.4xlarge (1.0 BIOS).
Security Change: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
c7g.4xlarge Graviton3: Changed Processor to ARMv8 Neoverse-V1 (16 Cores).
Changed Motherboard to Amazon EC2 c7g.4xlarge (1.0 BIOS).
c6a.4xlarge EPYC: Processor: AMD EPYC 7R13 (8 Cores / 16 Threads), Motherboard: Amazon EC2 c6a.4xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (x86_64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: CPU Microcode: 0xa001144
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
c6i.4xlarge Xeon: Changed Processor to Intel Xeon Platinum 8375C (8 Cores / 16 Threads).
Changed Motherboard to Amazon EC2 c6i.4xlarge (1.0 BIOS).
Processor Change: CPU Microcode: 0xd000331
Security Change: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Result Overview (Phoronix Test Suite, logarithmic scale): chart comparing a1.4xlarge Graviton, c6g.4xlarge Graviton2, c7g.4xlarge Graviton3, c6a.4xlarge EPYC, and c6i.4xlarge Xeon across the full test selection, from HPC workloads (HPCG, NPB, LULESH, GROMACS, LAMMPS) through compilation, web server, compression, image encoding, and chess/CPU synthetic benchmarks.
Amazon EC2 Graviton3 Benchmark Comparison: combined table of raw results for every test across all five instances; selected results are presented per test in the sections below. OpenBenchmarking.org
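For "fewer is better" timings, instances are most easily compared by normalizing against one baseline. A minimal Python sketch, using the Timed Node.js Compilation figures reported below (the choice of Graviton2 as baseline is arbitrary):

```python
# Relative speedups from "fewer is better" timing results.
# Compile times (seconds) from the Timed Node.js Compilation test below.
build_times = {
    "a1.4xlarge Graviton": 1765.91,
    "c6g.4xlarge Graviton2": 628.40,
    "c7g.4xlarge Graviton3": 497.58,
    "c6a.4xlarge EPYC": 664.35,
    "c6i.4xlarge Xeon": 604.62,
}

baseline = build_times["c6g.4xlarge Graviton2"]
# For timings, speedup over the baseline is baseline / measured.
speedups = {name: round(baseline / secs, 2) for name, secs in build_times.items()}
for instance, factor in speedups.items():
    print(f"{instance}: {factor:.2f}x vs Graviton2")
```

By this measure Graviton3 builds Node.js about 1.26x faster than Graviton2, with the EPYC and Xeon instances landing just either side of the Graviton2 baseline.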
Timed Node.js Compilation This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.
Timed Node.js Compilation 17.3 - Time To Compile (Seconds, Fewer Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 1765.91 (SE +/- 1.80, N = 3)
c6g.4xlarge Graviton2: 628.40 (SE +/- 0.37, N = 3)
c7g.4xlarge Graviton3: 497.58 (SE +/- 2.06, N = 3)
c6a.4xlarge EPYC: 664.35 (SE +/- 0.26, N = 3)
c6i.4xlarge Xeon: 604.62 (SE +/- 0.42, N = 3)
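Each result above is a mean over N runs with its standard error of the mean. A minimal Python sketch of that calculation (the three sample timings are hypothetical):

```python
import statistics

def mean_and_se(samples):
    """Mean and standard error of the mean for repeated benchmark runs."""
    m = statistics.mean(samples)
    # SE = sample standard deviation / sqrt(N)
    se = statistics.stdev(samples) / len(samples) ** 0.5
    return m, se

# Hypothetical compile-time samples (seconds), mirroring the N = 3 runs above.
runs = [497.1, 499.6, 495.9]
mean, se = mean_and_se(runs)
print(f"{mean:.2f} s, SE +/- {se:.2f}, N = {len(runs)}")
```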
LeelaChessZero LeelaChessZero (lc0 / lczero) is a chess engine powered by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, More Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 135 (SE +/- 0.88, N = 3)
c6g.4xlarge Graviton2: 864 (SE +/- 10.22, N = 4)
c7g.4xlarge Graviton3: 1103 (SE +/- 6.44, N = 3)
c6a.4xlarge EPYC: 1091 (SE +/- 12.82, N = 9)
c6i.4xlarge Xeon: 1397 (SE +/- 12.41, N = 9)
1. (CXX) g++ options: -flto -pthread
Timed Gem5 Compilation This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research and is widely used across industry and academia. Learn more via the OpenBenchmarking.org test page.
Timed Gem5 Compilation 21.2 - Time To Compile (Seconds, Fewer Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 1155.62 (SE +/- 0.78, N = 3)
c6g.4xlarge Graviton2: 488.81 (SE +/- 0.53, N = 3)
c7g.4xlarge Graviton3: 391.17 (SE +/- 1.33, N = 3)
c6a.4xlarge EPYC: 515.20 (SE +/- 0.79, N = 3)
c6i.4xlarge Xeon: 469.94 (SE +/- 0.59, N = 3)
LeelaChessZero LeelaChessZero (lc0 / lczero) is a chess engine powered by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, More Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 128 (SE +/- 0.67, N = 3)
c6g.4xlarge Graviton2: 834 (SE +/- 12.00, N = 3)
c7g.4xlarge Graviton3: 1189 (SE +/- 9.70, N = 3)
c6a.4xlarge EPYC: 1001 (SE +/- 11.74, N = 9)
c6i.4xlarge Xeon: 1466 (SE +/- 13.37, N = 3)
1. (CXX) g++ options: -flto -pthread
NAS Parallel Benchmarks NPB (NAS Parallel Benchmarks) is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a choice of the different NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s, More Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 1293.80 (SE +/- 2.51, N = 3)
c6g.4xlarge Graviton2: 2356.16 (SE +/- 0.57, N = 3)
c7g.4xlarge Graviton3: 4467.19 (SE +/- 9.61, N = 3)
c6a.4xlarge EPYC: 8094.79 (SE +/- 24.63, N = 3)
c6i.4xlarge Xeon: 9563.22 (SE +/- 73.65, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2
NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, More Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 3148.18 (SE +/- 3.44, N = 3)
c6g.4xlarge Graviton2: 6449.11 (SE +/- 3.20, N = 3)
c7g.4xlarge Graviton3: 10339.53 (SE +/- 7.36, N = 3)
c6a.4xlarge EPYC: 13134.46 (SE +/- 98.45, N = 3)
c6i.4xlarge Xeon: 13888.40 (SE +/- 22.04, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2
SecureMark SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.
SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks, More Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 74356 (SE +/- 59.40, N = 3)
c6g.4xlarge Graviton2: 120301 (SE +/- 23.07, N = 3)
c7g.4xlarge Graviton3: 183708 (SE +/- 773.26, N = 3)
c6a.4xlarge EPYC: 213288 (SE +/- 3310.19, N = 9)
c6i.4xlarge Xeon: 230549 (SE +/- 864.34, N = 3)
1. (CC) gcc options: -pedantic -O3
libavif avifenc This test uses the AOMedia libavif library to encode a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.
libavif avifenc 0.10 - Encoder Speed: 0 (Seconds, Fewer Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 768.30 (SE +/- 0.58, N = 3)
c6g.4xlarge Graviton2: 406.94 (SE +/- 0.13, N = 3)
c7g.4xlarge Graviton3: 256.84 (SE +/- 0.18, N = 3)
c6a.4xlarge EPYC: 195.53 (SE +/- 0.62, N = 3)
c6i.4xlarge Xeon: 204.99 (SE +/- 0.33, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm
Ngspice Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
Ngspice 34 - Circuit: C7552 (Seconds, Fewer Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 480.79 (SE +/- 1.19, N = 3)
c6g.4xlarge Graviton2: 255.21 (SE +/- 2.40, N = 7)
c7g.4xlarge Graviton3: 191.29 (SE +/- 1.94, N = 3)
c6a.4xlarge EPYC: 180.36 (SE +/- 0.66, N = 3)
c6i.4xlarge Xeon: 161.08 (SE +/- 0.33, N = 3)
1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE
GPAW GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
GPAW 22.1 - Input: Carbon Nanotube (Seconds, Fewer Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 769.35 (SE +/- 5.37, N = 3)
c6g.4xlarge Graviton2: 215.53 (SE +/- 0.13, N = 3)
c7g.4xlarge Graviton3: 155.18 (SE +/- 0.08, N = 3)
c6a.4xlarge EPYC: 302.96 (SE +/- 0.17, N = 3)
c6i.4xlarge Xeon: 202.11 (SE +/- 0.24, N = 3)
1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi
NAS Parallel Benchmarks NPB (NAS Parallel Benchmarks) is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a choice of the different NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 2558.12 (SE +/- 0.15, N = 3)
c6g.4xlarge Graviton2: 5133.89 (SE +/- 0.90, N = 3)
c7g.4xlarge Graviton3: 7730.41 (SE +/- 1.96, N = 3)
c6a.4xlarge EPYC: 25140.55 (SE +/- 18.06, N = 3)
c6i.4xlarge Xeon: 38136.77 (SE +/- 160.86, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2
Timed MrBayes Analysis This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, Fewer Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 644.79 (SE +/- 0.49, N = 3)
c6g.4xlarge Graviton2: 384.75 (SE +/- 0.11, N = 3)
c7g.4xlarge Graviton3: 251.40 (SE +/- 0.24, N = 3)
c6a.4xlarge EPYC: 120.64 (SE +/- 0.35, N = 3), built with -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm
c6i.4xlarge Xeon: 134.92 (SE +/- 1.43, N = 3), built with -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mabm
1. (CC) gcc options: -O3 -std=c99 -pedantic -lm
NAS Parallel Benchmarks NPB (NAS Parallel Benchmarks) is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a choice of the different NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, More Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 339.20 (SE +/- 0.24, N = 3)
c6g.4xlarge Graviton2: 558.88 (SE +/- 0.23, N = 3)
c7g.4xlarge Graviton3: 934.72 (SE +/- 0.39, N = 3)
c6a.4xlarge EPYC: 466.21 (SE +/- 0.06, N = 3)
c6i.4xlarge Xeon: 1103.22 (SE +/- 19.93, N = 9)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2
Ngspice Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
Ngspice 34 - Circuit: C2670 (Seconds, Fewer Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 473.90 (SE +/- 3.48, N = 3)
c6g.4xlarge Graviton2: 263.72 (SE +/- 0.91, N = 3)
c7g.4xlarge Graviton3: 198.22 (SE +/- 0.86, N = 3)
c6a.4xlarge EPYC: 245.89 (SE +/- 1.17, N = 3)
c6i.4xlarge Xeon: 147.89 (SE +/- 1.80, N = 4)
1. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE
ONNX Runtime ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 2312 (SE +/- 2.20, N = 3)
c6g.4xlarge Graviton2: 6948 (SE +/- 3.50, N = 3)
c7g.4xlarge Graviton3: 7990 (SE +/- 2.40, N = 3)
c6a.4xlarge EPYC: 5617 (SE +/- 75.29, N = 12)
c6i.4xlarge Xeon: 7944 (SE +/- 322.41, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 165 (SE +/- 0.50, N = 3)
c6g.4xlarge Graviton2: 334 (SE +/- 0.17, N = 3)
c7g.4xlarge Graviton3: 609 (SE +/- 0.00, N = 3)
c6a.4xlarge EPYC: 1192 (SE +/- 82.60, N = 12)
c6i.4xlarge Xeon: 1374 (SE +/- 91.51, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
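The inferences-per-minute figures above follow directly from per-inference latency. A minimal Python sketch of the conversion (the latency samples are hypothetical, not from this report):

```python
def inferences_per_minute(latencies_s):
    """Convert per-inference latencies (in seconds) to the
    inferences-per-minute figure used in these charts,
    based on the mean latency across runs."""
    mean_latency = sum(latencies_s) / len(latencies_s)
    return 60.0 / mean_latency

# Hypothetical latencies for a single model on one instance.
print(round(inferences_per_minute([0.025, 0.026, 0.024])))  # mean 25 ms -> 2400/min
```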
GROMACS The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 0.316 (SE +/- 0.000, N = 3)
c6g.4xlarge Graviton2: 0.781 (SE +/- 0.001, N = 3)
c7g.4xlarge Graviton3: 1.128 (SE +/- 0.002, N = 3)
c6a.4xlarge EPYC: 1.004 (SE +/- 0.002, N = 3)
c6i.4xlarge Xeon: 1.452 (SE +/- 0.001, N = 3)
1. (CXX) g++ options: -O3
Rodinia Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 360.30 (SE +/- 0.07, N = 3)
c6g.4xlarge Graviton2: 215.67 (SE +/- 0.01, N = 3)
c7g.4xlarge Graviton3: 143.33 (SE +/- 0.15, N = 3)
c6a.4xlarge EPYC: 224.33 (SE +/- 0.03, N = 3)
c6i.4xlarge Xeon: 281.39 (SE +/- 0.14, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL
libavif avifenc This test uses the AOMedia libavif library to encode a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.
libavif avifenc 0.10 - Encoder Speed: 2 (Seconds, Fewer Is Better) - OpenBenchmarking.org
a1.4xlarge Graviton: 449.02 (SE +/- 0.29, N = 3)
c6g.4xlarge Graviton2: 238.21 (SE +/- 0.12, N = 3)
c7g.4xlarge Graviton3: 141.70 (SE +/- 0.11, N = 3)
c6a.4xlarge EPYC: 93.95 (SE +/- 0.44, N = 3)
c6i.4xlarge Xeon: 97.74 (SE +/- 0.26, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm
TensorFlow Lite This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
TensorFlow Lite 2022-05-18, Model: NASNet Mobile (Microseconds; fewer is better):
  a1.4xlarge Graviton     30986.70   SE +/- 49.84, N = 3
  c6g.4xlarge Graviton2   14985.40   SE +/- 203.15, N = 15
  c7g.4xlarge Graviton3   11591.90   SE +/- 121.56, N = 15
  c6a.4xlarge EPYC         9266.86   SE +/- 23.44, N = 3
  c6i.4xlarge Xeon        10900.60   SE +/- 166.62, N = 14
ONNX Runtime ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
ONNX Runtime 1.11, Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton      10   SE +/- 0.00, N = 3
  c6g.4xlarge Graviton2    28   SE +/- 0.00, N = 3
  c7g.4xlarge Graviton3    38   SE +/- 0.00, N = 3
  c6a.4xlarge EPYC         65   SE +/- 5.55, N = 12
  c6i.4xlarge Xeon        139   SE +/- 0.60, N = 3
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
asmFish This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.
asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (Nodes/second; more is better):
  a1.4xlarge Graviton     15331550   SE +/- 106812.26, N = 3
  c6g.4xlarge Graviton2   26540482   SE +/- 359309.26, N = 3
  c7g.4xlarge Graviton3   32134123   SE +/- 104795.40, N = 3
  c6a.4xlarge EPYC        26187688   SE +/- 303648.79, N = 3
  c6i.4xlarge Xeon        23746200   SE +/- 325631.00, N = 3
ONNX Runtime
ONNX Runtime 1.11, Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton     115   SE +/- 0.88, N = 3
  c6g.4xlarge Graviton2   322   SE +/- 0.17, N = 3
  c7g.4xlarge Graviton3   407   SE +/- 0.17, N = 3
  c6a.4xlarge EPYC        488   SE +/- 0.58, N = 3
  c6i.4xlarge Xeon        773   SE +/- 50.92, N = 12
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11, Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better):
  a1.4xlarge Graviton      757   SE +/- 0.50, N = 3
  c6g.4xlarge Graviton2   2072   SE +/- 1.74, N = 3
  c7g.4xlarge Graviton3   2817   SE +/- 1.86, N = 3
  c6a.4xlarge EPYC        3696   SE +/- 234.97, N = 12
  c6i.4xlarge Xeon        3450   SE +/- 1.61, N = 3
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
OpenSSL 3.0, Algorithm: SHA256 (byte/s; more is better):
  a1.4xlarge Graviton      6785689517   SE +/- 12563225.46, N = 3
  c6g.4xlarge Graviton2   10723184083   SE +/- 47755430.47, N = 3
  c7g.4xlarge Graviton3   13722045973   SE +/- 7739237.92, N = 3
  c6a.4xlarge EPYC        11691403353   SE +/- 8616254.20, N = 3
  c6i.4xlarge Xeon         7096993937   SE +/- 606684.16, N = 3
  Flags on some configurations: -m64
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl
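The raw "openssl speed" SHA256 figures above are in bytes per second, which is hard to read at a glance; a small sketch converting them to decimal GB/s (assuming 1 GB = 10^9 bytes, the convention openssl itself uses):

```python
# OpenSSL 3.0 SHA256 results (byte/s) as reported above.
sha256_bytes_per_sec = {
    "a1.4xlarge Graviton":   6_785_689_517,
    "c6g.4xlarge Graviton2": 10_723_184_083,
    "c7g.4xlarge Graviton3": 13_722_045_973,
    "c6a.4xlarge EPYC":      11_691_403_353,
    "c6i.4xlarge Xeon":      7_096_993_937,
}

# Convert to decimal gigabytes per second (1 GB = 1e9 bytes).
gb_per_sec = {name: v / 1e9 for name, v in sha256_bytes_per_sec.items()}

for name, rate in gb_per_sec.items():
    print(f"{name}: {rate:.2f} GB/s")
```

Graviton3 lands around 13.7 GB/s here, reflecting its SHA hardware extensions, versus roughly 7.1 GB/s for the tested Xeon instance.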
Build2 This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.
Build2 0.13, Time To Compile (Seconds; fewer is better):
  a1.4xlarge Graviton     353.91   SE +/- 1.89, N = 3
  c6g.4xlarge Graviton2   142.28   SE +/- 0.70, N = 3
  c7g.4xlarge Graviton3   115.02   SE +/- 0.64, N = 3
  c6a.4xlarge EPYC        150.99   SE +/- 0.87, N = 3
  c6i.4xlarge Xeon        136.80   SE +/- 0.69, N = 3
Apache HTTP Server This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
Apache HTTP Server 2.4.48, Concurrent Requests: 500 (Requests Per Second; more is better):
  a1.4xlarge Graviton     20133.49   SE +/- 93.64, N = 3
  c6g.4xlarge Graviton2   50077.81   SE +/- 578.32, N = 3
  c7g.4xlarge Graviton3   73546.32   SE +/- 89.82, N = 3
  c6a.4xlarge EPYC        81995.64   SE +/- 636.46, N = 13
  c6i.4xlarge Xeon        91746.57   SE +/- 833.50, N = 7
  1. (CC) gcc options: -shared -fPIC -O2
C-Ray This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. The test is multi-threaded (16 threads per core); by default it shoots 8 rays per pixel for anti-aliasing and generates a 1600 x 1200 image, while this run uses a 4K image with 16 rays per pixel. Learn more via the OpenBenchmarking.org test page.
C-Ray 1.1, Total Time - 4K, 16 Rays Per Pixel (Seconds; fewer is better):
  a1.4xlarge Graviton     104.76   SE +/- 2.00, N = 15
  c6g.4xlarge Graviton2    62.32   SE +/- 0.03, N = 3
  c7g.4xlarge Graviton3    38.52   SE +/- 0.02, N = 3
  c6a.4xlarge EPYC         69.35   SE +/- 0.77, N = 5
  c6i.4xlarge Xeon         92.55   SE +/- 0.04, N = 3
  1. (CC) gcc options: -lm -lpthread -O3
High Performance Conjugate Gradient HPCG is the High Performance Conjugate Gradient, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
High Performance Conjugate Gradient 3.1 (GFLOP/s; more is better):
  a1.4xlarge Graviton      3.77834   SE +/- 0.00065, N = 3
  c6g.4xlarge Graviton2   19.72180   SE +/- 0.01639, N = 3
  c7g.4xlarge Graviton3   26.30580   SE +/- 0.03738, N = 3
  c6a.4xlarge EPYC         5.06042   SE +/- 0.00225, N = 3
  c6i.4xlarge Xeon         8.66031   SE +/- 0.04033, N = 3
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 3.2, Preset: Exhaustive (Seconds; fewer is better):
  a1.4xlarge Graviton     277.77   SE +/- 0.07, N = 3
  c6g.4xlarge Graviton2   159.20   SE +/- 0.00, N = 3
  c7g.4xlarge Graviton3   139.38   SE +/- 0.01, N = 3
  c6a.4xlarge EPYC         72.39   SE +/- 0.03, N = 3
  c6i.4xlarge Xeon         69.64   SE +/- 0.04, N = 3
  1. (CXX) g++ options: -O3 -flto -pthread
TensorFlow Lite
TensorFlow Lite 2022-05-18, Model: Mobilenet Quant (Microseconds; fewer is better):
  a1.4xlarge Graviton     5724.66   SE +/- 20.90, N = 3
  c6g.4xlarge Graviton2   1980.24   SE +/- 14.44, N = 3
  c7g.4xlarge Graviton3   1502.95   SE +/- 17.76, N = 3
  c6a.4xlarge EPYC        3847.96   SE +/- 53.31, N = 15
  c6i.4xlarge Xeon        3967.39   SE +/- 80.05, N = 12
POV-Ray This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.
POV-Ray 3.7.0.7, Trace Time (Seconds; fewer is better):
  a1.4xlarge Graviton     93.80   SE +/- 0.94, N = 15
  c6g.4xlarge Graviton2   51.05   SE +/- 0.00, N = 3
  c7g.4xlarge Graviton3   37.86   SE +/- 0.01, N = 3
  c6a.4xlarge EPYC        49.44   SE +/- 0.18, N = 3
  c6i.4xlarge Xeon        52.78   SE +/- 0.12, N = 3
  Flags on some configurations: -march=native
  1. (CXX) g++ options: -pipe -O3 -ffast-math -R/usr/lib -lXpm -lSM -lICE -lX11 -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system
ACES DGEMM This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.
ACES DGEMM 1.0, Sustained Floating-Point Rate (GFLOP/s; more is better):
  a1.4xlarge Graviton     0.891391   SE +/- 0.002370, N = 3
  c6g.4xlarge Graviton2   4.785123   SE +/- 0.007139, N = 3
  c7g.4xlarge Graviton3   5.853864   SE +/- 0.016350, N = 3
  c6a.4xlarge EPYC        2.432432   SE +/- 0.023324, N = 6
  c6i.4xlarge Xeon        2.230545   SE +/- 0.003819, N = 3
  1. (CC) gcc options: -O3 -march=native -fopenmp
NAS Parallel Benchmarks NPB, the NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers selecting between the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4, Test / Class: IS.D (Total Mop/s; more is better):
  a1.4xlarge Graviton      197.57   SE +/- 0.31, N = 3
  c6g.4xlarge Graviton2    372.76   SE +/- 0.20, N = 3
  c7g.4xlarge Graviton3   1041.90   SE +/- 2.29, N = 3
  c6a.4xlarge EPYC         541.35   SE +/- 0.47, N = 3
  c6i.4xlarge Xeon         861.57   SE +/- 2.14, N = 3
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.2
Apache HTTP Server
Apache HTTP Server 2.4.48, Concurrent Requests: 1000 (Requests Per Second; more is better):
  a1.4xlarge Graviton     19278.68   SE +/- 98.61, N = 3
  c6g.4xlarge Graviton2   46629.45   SE +/- 276.10, N = 3
  c7g.4xlarge Graviton3   72719.33   SE +/- 83.83, N = 3
  c6a.4xlarge EPYC        71537.11   SE +/- 397.88, N = 3
  c6i.4xlarge Xeon        79830.96   SE +/- 335.63, N = 3
  1. (CC) gcc options: -shared -fPIC -O2
nginx This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
nginx 1.21.1, Concurrent Requests: 100 (Requests Per Second; more is better):
  a1.4xlarge Graviton     143155.48   SE +/- 22.67, N = 3
  c6g.4xlarge Graviton2   307349.36   SE +/- 3992.58, N = 3
  c7g.4xlarge Graviton3   345710.87   SE +/- 2009.97, N = 3
  c6a.4xlarge EPYC        388010.76   SE +/- 436.72, N = 3
  c6i.4xlarge Xeon        356302.84   SE +/- 1727.81, N = 3
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx 1.21.1, Concurrent Requests: 200 (Requests Per Second; more is better):
  a1.4xlarge Graviton     141436.20   SE +/- 133.96, N = 3
  c6g.4xlarge Graviton2   308938.67   SE +/- 1347.28, N = 3
  c7g.4xlarge Graviton3   352380.98   SE +/- 3986.77, N = 3
  c6a.4xlarge EPYC        390932.79   SE +/- 1242.81, N = 3
  c6i.4xlarge Xeon        356829.93   SE +/- 1582.66, N = 3
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx 1.21.1, Concurrent Requests: 1000 (Requests Per Second; more is better):
  a1.4xlarge Graviton     138205.11   SE +/- 66.96, N = 3
  c6g.4xlarge Graviton2   308213.13   SE +/- 1677.89, N = 3
  c7g.4xlarge Graviton3   346814.75   SE +/- 1410.11, N = 3
  c6a.4xlarge EPYC        388657.76   SE +/- 781.49, N = 3
  c6i.4xlarge Xeon        347345.49   SE +/- 2637.25, N = 3
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

nginx 1.21.1, Concurrent Requests: 500 (Requests Per Second; more is better):
  a1.4xlarge Graviton     139414.84   SE +/- 141.15, N = 3
  c6g.4xlarge Graviton2   310596.58   SE +/- 3783.68, N = 3
  c7g.4xlarge Graviton3   346613.34   SE +/- 1017.52, N = 3
  c6a.4xlarge EPYC        389030.11   SE +/- 771.95, N = 3
  c6i.4xlarge Xeon        351672.92   SE +/- 1620.39, N = 3
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native
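One thing worth noting in the Nginx numbers is how flat throughput stays as Bombardier's client concurrency rises from 100 to 1000; a small sketch quantifying that for the Graviton3 figures reported above:

```python
# c7g.4xlarge Graviton3 Nginx results (requests/s) at each
# concurrent-request level, as reported above.
graviton3_rps = {
    100:  345710.87,
    200:  352380.98,
    500:  346613.34,
    1000: 346814.75,
}

# Relative spread between the best and worst concurrency level.
lo, hi = min(graviton3_rps.values()), max(graviton3_rps.values())
spread_pct = (hi - lo) / lo * 100

print(f"throughput spread across concurrency levels: {spread_pct:.1f}%")
```

The spread is only about 2%, i.e. each instance is saturated well before 100 concurrent clients and the comparison is effectively concurrency-independent.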
Apache HTTP Server
Apache HTTP Server 2.4.48, Concurrent Requests: 200 (Requests Per Second; more is better):
  a1.4xlarge Graviton     20887.58   SE +/- 59.55, N = 3
  c6g.4xlarge Graviton2   50059.97   SE +/- 112.65, N = 3
  c7g.4xlarge Graviton3   73676.95   SE +/- 649.31, N = 3
  c6a.4xlarge EPYC        83070.00   SE +/- 644.29, N = 3
  c6i.4xlarge Xeon        94458.22   SE +/- 615.05, N = 3
  1. (CC) gcc options: -shared -fPIC -O2

Apache HTTP Server 2.4.48, Concurrent Requests: 100 (Requests Per Second; more is better):
  a1.4xlarge Graviton     18636.43   SE +/- 28.97, N = 3
  c6g.4xlarge Graviton2   46995.35   SE +/- 93.03, N = 3
  c7g.4xlarge Graviton3   67231.88   SE +/- 38.09, N = 3
  c6a.4xlarge EPYC        77567.69   SE +/- 211.56, N = 3
  c6i.4xlarge Xeon        86545.57   SE +/- 389.13, N = 3
  1. (CC) gcc options: -shared -fPIC -O2
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 193 Cells Per Direction (Seconds; fewer is better):
  a1.4xlarge Graviton     182.58   SE +/- 0.15, N = 3
  c6g.4xlarge Graviton2    41.02   SE +/- 0.01, N = 3
  c7g.4xlarge Graviton3    29.13   SE +/- 0.03, N = 3
  c6a.4xlarge EPYC        110.77   SE +/- 0.12, N = 3
  c6i.4xlarge Xeon         69.22   SE +/- 0.14, N = 3
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
m-queens A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
m-queens 1.2, Time To Solve (Seconds; fewer is better):
  a1.4xlarge Graviton     110.37   SE +/- 0.01, N = 3
  c6g.4xlarge Graviton2    75.22   SE +/- 0.00, N = 3
  c7g.4xlarge Graviton3    66.82   SE +/- 0.00, N = 3
  c6a.4xlarge EPYC         72.33   SE +/- 0.02, N = 3
  c6i.4xlarge Xeon         91.23   SE +/- 0.06, N = 3
  1. (CXX) g++ options: -fopenmp -O2 -march=native
NAS Parallel Benchmarks
NAS Parallel Benchmarks 3.4, Test / Class: CG.C (Total Mop/s; more is better):
  a1.4xlarge Graviton     1213.15   SE +/- 11.79, N = 6
  c6g.4xlarge Graviton2   3520.86   SE +/- 9.95, N = 3
  c7g.4xlarge Graviton3   6571.95   SE +/- 17.12, N = 3
  c6a.4xlarge EPYC        6169.22   SE +/- 81.25, N = 3
  c6i.4xlarge Xeon        9522.82   SE +/- 66.44, N = 3
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.2
simdjson This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
simdjson 1.0, Throughput Test: PartialTweets (GB/s; more is better):
  a1.4xlarge Graviton     0.78   SE +/- 0.00, N = 3
  c6g.4xlarge Graviton2   1.51   SE +/- 0.00, N = 3
  c7g.4xlarge Graviton3   2.62   SE +/- 0.00, N = 3
  c6a.4xlarge EPYC        3.64   SE +/- 0.00, N = 3
  c6i.4xlarge Xeon        3.71   SE +/- 0.00, N = 3
  1. (CXX) g++ options: -O3

simdjson 1.0, Throughput Test: DistinctUserID (GB/s; more is better):
  a1.4xlarge Graviton     0.80   SE +/- 0.00, N = 3
  c6g.4xlarge Graviton2   1.53   SE +/- 0.00, N = 3
  c7g.4xlarge Graviton3   2.69   SE +/- 0.00, N = 3
  c6a.4xlarge EPYC        4.30   SE +/- 0.01, N = 3
  c6i.4xlarge Xeon        4.30   SE +/- 0.00, N = 3
  1. (CXX) g++ options: -O3
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds; fewer is better):
  a1.4xlarge Graviton     124.71   SE +/- 0.09, N = 3
  c6g.4xlarge Graviton2    66.15   SE +/- 0.01, N = 3
  c7g.4xlarge Graviton3    48.21   SE +/- 0.01, N = 3
  c6a.4xlarge EPYC         48.68   SE +/- 0.06, N = 3
  c6i.4xlarge Xeon         41.81   SE +/- 0.34, N = 3
  Flags on some configurations: -ltiff
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16
simdjson
simdjson 1.0, Throughput Test: Kostya (GB/s; more is better):
  a1.4xlarge Graviton     0.63   SE +/- 0.00, N = 3
  c6g.4xlarge Graviton2   1.19   SE +/- 0.00, N = 3
  c7g.4xlarge Graviton3   1.94   SE +/- 0.00, N = 3
  c6a.4xlarge EPYC        2.80   SE +/- 0.00, N = 3
  c6i.4xlarge Xeon        2.46   SE +/- 0.00, N = 3
  1. (CXX) g++ options: -O3
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
Zstd Compression 1.5.0, Compression Level: 19, Long Mode - Decompression Speed (MB/s; more is better):
  a1.4xlarge Graviton     1213.9   SE +/- 15.28, N = 3
  c6g.4xlarge Graviton2   2196.3   SE +/- 2.93, N = 3
  c7g.4xlarge Graviton3   3240.6   SE +/- 6.93, N = 3
  c6a.4xlarge EPYC        2826.0   SE +/- 6.53, N = 3
  c6i.4xlarge Xeon        2666.1   SE +/- 7.82, N = 3
  Flags on some configurations: -llzma
  1. (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.0, Compression Level: 19, Long Mode - Compression Speed (MB/s; more is better):
  a1.4xlarge Graviton     16.0   SE +/- 0.00, N = 3
  c6g.4xlarge Graviton2   31.0   SE +/- 0.03, N = 3
  c7g.4xlarge Graviton3   39.5   SE +/- 0.23, N = 3
  c6a.4xlarge EPYC        25.9   SE +/- 0.27, N = 3
  c6i.4xlarge Xeon        33.8   SE +/- 0.10, N = 3
  Flags on some configurations: -llzma
  1. (CC) gcc options: -O3 -pthread -lz
TensorFlow Lite
TensorFlow Lite 2022-05-18, Model: Inception V4 (Microseconds; fewer is better):
  a1.4xlarge Graviton     188910.0   SE +/- 1746.17, N = 3
  c6g.4xlarge Graviton2    46793.9   SE +/- 197.89, N = 3
  c7g.4xlarge Graviton3    41855.1   SE +/- 210.27, N = 3
  c6a.4xlarge EPYC         44920.6   SE +/- 53.95, N = 3
  c6i.4xlarge Xeon         41185.7   SE +/- 75.14, N = 3

TensorFlow Lite 2022-05-18, Model: Inception ResNet V2 (Microseconds; fewer is better):
  a1.4xlarge Graviton     171169.0   SE +/- 825.35, N = 3
  c6g.4xlarge Graviton2    45955.7   SE +/- 336.95, N = 3
  c7g.4xlarge Graviton3    40051.3   SE +/- 305.31, N = 3
  c6a.4xlarge EPYC         41366.6   SE +/- 27.66, N = 3
  c6i.4xlarge Xeon         41179.7   SE +/- 110.01, N = 3

TensorFlow Lite 2022-05-18, Model: Mobilenet Float (Microseconds; fewer is better):
  a1.4xlarge Graviton     9990.15   SE +/- 113.94, N = 3
  c6g.4xlarge Graviton2   2500.87   SE +/- 28.63, N = 3
  c7g.4xlarge Graviton3   2156.60   SE +/- 19.61, N = 3
  c6a.4xlarge EPYC        2159.72   SE +/- 1.03, N = 3
  c6i.4xlarge Xeon        1965.07   SE +/- 1.81, N = 3

TensorFlow Lite 2022-05-18, Model: SqueezeNet (Microseconds; fewer is better):
  a1.4xlarge Graviton     12014.70   SE +/- 46.48, N = 3
  c6g.4xlarge Graviton2    3969.35   SE +/- 37.23, N = 3
  c7g.4xlarge Graviton3    3257.94   SE +/- 22.07, N = 3
  c6a.4xlarge EPYC         3103.12   SE +/- 1.37, N = 3
  c6i.4xlarge Xeon         2983.93   SE +/- 3.54, N = 3
OpenSSL
OpenSSL 3.0, Algorithm: RSA4096 (verify/s; more is better):
  a1.4xlarge Graviton      45328.6   SE +/- 63.75, N = 3
  c6g.4xlarge Graviton2    53951.5   SE +/- 3.30, N = 3
  c7g.4xlarge Graviton3   178460.4   SE +/- 82.61, N = 3
  c6a.4xlarge EPYC        136784.2   SE +/- 74.60, N = 3
  c6i.4xlarge Xeon        140964.4   SE +/- 47.94, N = 3
  Flags on some configurations: -m64
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl

OpenSSL 3.0, Algorithm: RSA4096 (sign/s; more is better):
  a1.4xlarge Graviton      588.3   SE +/- 0.12, N = 3
  c6g.4xlarge Graviton2    660.6   SE +/- 0.03, N = 3
  c7g.4xlarge Graviton3   2546.4   SE +/- 0.23, N = 3
  c6a.4xlarge EPYC        2088.9   SE +/- 1.40, N = 3
  c6i.4xlarge Xeon        2161.3   SE +/- 4.47, N = 3
  Flags on some configurations: -m64
  1. (CC) gcc options: -pthread -O3 -lssl -lcrypto -ldl
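The large gap between the RSA4096 verify and sign rates above is expected: verification uses the small public exponent while signing performs a full private-key exponentiation. A quick sketch of the ratio for the Graviton3 figures reported above:

```python
# c7g.4xlarge Graviton3 OpenSSL RSA4096 results as reported above.
rsa4096 = {
    "verify_per_sec": 178460.4,
    "sign_per_sec":   2546.4,
}

# Verification is roughly two orders of magnitude cheaper than signing.
ratio = rsa4096["verify_per_sec"] / rsa4096["sign_per_sec"]

print(f"verify/sign ratio: {ratio:.1f}x")
```

The same roughly 65-80x asymmetry holds across all five instances, so the sign/s chart is the more demanding of the two.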
Zstd Compression
Zstd Compression 1.5.0, Compression Level: 19 - Decompression Speed (MB/s; more is better):
  a1.4xlarge Graviton     1121.7   SE +/- 4.74, N = 3
  c6g.4xlarge Graviton2   2051.6   SE +/- 12.10, N = 3
  c7g.4xlarge Graviton3   3050.3   SE +/- 7.75, N = 3
  c6a.4xlarge EPYC        2907.5   SE +/- 3.25, N = 3
  c6i.4xlarge Xeon        2582.0   SE +/- 24.18, N = 3
  Flags on some configurations: -llzma
  1. (CC) gcc options: -O3 -pthread -lz

Zstd Compression 1.5.0, Compression Level: 19 - Compression Speed (MB/s; more is better):
  a1.4xlarge Graviton     16.9   SE +/- 0.03, N = 3
  c6g.4xlarge Graviton2   34.6   SE +/- 0.06, N = 3
  c7g.4xlarge Graviton3   41.2   SE +/- 0.00, N = 3
  c6a.4xlarge EPYC        30.0   SE +/- 0.21, N = 3
  c6i.4xlarge Xeon        38.1   SE +/- 0.40, N = 3
  Flags on some configurations: -llzma
  1. (CC) gcc options: -O3 -pthread -lz
NAS Parallel Benchmarks
NAS Parallel Benchmarks 3.4, Test / Class: FT.C (Total Mop/s; more is better):
  a1.4xlarge Graviton      2927.16   SE +/- 1.73, N = 3
  c6g.4xlarge Graviton2    6244.48   SE +/- 1.10, N = 3
  c7g.4xlarge Graviton3   11791.77   SE +/- 1.17, N = 3
  c6a.4xlarge EPYC        18299.96   SE +/- 45.90, N = 3
  c6i.4xlarge Xeon        20423.57   SE +/- 40.24, N = 3
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.2
simdjson
simdjson 1.0, Throughput Test: LargeRandom (GB/s; more is better):
  a1.4xlarge Graviton     0.30   SE +/- 0.00, N = 3
  c6g.4xlarge Graviton2   0.49   SE +/- 0.00, N = 3
  c7g.4xlarge Graviton3   0.70   SE +/- 0.00, N = 3
  c6a.4xlarge EPYC        0.95   SE +/- 0.00, N = 3
  c6i.4xlarge Xeon        0.86   SE +/- 0.00, N = 3
  1. (CXX) g++ options: -O3
WebP Image Encode
WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time - Seconds; fewer is better):
  a1.4xlarge Graviton     61.80   SE +/- 0.06, N = 3
  c6g.4xlarge Graviton2   31.08   SE +/- 0.02, N = 3
  c7g.4xlarge Graviton3   22.77   SE +/- 0.09, N = 3
  c6a.4xlarge EPYC        26.71   SE +/- 0.17, N = 15
  c6i.4xlarge Xeon        21.12   SE +/- 0.03, N = 3
  Flags on some configurations: -ltiff
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16
Stockfish This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
Stockfish 13, Total Time (Nodes Per Second; more is better):
  a1.4xlarge Graviton     10980430   SE +/- 123749.22, N = 3
  c6g.4xlarge Graviton2   21679245   SE +/- 292329.99, N = 3
  c7g.4xlarge Graviton3   27608891   SE +/- 153578.64, N = 3
  c6a.4xlarge EPYC        23857623   SE +/- 149731.77, N = 3
  c6i.4xlarge Xeon        22081961   SE +/- 242448.39, N = 3
  Flags on some configurations: -m64 -msse -msse3 -mpopcnt -mavx2 -msse4.1 -mssse3 -msse2 -mbmi2, and -m64 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2
  1. (CXX) g++ options: -lgcov -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -flto -flto=jobserver
PHPBench PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.
PHPBench 0.8.1, PHP Benchmark Suite (Score; more is better):
  a1.4xlarge Graviton     241259   SE +/- 816.27, N = 3
  c6g.4xlarge Graviton2   449855   SE +/- 743.13, N = 3
  c7g.4xlarge Graviton3   666484   SE +/- 525.83, N = 3
  c6a.4xlarge EPYC        480741   SE +/- 2681.41, N = 3
  c6i.4xlarge Xeon        828186   SE +/- 983.65, N = 3
Rodinia
Rodinia 3.1, Test: OpenMP Streamcluster (Seconds; fewer is better):
  a1.4xlarge Graviton     47.43   SE +/- 0.02, N = 3
  c6g.4xlarge Graviton2   15.48   SE +/- 0.26, N = 15
  c7g.4xlarge Graviton3   13.30   SE +/- 0.33, N = 12
  c6a.4xlarge EPYC        18.38   SE +/- 0.05, N = 3
  c6i.4xlarge Xeon        23.51   SE +/- 0.07, N = 3
  1. (CXX) g++ options: -O2 -lOpenCL
PyBench This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
PyBench 2018-02-16, Total For Average Test Times (Milliseconds; fewer is better):
  a1.4xlarge Graviton     3452   SE +/- 18.15, N = 3
  c6g.4xlarge Graviton2   1741   SE +/- 1.67, N = 3
  c7g.4xlarge Graviton3   1185   SE +/- 0.33, N = 3
  c6a.4xlarge EPYC        1961   SE +/- 1.53, N = 3
  c6i.4xlarge Xeon         997   SE +/- 3.84, N = 3
Zstd Compression
Zstd Compression 1.5.0, Compression Level: 3 - Compression Speed (MB/s; more is better):
  c6g.4xlarge Graviton2   2878.8   SE +/- 3.74, N = 3
  c6a.4xlarge EPYC        2768.7   SE +/- 18.93, N = 3
  a1.4xlarge Graviton      633.9   SE +/- 4.47, N = 3
  c7g.4xlarge Graviton3   4639.1   SE +/- 9.57, N = 3
  c6i.4xlarge Xeon        3440.6   SE +/- 29.53, N = 3
  Flags on some configurations: -llzma
  1. (CC) gcc options: -O3 -pthread -lz
7-Zip Compression This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
7-Zip Compression 21.06, Test: Decompression Rating (MIPS; more is better):
  a1.4xlarge Graviton     40891   SE +/- 31.21, N = 3
  c6g.4xlarge Graviton2   59445   SE +/- 239.68, N = 3
  c7g.4xlarge Graviton3   73054   SE +/- 12.88, N = 3
  c6a.4xlarge EPYC        57318   SE +/- 142.56, N = 3
  c6i.4xlarge Xeon        45653   SE +/- 35.00, N = 3
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 21.06, Test: Compression Rating (MIPS; more is better):
  a1.4xlarge Graviton     32498   SE +/- 91.00, N = 3
  c6g.4xlarge Graviton2   71285   SE +/- 44.77, N = 3
  c7g.4xlarge Graviton3   97824   SE +/- 159.36, N = 3
  c6a.4xlarge EPYC        62562   SE +/- 16.02, N = 3
  c6i.4xlarge Xeon        66631   SE +/- 174.34, N = 3
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Stress-NG Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Stress-NG 0.14, Test: CPU Stress (Bogo Ops/s; more is better):
  a1.4xlarge Graviton      2366.00   SE +/- 0.16, N = 3
  c6g.4xlarge Graviton2    3404.94   SE +/- 0.54, N = 3
  c7g.4xlarge Graviton3    5029.71   SE +/- 0.41, N = 3
  c6a.4xlarge EPYC        13304.50   SE +/- 37.60, N = 3
  c6i.4xlarge Xeon        12527.16   SE +/- 155.66, N = 3
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
Google SynthMark SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.
Google SynthMark 20201109, Test: VoiceMark_100 (Voices; more is better):
  a1.4xlarge Graviton     331.07   SE +/- 0.00, N = 3
  c6g.4xlarge Graviton2   470.39   SE +/- 0.33, N = 3
  c7g.4xlarge Graviton3   675.64   SE +/- 0.32, N = 3
  c6a.4xlarge EPYC        663.07   SE +/- 7.09, N = 3
  c6i.4xlarge Xeon        565.69   SE +/- 2.00, N = 3
  1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast
Stress-NG
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: IO_uring a1.4xlarge Graviton c6g.4xlarge Graviton2 c7g.4xlarge Graviton3 c6a.4xlarge EPYC c6i.4xlarge Xeon 200K 400K 600K 800K 1000K SE +/- 3840.04, N = 3 SE +/- 2395.13, N = 3 SE +/- 614.16, N = 3 SE +/- 713.03, N = 3 SE +/- 405.56, N = 3 918172.37 770521.81 843015.78 768723.46 1037943.37 1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: Memory Copying a1.4xlarge Graviton c6g.4xlarge Graviton2 c7g.4xlarge Graviton3 c6a.4xlarge EPYC c6i.4xlarge Xeon 1400 2800 4200 5600 7000 SE +/- 0.91, N = 3 SE +/- 3.75, N = 3 SE +/- 3.52, N = 3 SE +/- 11.57, N = 3 SE +/- 0.94, N = 3 798.24 2903.00 6693.32 3551.80 3150.49 1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
Stress-NG 0.14 - Test: Crypto (Bogo Ops/s, more is better)
  a1.4xlarge Graviton      11985.38   (SE +/- 6.29, N = 3)
  c6g.4xlarge Graviton2    17924.18   (SE +/- 92.83, N = 3)
  c7g.4xlarge Graviton3    23181.81   (SE +/- 32.01, N = 3)
  c6a.4xlarge EPYC         13556.06   (SE +/- 3.93, N = 3)
  c6i.4xlarge Xeon         10210.34   (SE +/- 5.89, N = 3)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
Stress-NG 0.14 - Test: Vector Math (Bogo Ops/s, more is better)
  a1.4xlarge Graviton      27341.47   (SE +/- 0.49, N = 3)
  c6g.4xlarge Graviton2    37753.89   (SE +/- 15.72, N = 3)
  c7g.4xlarge Graviton3    55258.17   (SE +/- 17.05, N = 3)
  c6a.4xlarge EPYC         53787.61   (SE +/- 2.46, N = 3)
  c6i.4xlarge Xeon         40140.30   (SE +/- 28.50, N = 3)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, fewer is better)
  a1.4xlarge Graviton      53.77062740   (SE +/- 0.02862870, N = 3)
  c6g.4xlarge Graviton2    11.57335470   (SE +/- 0.01351889, N = 3)
  c7g.4xlarge Graviton3     8.01671425   (SE +/- 0.01401446, N = 3)
  c6a.4xlarge EPYC         28.27976610   (SE +/- 0.03718674, N = 3)
  c6i.4xlarge Xeon         17.86827720   (SE +/- 0.09619197, N = 3)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
Coremark This is a test of EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.
Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better)
  a1.4xlarge Graviton      203869.40   (SE +/- 116.54, N = 3)
  c6g.4xlarge Graviton2    315464.34   (SE +/- 49.84, N = 3)
  c7g.4xlarge Graviton3    405413.86   (SE +/- 3211.91, N = 3)
  c6a.4xlarge EPYC         345133.44   (SE +/- 2163.00, N = 3)
  c6i.4xlarge Xeon         285378.84   (SE +/- 80.93, N = 3)
  1. (CC) gcc options: -O2 -lrt
N-Queens This is a test of the OpenMP version of a test that solves the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.
N-Queens 1.0 - Elapsed Time (Seconds, fewer is better)
  a1.4xlarge Graviton      32.29   (SE +/- 0.00, N = 3)
  c6g.4xlarge Graviton2    23.14   (SE +/- 0.00, N = 3)
  c7g.4xlarge Graviton3    21.54   (SE +/- 0.00, N = 3)
  c6a.4xlarge EPYC         16.38   (SE +/- 0.00, N = 3)
  c6i.4xlarge Xeon         18.84   (SE +/- 0.00, N = 3)
  1. (CC) gcc options: -static -fopenmp -O3 -march=native
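The N-queens kernel itself is a small search problem. A minimal counting sketch in Python (illustrative only: the benchmark runs an OpenMP C version at board size 18, while this uses the classic 8x8 board) is:

```python
# Count placements of n non-attacking queens by depth-first search.
# Illustrative sketch, not the benchmark's OpenMP C implementation.
def count_nqueens(n):
    def solve(row, cols, diag1, diag2):
        if row == n:
            return 1
        total = 0
        for col in range(n):
            # A queen is safe if no earlier queen shares its column or diagonals.
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue
            total += solve(row + 1, cols | {col},
                           diag1 | {row - col}, diag2 | {row + col})
        return total
    return solve(0, frozenset(), frozenset(), frozenset())

print(count_nqueens(8))  # the classic 8x8 board has 92 solutions
```

The OpenMP version parallelizes over first-row placements, which is why the test scales with core count.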
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 3.2 - Preset: Thorough (Seconds, fewer is better)
  a1.4xlarge Graviton      33.5198   (SE +/- 0.0061, N = 3)
  c6g.4xlarge Graviton2    16.5222   (SE +/- 0.0064, N = 3)
  c7g.4xlarge Graviton3    13.9248   (SE +/- 0.0011, N = 3)
  c6a.4xlarge EPYC          7.9818   (SE +/- 0.0154, N = 3)
  c6i.4xlarge Xeon          7.2625   (SE +/- 0.0001, N = 3)
  1. (CXX) g++ options: -O3 -flto -pthread
Rodinia Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, fewer is better)
  a1.4xlarge Graviton      41.45   (SE +/- 0.08, N = 3)
  c6g.4xlarge Graviton2    17.04   (SE +/- 0.05, N = 3)
  c7g.4xlarge Graviton3    10.48   (SE +/- 0.02, N = 3)
  c6a.4xlarge EPYC         21.79   (SE +/- 0.08, N = 3)
  c6i.4xlarge Xeon         20.45   (SE +/- 0.02, N = 3)
  1. (CXX) g++ options: -O2 -lOpenCL
NAS Parallel Benchmarks NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, more is better)
  a1.4xlarge Graviton       3266.36   (SE +/- 1.64, N = 3)
  c6g.4xlarge Graviton2     6720.68   (SE +/- 1.39, N = 3)
  c7g.4xlarge Graviton3    13481.61   (SE +/- 4.69, N = 3)
  c6a.4xlarge EPYC         16826.43   (SE +/- 30.62, N = 3)
  c6i.4xlarge Xeon         26298.81   (SE +/- 184.24, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.2
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
  a1.4xlarge Graviton      165513333   (SE +/- 8819.17, N = 3)
  c6g.4xlarge Graviton2    262890000   (SE +/- 35118.85, N = 3)
  c7g.4xlarge Graviton3    383606667   (SE +/- 400097.21, N = 3)
  c6a.4xlarge EPYC         509746667   (SE +/- 489364.67, N = 3)
  c6i.4xlarge Xeon         373100000   (SE +/- 41633.32, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
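The workload here is FIR-style filtering throughput measured in samples per second. The general shape of such a measurement (a minimal pure-Python sketch under assumed parameters, nothing like Liquid-DSP's optimized, multi-threaded C internals) is:

```python
import time

def fir_filter(samples, taps):
    # Direct-form FIR: each output is the dot product of the tap vector
    # with the most recent len(taps) input samples.
    n = len(taps)
    return [
        sum(samples[i - j] * taps[j] for j in range(n))
        for i in range(n - 1, len(samples))
    ]

# Illustrative sizes: filter length 57 matches the benchmark config;
# the input length is an arbitrary choice for this sketch.
samples = [float(i % 7) for i in range(4096)]
taps = [1.0 / 57] * 57

start = time.perf_counter()
out = fir_filter(samples, taps)
elapsed = time.perf_counter() - start
print(f"{len(out) / elapsed:.0f} samples/s")
```

Liquid-DSP reports the aggregate rate across all worker threads, which is why the 16-thread figures reach hundreds of millions of samples per second.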
libavif avifenc This test of the AOMedia libavif library encodes a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.
libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds, fewer is better)
  a1.4xlarge Graviton      33.99   (SE +/- 0.31, N = 3)
  c6g.4xlarge Graviton2    16.52   (SE +/- 0.17, N = 3)
  c7g.4xlarge Graviton3    11.91   (SE +/- 0.01, N = 3)
  c6a.4xlarge EPYC         16.39   (SE +/- 0.12, N = 3)
  c6i.4xlarge Xeon         17.53   (SE +/- 0.03, N = 3)
  1. (CXX) g++ options: -O3 -fPIC -lm
Algebraic Multi-Grid Benchmark AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.
Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, more is better)
  a1.4xlarge Graviton       186716933   (SE +/- 176548.39, N = 3)
  c6g.4xlarge Graviton2     932652900   (SE +/- 3420043.89, N = 3)
  c7g.4xlarge Graviton3    1258807333   (SE +/- 952437.28, N = 3)
  c6a.4xlarge EPYC          267670700   (SE +/- 103921.81, N = 3)
  c6i.4xlarge Xeon          661364767   (SE +/- 5114517.12, N = 3)
  1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi
DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, fewer is better)
  a1.4xlarge Graviton      12997   (SE +/- 48.38, N = 4)
  c6g.4xlarge Graviton2     5626   (SE +/- 23.29, N = 4)
  c7g.4xlarge Graviton3     3940   (SE +/- 6.99, N = 4)
  c6a.4xlarge EPYC          4616   (SE +/- 23.33, N = 4)
  c6i.4xlarge Xeon          4013   (SE +/- 24.07, N = 4)
LULESH LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.
LULESH 2.0.3 (z/s, more is better)
  a1.4xlarge Graviton       2328.27   (SE +/- 6.27, N = 3)
  c6g.4xlarge Graviton2     6016.16   (SE +/- 4.88, N = 3)
  c7g.4xlarge Graviton3    10940.94   (SE +/- 76.73, N = 3)
  c6a.4xlarge EPYC          5452.11   (SE +/- 5.52, N = 3)
  c6i.4xlarge Xeon          8112.37   (SE +/- 14.20, N = 3)
  1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi
TSCP This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.
TSCP 1.81 - AI Chess Performance (Nodes Per Second, more is better)
  a1.4xlarge Graviton       538500   (SE +/- 196.86, N = 5)
  c6g.4xlarge Graviton2     872313   (SE +/- 338.27, N = 5)
  c7g.4xlarge Graviton3    1370094   (SE +/- 0.00, N = 5)
  c6a.4xlarge EPYC         1442631   (SE +/- 4180.17, N = 5)
  c6i.4xlarge Xeon         1272596   (SE +/- 1099.67, N = 5)
  1. (CC) gcc options: -O3 -march=native
a1.4xlarge Graviton Processor: ARMv8 Cortex-A72 (16 Cores), Motherboard: Amazon EC2 a1.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Not affected + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Branch predictor hardening BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 25 May 2022 00:29 by user ubuntu.
c6g.4xlarge Graviton2 Processor: ARMv8 Neoverse-N1 (16 Cores), Motherboard: Amazon EC2 c6g.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 25 May 2022 13:01 by user ubuntu.
c7g.4xlarge Graviton3 Processor: ARMv8 Neoverse-V1 (16 Cores), Motherboard: Amazon EC2 c7g.4xlarge (1.0 BIOS), Chipset: Amazon Device 0200, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (aarch64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-target-system-zlib=auto -v
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of CSV2 BHB + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 24 May 2022 11:30 by user ubuntu.
c6a.4xlarge EPYC Processor: AMD EPYC 7R13 (8 Cores / 16 Threads), Motherboard: Amazon EC2 c6a.4xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (x86_64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: CPU Microcode: 0xa001144
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 May 2022 00:31 by user ubuntu.
c6i.4xlarge Xeon Processor: Intel Xeon Platinum 8375C (8 Cores / 16 Threads), Motherboard: Amazon EC2 c6i.4xlarge (1.0 BIOS), Chipset: Intel 440FX 82441FX PMC, Memory: 32GB, Disk: 193GB Amazon Elastic Block Store, Network: Amazon Elastic
OS: Ubuntu 22.04, Kernel: 5.15.0-1004-aws (x86_64), Compiler: GCC 11.2.0, File-System: ext4, System Layer: amazon
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: CPU Microcode: 0xd000331
Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 May 2022 00:32 by user ubuntu.