Granite Rapids MRDIMM vs. DDR5 Benchmarks
Benchmarks for a future article. 2 x Intel Xeon 6980P testing with an Intel AvenueCity v0.01 (BHSDCRB1.IPC.0035.D44.2408292336 BIOS) and ASPEED graphics on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410037-NE-2410039NE61&gru .
Granite Rapids MRDIMM vs. DDR5 Benchmarks - System Details

24 x DDR5-6400:
  Processor: 2 x Intel Xeon 6980P @ 3.90GHz (256 Cores / 512 Threads)
  Motherboard: Intel AvenueCity v0.01 (BHSDCRB1.IPC.0035.D44.2408292336 BIOS)
  Chipset: Intel Ice Lake IEH
  Memory: 1520GB
  Disk: 960GB SAMSUNG MZ1L2960HCJR-00A07 + 2 x 3201GB KIOXIA KCMYXVUG3T20
  Graphics: ASPEED
  Network: Intel I210 + 2 x Intel 10-Gigabit X540-AT2
  OS: Ubuntu 24.04
  Kernel: 6.8.0-22-generic (x86_64)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1200

24 x MRDIMM 8800 (differs from the above only in the following):
  Disk: 2 x 1920GB KIOXIA KCD8XPUG1T92 + 960GB SAMSUNG MZ1L2960HCJR-00A07
  Kernel: 6.10.0-phx (x86_64)

Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-OiuXZC/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-OiuXZC/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x10002f0
Java Details - OpenJDK Runtime Environment (build 21.0.3-ea+7-Ubuntu-1build1)
Python Details - Python 3.12.2
Security Details:
  24 x DDR5-6400: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; RSB filling; PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
  24 x MRDIMM 8800: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: BHI_DIS_S + srbds: Not affected + tsx_async_abort: Not affected
Granite Rapids MRDIMM vs. DDR5 Benchmarks - Result Overview
(Values in Seconds or ms are fewer-is-better; all other units are more-is-better. Tests with no DDR5-6400 entry were only run on the MRDIMM configuration.)

Test (unit) | 24 x DDR5-6400 | 24 x MRDIMM 8800
amg (Figure Of Merit) | - | 8606553000
hpcg: 104 104 104 - 60 (GFLOP/s) | 130.836 | 170.946
hpcg: 144 144 144 - 60 (GFLOP/s) | 128.043 | 169.491
libxsmm: 64 (GFLOPS/s) | - | 4501.4
libxsmm: 128 (GFLOPS/s) | - | 7398.5
stream: Copy (MB/s) | 870313.7 | 861560.8
stream: Scale (MB/s) | 848431.9 | 952124.6
stream: Add (MB/s) | 887756.1 | 937751.9
stream: Triad (MB/s) | 893252.7 | 879963.6
tinymembench: Standard Memcpy (MB/s) | 15023.6 | 15008.9
tinymembench: Standard Memset (MB/s) | 30044.8 | 30001.0
mbw: Memory Copy - 8192 MiB (MiB/s) | 14219.822 | 15316.413
mbw: Memory Copy, Fixed Block Size - 8192 MiB (MiB/s) | 8683.007 | 8766.213
gromacs: MPI CPU - water_GMX50_bare (Ns Per Day) | - | 33.306
cassandra: Writes (Op/s) | 82480 | 87063
java-jmh: Throughput (Ops/s) | - | 790759705068.81
npb: BT.C (Total Mop/s) | 759415.21 | 804396.26
npb: EP.D (Total Mop/s) | 32963.42 | 32324.72
npb: LU.C (Total Mop/s) | 735889.76 | 769386.67
npb: SP.B (Total Mop/s) | 349232.51 | 373559.46
npb: IS.D (Total Mop/s) | 13871.96 | 15638.56
npb: MG.C (Total Mop/s) | 365424.17 | 449470.32
npb: CG.C (Total Mop/s) | 113647.23 | 118207.17
pgbench: 100 - 800 - Read Write (TPS) | 14207 | 14452
pgbench: 100 - 800 - Read Only (TPS) | 503950 | 509160
pgbench: 100 - 1000 - Read Write (TPS) | 13686 | 13757
pgbench: 100 - 1000 - Read Only (TPS) | 637458 | 475621
lulesh (z/s) | - | 124013.27
pennant: leblancbig (Seconds) | - | 1.342768
pennant: sedovbig (Seconds) | - | 6.472671
pgbench: 100 - 800 - Read Write - Average Latency (ms) | 56.311 | 55.372
pgbench: 100 - 800 - Read Only - Average Latency (ms) | 1.595 | 1.596
pgbench: 100 - 1000 - Read Write - Average Latency (ms) | 73.078 | 72.692
pgbench: 100 - 1000 - Read Only - Average Latency (ms) | 1.582 | 2.121
incompact3d: input.i3d 193 Cells Per Direction (Seconds) | 2.89296635 | 2.62023497
incompact3d: X3D-benchmarking input.i3d (Seconds) | 86.1528244 | 69.8064524
openfoam: drivaerFastback, Small Mesh Size - Mesh Time (Seconds) | 28.90647 | 23.416256
openfoam: drivaerFastback, Small Mesh Size - Execution Time (Seconds) | 18.999965 | 18.389208
openfoam: drivaerFastback, Medium Mesh Size - Mesh Time (Seconds) | 157.08742 | 137.84871
openfoam: drivaerFastback, Medium Mesh Size - Execution Time (Seconds) | 80.046631 | 71.936334
openradioss: Chrysler Neon 1M (Seconds) | 89.11 | 60.82
specfem3d: Layered Halfspace (Seconds) | 7.576594141 | 7.113553561
specfem3d: Water-layered Halfspace (Seconds) | 9.566090575 | 9.216693066
specfem3d: Homogeneous Halfspace (Seconds) | 5.735451812 | 5.417761321
specfem3d: Mount St. Helens (Seconds) | - | 3.955489179
specfem3d: Tomographic Model (Seconds) | - | 4.379927179
build-linux-kernel: defconfig (Seconds) | - | 23.431
build-linux-kernel: allmodconfig (Seconds) | - | 131.839
build-llvm: Ninja (Seconds) | - | 76.600
build-llvm: Unix Makefiles (Seconds) | - | 198.677
build-nodejs: Time To Compile (Seconds) | - | 138.989
System Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts)
24 x MRDIMM 8800: Min: 183.7 / Avg: 465.09 / Max: 1108.2
Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better)
24 x MRDIMM 8800: 8606553000 (SE +/- 17997194.71, N = 3)
1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi
High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
X Y Z: 104 104 104 - RT: 60 - 24 x DDR5-6400: 130.84 (SE +/- 0.48, N = 3); 24 x MRDIMM 8800: 170.95 (SE +/- 0.19, N = 3)
X Y Z: 144 144 144 - RT: 60 - 24 x DDR5-6400: 128.04 (SE +/- 0.22, N = 3); 24 x MRDIMM 8800: 169.49 (SE +/- 0.12, N = 3)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi
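HPCG is dominated by memory-bandwidth-bound sparse kernels, chiefly sparse matrix-vector products over a 27-point stencil, which is consistent with the roughly 30% advantage the MRDIMM configuration shows here. As a minimal sketch of why such a kernel streams far more bytes than it computes (illustrative CSR code only, not HPCG's actual implementation; the function and parameter names are made up for this example):

/* Minimal CSR sparse matrix-vector product, y = A*x.
 * Illustrative only: HPCG's real kernels use their own data structures,
 * a multigrid preconditioner, and MPI halo exchanges. */
#include <stddef.h>

void spmv_csr(size_t nrows,
              const size_t *row_ptr,   /* nrows + 1 entries */
              const int    *col_idx,   /* one per nonzero */
              const double *vals,      /* one per nonzero */
              const double *x,
              double       *y)
{
    #pragma omp parallel for
    for (size_t i = 0; i < nrows; i++) {
        double sum = 0.0;
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += vals[k] * x[col_idx[k]];   /* ~12 B of matrix data per multiply-add */
        y[i] = sum;
    }
}

With only two floating-point operations per ~12 bytes of matrix data, runtime is set almost entirely by sustained memory bandwidth rather than compute throughput.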
libxsmm 2-1.17-3645 (GFLOPS/s, More Is Better)
M N K: 64 - 24 x MRDIMM 8800: 4501.4 (SE +/- 244.26, N = 15)
M N K: 128 - 24 x MRDIMM 8800: 7398.5 (SE +/- 660.65, N = 6)
1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2
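libxsmm JIT-generates specialized kernels for small dense matrix multiplications; the "M N K: 64" and "M N K: 128" labels above are square GEMM dimensions. A naive reference loop clarifying what those parameters mean (a sketch only, nothing like libxsmm's generated code; the function name is hypothetical):

/* Reference C += A * B for square M = N = K matrices in row-major order.
 * Shows only what the M/N/K labels in the results denote; libxsmm itself
 * emits architecture-specific JIT kernels for these small GEMMs. */
void gemm_ref(int n, const double *A, const double *B, double *C)
{
    for (int i = 0; i < n; i++)
        for (int k = 0; k < n; k++) {
            double a = A[i * n + k];
            for (int j = 0; j < n; j++)
                C[i * n + j] += a * B[k * n + j];
        }
}

Each such multiply performs 2 * n^3 floating-point operations (about 0.52 million at n = 64), so the GFLOPS/s figures reflect throughput across many repetitions of these small kernels.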
Stream 2013-01-17 (MB/s, More Is Better)
Type: Copy - 24 x DDR5-6400: 870313.7 (SE +/- 5160.21, N = 25); 24 x MRDIMM 8800: 861560.8 (SE +/- 7612.22, N = 25)
Type: Scale - 24 x DDR5-6400: 848431.9 (SE +/- 18666.81, N = 5); 24 x MRDIMM 8800: 952124.6 (SE +/- 15910.14, N = 5)
Type: Add - 24 x DDR5-6400: 887756.1 (SE +/- 19594.57, N = 5); 24 x MRDIMM 8800: 937751.9 (SE +/- 13677.37, N = 5)
Type: Triad - 24 x DDR5-6400: 893252.7 (SE +/- 18029.90, N = 5); 24 x MRDIMM 8800: 879963.6 (SE +/- 16674.48, N = 5)
1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp
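STREAM's four kernels are simple vector operations over arrays sized well beyond the caches, which is also why the footnote shows -mcmodel=medium: with arrays this large the static data exceeds 2 GB. A minimal sketch of the kernels (the array size below is an assumed value chosen for illustration; the real benchmark adds per-kernel timing, repetition, and validation):

/* The four STREAM kernels over large static arrays.  N is an assumed
 * size (1 GiB per array); real STREAM times each kernel separately,
 * repeats it, and validates the results. */
#include <stddef.h>

#define N (1u << 27)                     /* ~134M doubles, 1 GiB per array */
static double a[N], b[N], c[N];

void stream_kernels(double scalar)
{
    #pragma omp parallel for
    for (size_t i = 0; i < N; i++) c[i] = a[i];                  /* Copy  */
    #pragma omp parallel for
    for (size_t i = 0; i < N; i++) b[i] = scalar * c[i];         /* Scale */
    #pragma omp parallel for
    for (size_t i = 0; i < N; i++) c[i] = a[i] + b[i];           /* Add   */
    #pragma omp parallel for
    for (size_t i = 0; i < N; i++) a[i] = b[i] + scalar * c[i];  /* Triad */
}

Copy and Scale move 16 bytes per element while Add and Triad move 24, and the reported MB/s is simply bytes moved divided by kernel time.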
Tinymembench 2018-05-28 (MB/s, More Is Better)
Standard Memcpy - 24 x DDR5-6400: 15023.6 (SE +/- 0.23, N = 3); 24 x MRDIMM 8800: 15008.9 (SE +/- 10.01, N = 3)
Standard Memset - 24 x DDR5-6400: 30044.8 (SE +/- 3.48, N = 3); 24 x MRDIMM 8800: 30001.0 (SE +/- 19.60, N = 3)
1. (CC) gcc options: -O2 -lm
MBW 2018-09-08 (MiB/s, More Is Better)
Test: Memory Copy - Array Size: 8192 MiB - 24 x DDR5-6400: 14219.82 (SE +/- 119.22, N = 8); 24 x MRDIMM 8800: 15316.41 (SE +/- 26.98, N = 3)
Test: Memory Copy, Fixed Block Size - Array Size: 8192 MiB - 24 x DDR5-6400: 8683.01 (SE +/- 35.08, N = 3); 24 x MRDIMM 8800: 8766.21 (SE +/- 34.71, N = 3)
1. (CC) gcc options: -O3 -march=native
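Unlike STREAM's OpenMP kernels, Tinymembench and MBW time bulk copies through the C library in a single thread, which is why their figures sit around 15-30 GB/s rather than the ~900 GB/s aggregate numbers above. A rough sketch of that style of measurement (buffer size and repeat count here are arbitrary choices, not either tool's actual parameters):

/* Rough single-threaded memcpy bandwidth measurement in the spirit of
 * MBW / Tinymembench.  Sizes and iteration counts are arbitrary; the
 * real tools handle alignment, multiple block sizes, and reporting. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    const size_t sz = 256u << 20;               /* 256 MiB per buffer */
    const int reps = 20;
    char *src = malloc(sz), *dst = malloc(sz);
    if (!src || !dst) return 1;
    memset(src, 1, sz);                         /* fault pages in up front */
    memset(dst, 0, sz);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < reps; r++)
        memcpy(dst, src, sz);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("memcpy: %.1f MiB/s\n", (double)sz * reps / (1 << 20) / secs);
    free(src);
    free(dst);
    return 0;
}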
GROMACS 2024 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
24 x MRDIMM 8800: 33.31 (SE +/- 0.08, N = 2)
1. (CXX) g++ options: -O3 -lm
Apache Cassandra 5.0 - Test: Writes (Op/s, More Is Better)
24 x DDR5-6400: 82480 (SE +/- 1732.33, N = 9); 24 x MRDIMM 8800: 87063 (SE +/- 1099.82, N = 12)
Java JMH - Throughput (Ops/s, More Is Better)
24 x MRDIMM 8800: 790759705068.81
NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better)
Test / Class: BT.C - 24 x DDR5-6400: 759415.21 (SE +/- 553.04, N = 3); 24 x MRDIMM 8800: 804396.26 (SE +/- 1537.19, N = 3)
Test / Class: EP.D - 24 x DDR5-6400: 32963.42 (SE +/- 90.50, N = 3); 24 x MRDIMM 8800: 32324.72 (SE +/- 1574.53, N = 13)
Test / Class: LU.C - 24 x DDR5-6400: 735889.76 (SE +/- 786.48, N = 3); 24 x MRDIMM 8800: 769386.67 (SE +/- 2025.53, N = 3)
Test / Class: SP.B - 24 x DDR5-6400: 349232.51 (SE +/- 2373.46, N = 3); 24 x MRDIMM 8800: 373559.46 (SE +/- 3986.08, N = 4)
Test / Class: IS.D - 24 x DDR5-6400: 13871.96 (SE +/- 60.96, N = 3); 24 x MRDIMM 8800: 15638.56 (SE +/- 36.61, N = 3)
Test / Class: MG.C - 24 x DDR5-6400: 365424.17 (SE +/- 966.97, N = 3); 24 x MRDIMM 8800: 449470.32 (SE +/- 1588.36, N = 3)
Test / Class: CG.C - 24 x DDR5-6400: 113647.23 (SE +/- 1173.69, N = 3); 24 x MRDIMM 8800: 118207.17 (SE +/- 1589.02, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.6
PostgreSQL 17 (TPS, More Is Better)
Scaling Factor: 100 - Clients: 800 - Mode: Read Write - 24 x DDR5-6400: 14207 (SE +/- 44.90, N = 3); 24 x MRDIMM 8800: 14452 (SE +/- 174.94, N = 3)
Scaling Factor: 100 - Clients: 800 - Mode: Read Only - 24 x DDR5-6400: 503950 (SE +/- 12217.99, N = 9); 24 x MRDIMM 8800: 509160 (SE +/- 19798.84, N = 12)
Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - 24 x DDR5-6400: 13686 (SE +/- 104.01, N = 3); 24 x MRDIMM 8800: 13757 (SE +/- 20.41, N = 3)
Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - 24 x DDR5-6400: 637458 (SE +/- 18280.50, N = 12); 24 x MRDIMM 8800: 475621 (SE +/- 13155.72, N = 12)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm
LULESH 2.0.3 (z/s, More Is Better)
24 x MRDIMM 8800: 124013.27 (SE +/- 801.06, N = 15)
1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi
Pennant 1.0.1 (Hydro Cycle Time - Seconds, Fewer Is Better)
Test: leblancbig - 24 x MRDIMM 8800: 1.342768 (SE +/- 0.011876, N = 15)
Test: sedovbig - 24 x MRDIMM 8800: 6.472671 (SE +/- 0.009571, N = 3)
1. (CXX) g++ options: -fopenmp -lmpi_cxx -lmpi
PostgreSQL 17 - Average Latency (ms, Fewer Is Better)
Scaling Factor: 100 - Clients: 800 - Mode: Read Write - 24 x DDR5-6400: 56.31 (SE +/- 0.18, N = 3); 24 x MRDIMM 8800: 55.37 (SE +/- 0.67, N = 3)
Scaling Factor: 100 - Clients: 800 - Mode: Read Only - 24 x DDR5-6400: 1.595 (SE +/- 0.037, N = 9); 24 x MRDIMM 8800: 1.596 (SE +/- 0.058, N = 12)
Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - 24 x DDR5-6400: 73.08 (SE +/- 0.55, N = 3); 24 x MRDIMM 8800: 72.69 (SE +/- 0.11, N = 3)
Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - 24 x DDR5-6400: 1.582 (SE +/- 0.042, N = 12); 24 x MRDIMM 8800: 2.121 (SE +/- 0.060, N = 12)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm
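As a sanity check, the average latencies reported above follow directly from throughput and client count (Little's law), for example for the DDR5-6400 read-write run at 800 clients:

\text{avg latency} \approx \frac{\text{clients}}{\text{TPS}} = \frac{800}{14207} \approx 0.0563\ \text{s} \approx 56.3\ \text{ms}

which matches the reported 56.31 ms; the read-only runs line up the same way (800 / 503950 is roughly 1.59 ms).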
Xcompact3d Incompact3d 2021-03-11 (Seconds, Fewer Is Better)
Input: input.i3d 193 Cells Per Direction - 24 x DDR5-6400: 2.89296635 (SE +/- 0.00797243, N = 3); 24 x MRDIMM 8800: 2.62023497 (SE +/- 0.02984810, N = 3)
Input: X3D-benchmarking input.i3d - 24 x DDR5-6400: 86.15 (SE +/- 0.38, N = 3); 24 x MRDIMM 8800: 69.81 (SE +/- 0.04, N = 3)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
OpenFOAM 10 (Seconds, Fewer Is Better)
Input: drivaerFastback, Small Mesh Size - Mesh Time - 24 x DDR5-6400: 28.91; 24 x MRDIMM 8800: 23.42
Input: drivaerFastback, Small Mesh Size - Execution Time - 24 x DDR5-6400: 19.00; 24 x MRDIMM 8800: 18.39
Input: drivaerFastback, Medium Mesh Size - Mesh Time - 24 x DDR5-6400: 157.09; 24 x MRDIMM 8800: 137.85
Input: drivaerFastback, Medium Mesh Size - Execution Time - 24 x DDR5-6400: 80.05; 24 x MRDIMM 8800: 71.94
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenRadioss 2023.09.15 - Model: Chrysler Neon 1M (Seconds, Fewer Is Better)
24 x DDR5-6400: 89.11 (SE +/- 0.10, N = 3); 24 x MRDIMM 8800: 60.82 (SE +/- 0.29, N = 3)
SPECFEM3D 4.1.1 (Seconds, Fewer Is Better)
Model: Layered Halfspace - 24 x DDR5-6400: 7.576594141 (SE +/- 0.059546245, N = 3); 24 x MRDIMM 8800: 7.113553561 (SE +/- 0.043473328, N = 3)
Model: Water-layered Halfspace - 24 x DDR5-6400: 9.566090575 (SE +/- 0.071811125, N = 3); 24 x MRDIMM 8800: 9.216693066 (SE +/- 0.111895654, N = 4)
Model: Homogeneous Halfspace - 24 x DDR5-6400: 5.735451812 (SE +/- 0.044695936, N = 3); 24 x MRDIMM 8800: 5.417761321 (SE +/- 0.013586641, N = 3)
Model: Mount St. Helens - 24 x MRDIMM 8800: 3.955489179 (SE +/- 0.025428571, N = 3)
Model: Tomographic Model - 24 x MRDIMM 8800: 4.379927179 (SE +/- 0.009660353, N = 3)
1. (F9X) gfortran options: -O2 -fopenmp -std=f2008 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
Timed Linux Kernel Compilation 6.8 (Seconds, Fewer Is Better)
Build: defconfig - 24 x MRDIMM 8800: 23.43 (SE +/- 0.15, N = 15)
Build: allmodconfig - 24 x MRDIMM 8800: 131.84 (SE +/- 0.82, N = 3)
Timed LLVM Compilation 16.0 (Seconds, Fewer Is Better)
Build System: Ninja - 24 x MRDIMM 8800: 76.60 (SE +/- 0.18, N = 3)
Build System: Unix Makefiles - 24 x MRDIMM 8800: 198.68 (SE +/- 1.12, N = 3)
Timed Node.js Compilation 21.7.2 - Time To Compile (Seconds, Fewer Is Better)
24 x MRDIMM 8800: 138.99 (SE +/- 1.02, N = 3)
Per-Test System Power Consumption Monitor - 24 x MRDIMM 8800 (Watts; fewer is better)
Benchmarks listed more than once were monitored separately for each of their runs above.

Test | Min | Avg | Max
Stream 2013-01-17 | 198 | 585 | 924
MBW 2018-09-08 | 204.6 | 291.3 | 297.2
MBW 2018-09-08 | 200.8 | 289.6 | 295.1
High Performance Conjugate Gradient 3.1 | 223 | 960 | 1108
High Performance Conjugate Gradient 3.1 | 354 | 1040 | 1107
Tinymembench 2018-05-28 | 213.8 | 286.2 | 394.6
Apache Cassandra 5.0 | 217.7 | 372.0 | 431.6
PostgreSQL 17 | 199.5 | 339.2 | 373.5
PostgreSQL 17 | 197.7 | 488.2 | 566.6
PostgreSQL 17 | 204.9 | 344.0 | 378.0
PostgreSQL 17 | 201.1 | 483.3 | 564.2
NAS Parallel Benchmarks 3.4 | 203 | 476 | 900
NAS Parallel Benchmarks 3.4 | 204 | 435 | 782
NAS Parallel Benchmarks 3.4 | 205 | 399 | 760
NAS Parallel Benchmarks 3.4 | 220.3 | 386.6 | 564.3
NAS Parallel Benchmarks 3.4 | 202.8 | 367.3 | 498.6
NAS Parallel Benchmarks 3.4 | 201.5 | 368.8 | 500.7
NAS Parallel Benchmarks 3.4 | 199.6 | 357.7 | 502.0
Xcompact3d Incompact3d 2021-03-11 | 205 | 424 | 852
Xcompact3d Incompact3d 2021-03-11 | 202 | 807 | 1051
OpenFOAM 10 | 282 | 523 | 764
OpenFOAM 10 | 221 | 652 | 934
OpenRadioss 2023.09.15 | 209 | 534 | 864
SPECFEM3D 4.1.1 | 204 | 422 | 804
SPECFEM3D 4.1.1 | 204 | 470 | 805
SPECFEM3D 4.1.1 | 201 | 421 | 756
SPECFEM3D 4.1.1 | 205 | 429 | 801
SPECFEM3D 4.1.1 | 198 | 402 | 790
LULESH 2.0.3 | 198 | 496 | 911
Timed Linux Kernel Compilation 6.8 | 198 | 366 | 848
Timed Linux Kernel Compilation 6.8 | 204 | 555 | 811
Timed LLVM Compilation 16.0 | 203 | 470 | 858
Timed LLVM Compilation 16.0 | 192 | 359 | 827
Timed Node.js Compilation 21.7.2 | 207 | 445 | 834
GROMACS 2024 | 184 | 310 | 809
Java JMH | 206 | 766 | 882
libxsmm 2-1.17-3645 | 203 | 466 | 879
libxsmm 2-1.17-3645 | 188.8 | 310.1 | 387.9
Algebraic Multi-Grid Benchmark 1.2 | 194 | 564 | 954
Pennant 1.0.1 | 200 | 394 | 681
Pennant 1.0.1 | 207 | 455 | 805
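The figures above are whole-system power as logged by the Phoronix Test Suite's sensor monitoring during the MRDIMM 8800 runs. For finer-grained breakdowns on Linux, per-package CPU energy can also be sampled from the RAPL powercap interface, though that covers only the CPU/DRAM domains rather than wall power, and reading it often requires root on recent kernels. A small sketch under those assumptions (the zone path shown is the conventional one for package 0; numbering varies by platform):

/* Sample cumulative package-0 energy from the Linux powercap (RAPL)
 * sysfs interface and derive average power over a fixed window.
 * RAPL covers CPU/DRAM domains only, not total system power like the
 * monitoring above, and the counter can wrap (ignored here). */
#include <stdio.h>
#include <unistd.h>

static long long read_energy_uj(const char *path)
{
    long long uj = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%lld", &uj) != 1)
            uj = -1;
        fclose(f);
    }
    return uj;
}

int main(void)
{
    const char *zone = "/sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj";
    long long e0 = read_energy_uj(zone);
    sleep(5);                                   /* measurement window */
    long long e1 = read_energy_uj(zone);
    if (e0 < 0 || e1 < 0) {
        fprintf(stderr, "RAPL zone not readable\n");
        return 1;
    }
    printf("avg package-0 power: %.1f W\n", (double)(e1 - e0) / 5.0 / 1e6);
    return 0;
}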
Phoronix Test Suite v10.8.5