Granite Rapids MRDIMM vs. DDR5 Benchmarks

Benchmarks for a future article. 2 x Intel Xeon 6980P tested with an Intel AvenueCity v0.01 (BHSDCRB1.IPC.0035.D44.2408292336 BIOS) motherboard and ASPEED graphics on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410037-NE-2410039NE61&rdt&grs
System Details - 24 x MRDIMM 88800 vs. 24 x DDR5-6400

Common to both configurations:
  Processor: 2 x Intel Xeon 6980P @ 3.90GHz (256 Cores / 512 Threads)
  Motherboard: Intel AvenueCity v0.01 (BHSDCRB1.IPC.0035.D44.2408292336 BIOS)
  Chipset: Intel Ice Lake IEH
  Memory: 1520GB
  Graphics: ASPEED
  Network: Intel I210 + 2 x Intel 10-Gigabit X540-AT2
  OS: Ubuntu 24.04
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1200

24 x MRDIMM 88800:
  Disk: 2 x 1920GB KIOXIA KCD8XPUG1T92 + 960GB SAMSUNG MZ1L2960HCJR-00A07
  Kernel: 6.10.0-phx (x86_64)

24 x DDR5-6400:
  Disk: 960GB SAMSUNG MZ1L2960HCJR-00A07 + 2 x 3201GB KIOXIA KCMYXVUG3T20
  Kernel: 6.8.0-22-generic (x86_64)

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-OiuXZC/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-OiuXZC/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate performance (EPP: performance); CPU Microcode: 0x10002f0
Java Details: OpenJDK Runtime Environment (build 21.0.3-ea+7-Ubuntu-1build1)
Python Details: Python 3.12.2
Security Details:
  24 x MRDIMM 88800: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: BHI_DIS_S + srbds: Not affected + tsx_async_abort: Not affected
  24 x DDR5-6400: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; RSB filling; PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Result Overview (units per test as shown in the detailed results below; n/a = not run on that configuration):

Test | 24 x MRDIMM 88800 | 24 x DDR5-6400
openradioss: Chrysler Neon 1M | 60.82 | 89.11
hpcg: 144 144 144 - 60 | 169.491 | 128.043
hpcg: 104 104 104 - 60 | 170.946 | 130.836
openfoam: drivaerFastback, Small Mesh Size - Mesh Time | 23.416256 | 28.90647
incompact3d: X3D-benchmarking input.i3d | 69.8064524 | 86.1528244
npb: MG.C | 449470.32 | 365424.17
openfoam: drivaerFastback, Medium Mesh Size - Mesh Time | 137.84871 | 157.08742
npb: IS.D | 15638.56 | 13871.96
stream: Scale | 952124.6 | 848431.9
openfoam: drivaerFastback, Medium Mesh Size - Execution Time | 71.936334 | 80.046631
incompact3d: input.i3d 193 Cells Per Direction | 2.62023497 | 2.89296635
mbw: Memory Copy - 8192 MiB | 15316.413 | 14219.822
npb: SP.B | 373559.46 | 349232.51
specfem3d: Layered Halfspace | 7.113553561 | 7.576594141
npb: BT.C | 804396.26 | 759415.21
specfem3d: Homogeneous Halfspace | 5.417761321 | 5.735451812
stream: Add | 937751.9 | 887756.1
npb: LU.C | 769386.67 | 735889.76
npb: CG.C | 118207.17 | 113647.23
specfem3d: Water-layered Halfspace | 9.216693066 | 9.566090575
openfoam: drivaerFastback, Small Mesh Size - Execution Time | 18.389208 | 18.999965
pgbench: 100 - 800 - Read Write | 14452 | 14207
pgbench: 100 - 800 - Read Write - Average Latency | 55.372 | 56.311
stream: Triad | 879963.6 | 893252.7
stream: Copy | 861560.8 | 870313.7
mbw: Memory Copy, Fixed Block Size - 8192 MiB | 8766.213 | 8683.007
pgbench: 100 - 1000 - Read Write - Average Latency | 72.692 | 73.078
pgbench: 100 - 1000 - Read Write | 13757 | 13686
tinymembench: Standard Memset | 30001.0 | 30044.8
tinymembench: Standard Memcpy | 15008.9 | 15023.6
pennant: sedovbig | 6.472671 | n/a
pennant: leblancbig | 1.342768 | n/a
amg | 8606553000 | n/a
java-jmh: Throughput | 790759705068.81 | n/a
gromacs: MPI CPU - water_GMX50_bare | 33.306 | n/a
build-nodejs: Time To Compile | 138.989 | n/a
build-llvm: Unix Makefiles | 198.677 | n/a
build-llvm: Ninja | 76.600 | n/a
build-linux-kernel: allmodconfig | 131.839 | n/a
build-linux-kernel: defconfig | 23.431 | n/a
lulesh | 124013.27 | n/a
specfem3d: Tomographic Model | 4.379927179 | n/a
specfem3d: Mount St. Helens | 3.955489179 | n/a
libxsmm: 128 | 7398.5 | n/a
libxsmm: 64 | 4501.4 | n/a
npb: EP.D | 32324.72 | 32963.42
pgbench: 100 - 1000 - Read Only - Average Latency | 2.121 | 1.582
pgbench: 100 - 1000 - Read Only | 475621 | 637458
pgbench: 100 - 800 - Read Only - Average Latency | 1.596 | 1.595
pgbench: 100 - 800 - Read Only | 509160 | 503950
cassandra: Writes | 87063 | 82480
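For a quick read of the overview, the percentage advantage of one configuration over the other follows directly from any pair of results. A minimal Python sketch using two higher-is-better figures from the table above (STREAM Scale and NPB MG.C):

```python
# Percent advantage of the MRDIMM configuration over DDR5-6400
# for higher-is-better results taken from the overview table above.
def percent_advantage(mrdimm: float, ddr5: float) -> float:
    """Return how much faster (in percent) the MRDIMM result is."""
    return (mrdimm / ddr5 - 1.0) * 100.0

results = {
    "stream: Scale (MB/s)": (952124.6, 848431.9),
    "npb: MG.C (Total Mop/s)": (449470.32, 365424.17),
}

for name, (mr, dd) in results.items():
    print(f"{name}: +{percent_advantage(mr, dd):.1f}% for MRDIMM")
```

This yields roughly a 12% advantage on STREAM Scale and 23% on MG.C, consistent with the bandwidth-bound tests favoring MRDIMM the most.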
OpenRadioss 2023.09.15 - Model: Chrysler Neon 1M (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 60.82 (SE +/- 0.29, N = 3)
  24 x DDR5-6400: 89.11 (SE +/- 0.10, N = 3)
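Throughout these results, "SE +/- x, N = y" is the standard error of the mean over N runs: the sample standard deviation divided by sqrt(N). A short Python illustration with hypothetical per-run times (the export publishes only the mean and SE, not the individual samples):

```python
# Standard error of the mean, as reported in these results ("SE +/- x, N = y").
# The per-run samples below are hypothetical; the export does not include them.
import statistics

samples = [60.5, 60.9, 61.05]  # three hypothetical run times in seconds
n = len(samples)
mean = statistics.mean(samples)
se = statistics.stdev(samples) / n ** 0.5  # sample stddev over sqrt(N)
print(f"mean = {mean:.2f}, SE +/- {se:.2f}, N = {n}")
```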
High Performance Conjugate Gradient 3.1 - X Y Z: 144 144 144 - RT: 60 (GFLOP/s, More Is Better)
  24 x MRDIMM 88800: 169.49 (SE +/- 0.12, N = 3)
  24 x DDR5-6400: 128.04 (SE +/- 0.22, N = 3)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

High Performance Conjugate Gradient 3.1 - X Y Z: 104 104 104 - RT: 60 (GFLOP/s, More Is Better)
  24 x MRDIMM 88800: 170.95 (SE +/- 0.19, N = 3)
  24 x DDR5-6400: 130.84 (SE +/- 0.48, N = 3)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 23.42
  24 x DDR5-6400: 28.91
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 69.81 (SE +/- 0.04, N = 3)
  24 x DDR5-6400: 86.15 (SE +/- 0.38, N = 3)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, More Is Better)
  24 x MRDIMM 88800: 449470.32 (SE +/- 1588.36, N = 3)
  24 x DDR5-6400: 365424.17 (SE +/- 966.97, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.6

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Mesh Time (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 137.85
  24 x DDR5-6400: 157.09
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D (Total Mop/s, More Is Better)
  24 x MRDIMM 88800: 15638.56 (SE +/- 36.61, N = 3)
  24 x DDR5-6400: 13871.96 (SE +/- 60.96, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.6
Stream 2013-01-17 - Type: Scale (MB/s, More Is Better)
  24 x MRDIMM 88800: 952124.6 (SE +/- 15910.14, N = 5)
  24 x DDR5-6400: 848431.9 (SE +/- 18666.81, N = 5)
  1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 71.94
  24 x DDR5-6400: 80.05
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 2.62023497 (SE +/- 0.02984810, N = 3)
  24 x DDR5-6400: 2.89296635 (SE +/- 0.00797243, N = 3)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

MBW 2018-09-08 - Test: Memory Copy - Array Size: 8192 MiB (MiB/s, More Is Better)
  24 x MRDIMM 88800: 15316.41 (SE +/- 26.98, N = 3)
  24 x DDR5-6400: 14219.82 (SE +/- 119.22, N = 8)
  1. (CC) gcc options: -O3 -march=native

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s, More Is Better)
  24 x MRDIMM 88800: 373559.46 (SE +/- 3986.08, N = 4)
  24 x DDR5-6400: 349232.51 (SE +/- 2373.46, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.6

SPECFEM3D 4.1.1 - Model: Layered Halfspace (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 7.113553561 (SE +/- 0.043473328, N = 3)
  24 x DDR5-6400: 7.576594141 (SE +/- 0.059546245, N = 3)
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2008 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, More Is Better)
  24 x MRDIMM 88800: 804396.26 (SE +/- 1537.19, N = 3)
  24 x DDR5-6400: 759415.21 (SE +/- 553.04, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.6

SPECFEM3D 4.1.1 - Model: Homogeneous Halfspace (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 5.417761321 (SE +/- 0.013586641, N = 3)
  24 x DDR5-6400: 5.735451812 (SE +/- 0.044695936, N = 3)
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2008 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Stream 2013-01-17 - Type: Add (MB/s, More Is Better)
  24 x MRDIMM 88800: 937751.9 (SE +/- 13677.37, N = 5)
  24 x DDR5-6400: 887756.1 (SE +/- 19594.57, N = 5)
  1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp
NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better)
  24 x MRDIMM 88800: 769386.67 (SE +/- 2025.53, N = 3)
  24 x DDR5-6400: 735889.76 (SE +/- 786.48, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.6

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, More Is Better)
  24 x MRDIMM 88800: 118207.17 (SE +/- 1589.02, N = 3)
  24 x DDR5-6400: 113647.23 (SE +/- 1173.69, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.6

SPECFEM3D 4.1.1 - Model: Water-layered Halfspace (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 9.216693066 (SE +/- 0.111895654, N = 4)
  24 x DDR5-6400: 9.566090575 (SE +/- 0.071811125, N = 3)
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2008 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 18.39
  24 x DDR5-6400: 19.00
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

PostgreSQL 17 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write (TPS, More Is Better)
  24 x MRDIMM 88800: 14452 (SE +/- 174.94, N = 3)
  24 x DDR5-6400: 14207 (SE +/- 44.90, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm

PostgreSQL 17 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  24 x MRDIMM 88800: 55.37 (SE +/- 0.67, N = 3)
  24 x DDR5-6400: 56.31 (SE +/- 0.18, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm
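The pgbench throughput and average-latency pairs are internally consistent via Little's law: with C concurrent clients, TPS is approximately C divided by the average latency. A sketch checking the 800-client read-write figures above:

```python
# Little's law sanity check: TPS ~= clients / average latency.
# Figures from the 800-client read-write result above (24 x MRDIMM 88800).
clients = 800
avg_latency_s = 55.372 / 1000.0  # reported average latency, ms -> s
estimated_tps = clients / avg_latency_s
reported_tps = 14452
print(f"estimated {estimated_tps:.0f} TPS vs. reported {reported_tps} TPS")
```

The estimate lands within a fraction of a percent of the reported 14452 TPS, which is expected since pgbench derives one figure from the other.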
Stream 2013-01-17 - Type: Triad (MB/s, More Is Better)
  24 x MRDIMM 88800: 879963.6 (SE +/- 16674.48, N = 5)
  24 x DDR5-6400: 893252.7 (SE +/- 18029.90, N = 5)
  1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp

Stream 2013-01-17 - Type: Copy (MB/s, More Is Better)
  24 x MRDIMM 88800: 861560.8 (SE +/- 7612.22, N = 25)
  24 x DDR5-6400: 870313.7 (SE +/- 5160.21, N = 25)
  1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp
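The four STREAM results (Copy, Scale, Add, Triad) each time a simple vector kernel and convert the bytes moved into MB/s. A pure-Python sketch of the kernels themselves; the actual benchmark is the OpenMP C code compiled with the flags shown above, not this illustration:

```python
# The four STREAM kernels. With 8-byte doubles, Copy and Scale move
# 16 bytes per element per iteration; Add and Triad move 24 bytes.
n = 1_000
s = 3.0
a = [1.0] * n
b = [2.0] * n
c = [0.0] * n

c = [a[i] for i in range(n)]             # Copy:  c = a       (16 B/elem)
b = [s * c[i] for i in range(n)]         # Scale: b = s*c     (16 B/elem)
c = [a[i] + b[i] for i in range(n)]      # Add:   c = a + b   (24 B/elem)
a = [b[i] + s * c[i] for i in range(n)]  # Triad: a = b + s*c (24 B/elem)

print(a[0], b[0], c[0])
```

Because Add and Triad move more bytes per iteration than Copy and Scale, their reported MB/s figures are not directly comparable to each other, only across configurations.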
MBW 2018-09-08 - Test: Memory Copy, Fixed Block Size - Array Size: 8192 MiB (MiB/s, More Is Better)
  24 x MRDIMM 88800: 8766.21 (SE +/- 34.71, N = 3)
  24 x DDR5-6400: 8683.01 (SE +/- 35.08, N = 3)
  1. (CC) gcc options: -O3 -march=native

PostgreSQL 17 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  24 x MRDIMM 88800: 72.69 (SE +/- 0.11, N = 3)
  24 x DDR5-6400: 73.08 (SE +/- 0.55, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm

PostgreSQL 17 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write (TPS, More Is Better)
  24 x MRDIMM 88800: 13757 (SE +/- 20.41, N = 3)
  24 x DDR5-6400: 13686 (SE +/- 104.01, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm

Tinymembench 2018-05-28 - Standard Memset (MB/s, More Is Better)
  24 x MRDIMM 88800: 30001.0 (SE +/- 19.60, N = 3)
  24 x DDR5-6400: 30044.8 (SE +/- 3.48, N = 3)
  1. (CC) gcc options: -O2 -lm

Tinymembench 2018-05-28 - Standard Memcpy (MB/s, More Is Better)
  24 x MRDIMM 88800: 15008.9 (SE +/- 10.01, N = 3)
  24 x DDR5-6400: 15023.6 (SE +/- 0.23, N = 3)
  1. (CC) gcc options: -O2 -lm
Pennant 1.0.1 - Test: sedovbig (Hydro Cycle Time - Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 6.472671 (SE +/- 0.009571, N = 3)
  1. (CXX) g++ options: -fopenmp -lmpi_cxx -lmpi

Pennant 1.0.1 - Test: leblancbig (Hydro Cycle Time - Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 1.342768 (SE +/- 0.011876, N = 15)
  1. (CXX) g++ options: -fopenmp -lmpi_cxx -lmpi

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better)
  24 x MRDIMM 88800: 8606553000 (SE +/- 17997194.71, N = 3)
  1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

Java JMH - Throughput (Ops/s, More Is Better)
  24 x MRDIMM 88800: 790759705068.81

GROMACS 2024 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  24 x MRDIMM 88800: 33.31 (SE +/- 0.08, N = 2)
  1. (CXX) g++ options: -O3 -lm

Timed Node.js Compilation 21.7.2 - Time To Compile (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 138.99 (SE +/- 1.02, N = 3)

Timed LLVM Compilation 16.0 - Build System: Unix Makefiles (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 198.68 (SE +/- 1.12, N = 3)

Timed LLVM Compilation 16.0 - Build System: Ninja (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 76.60 (SE +/- 0.18, N = 3)

Timed Linux Kernel Compilation 6.8 - Build: allmodconfig (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 131.84 (SE +/- 0.82, N = 3)

Timed Linux Kernel Compilation 6.8 - Build: defconfig (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 23.43 (SE +/- 0.15, N = 15)

LULESH 2.0.3 (z/s, More Is Better)
  24 x MRDIMM 88800: 124013.27 (SE +/- 801.06, N = 15)
  1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi

SPECFEM3D 4.1.1 - Model: Tomographic Model (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 4.379927179 (SE +/- 0.009660353, N = 3)
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2008 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SPECFEM3D 4.1.1 - Model: Mount St. Helens (Seconds, Fewer Is Better)
  24 x MRDIMM 88800: 3.955489179 (SE +/- 0.025428571, N = 3)
  1. (F9X) gfortran options: -O2 -fopenmp -std=f2008 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
System Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts, 24 x MRDIMM 88800): Min 183.7 / Avg 465.09 / Max 1108.2

Pennant 1.0.1 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 207 / Avg 455 / Max 805

Pennant 1.0.1 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 200 / Avg 394 / Max 681

Algebraic Multi-Grid Benchmark 1.2 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 194 / Avg 564 / Max 954

libxsmm 2-1.17-3645 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 188.8 / Avg 310.1 / Max 387.9

libxsmm 2-1.17-3645 - M N K: 128 (GFLOPS/s, More Is Better)
  24 x MRDIMM 88800: 7398.5 (SE +/- 660.65, N = 6)
  1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

libxsmm 2-1.17-3645 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 203 / Avg 466 / Max 879

libxsmm 2-1.17-3645 - M N K: 64 (GFLOPS/s, More Is Better)
  24 x MRDIMM 88800: 4501.4 (SE +/- 244.26, N = 15)
  1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2
Java JMH - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 206 / Avg 766 / Max 882

GROMACS 2024 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 184 / Avg 310 / Max 809

Timed Node.js Compilation 21.7.2 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 207 / Avg 445 / Max 834

Timed LLVM Compilation 16.0 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 192 / Avg 359 / Max 827

Timed LLVM Compilation 16.0 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 203 / Avg 470 / Max 858

Timed Linux Kernel Compilation 6.8 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 204 / Avg 555 / Max 811

Timed Linux Kernel Compilation 6.8 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 198 / Avg 366 / Max 848

LULESH 2.0.3 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 198 / Avg 496 / Max 911

SPECFEM3D 4.1.1 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 198 / Avg 402 / Max 790

SPECFEM3D 4.1.1 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 205 / Avg 429 / Max 801

SPECFEM3D 4.1.1 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 201 / Avg 421 / Max 756

SPECFEM3D 4.1.1 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 204 / Avg 470 / Max 805

SPECFEM3D 4.1.1 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 204 / Avg 422 / Max 804
OpenRadioss 2023.09.15 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 209 / Avg 534 / Max 864
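Pairing a power monitor with its benchmark's runtime gives a rough energy-to-solution figure. Average watts times elapsed seconds is only an approximation, since the monitor samples periodically and the average spans the whole test window, but as a sketch with the OpenRadioss numbers (60.82 s at 534 W average on the MRDIMM configuration):

```python
# Rough energy-to-solution: average watts x runtime seconds = joules.
# OpenRadioss Chrysler Neon 1M on 24 x MRDIMM 88800: 60.82 s at 534 W average.
runtime_s = 60.82
avg_watts = 534
energy_kj = runtime_s * avg_watts / 1000.0
print(f"~{energy_kj:.1f} kJ energy to solution")
```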
OpenFOAM 10 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 221 / Avg 652 / Max 934

OpenFOAM 10 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 282 / Avg 523 / Max 764

Xcompact3d Incompact3d 2021-03-11 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 202 / Avg 807 / Max 1051

Xcompact3d Incompact3d 2021-03-11 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 205 / Avg 424 / Max 852

NAS Parallel Benchmarks 3.4 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 199.6 / Avg 357.7 / Max 502.0

NAS Parallel Benchmarks 3.4 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 201.5 / Avg 368.8 / Max 500.7

NAS Parallel Benchmarks 3.4 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 202.8 / Avg 367.3 / Max 498.6

NAS Parallel Benchmarks 3.4 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 220.3 / Avg 386.6 / Max 564.3

NAS Parallel Benchmarks 3.4 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 205 / Avg 399 / Max 760

NAS Parallel Benchmarks 3.4 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 204 / Avg 435 / Max 782

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, More Is Better)
  24 x MRDIMM 88800: 32324.72 (SE +/- 1574.53, N = 13)
  24 x DDR5-6400: 32963.42 (SE +/- 90.50, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.6

NAS Parallel Benchmarks 3.4 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 203 / Avg 476 / Max 900

PostgreSQL 17 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 201.1 / Avg 483.3 / Max 564.2
PostgreSQL 17 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  24 x MRDIMM 88800: 2.121 (SE +/- 0.060, N = 12)
  24 x DDR5-6400: 1.582 (SE +/- 0.042, N = 12)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm

PostgreSQL 17 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only (TPS, More Is Better)
  24 x MRDIMM 88800: 475621 (SE +/- 13155.72, N = 12)
  24 x DDR5-6400: 637458 (SE +/- 18280.50, N = 12)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm

PostgreSQL 17 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 204.9 / Avg 344.0 / Max 378.0

PostgreSQL 17 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 197.7 / Avg 488.2 / Max 566.6

PostgreSQL 17 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  24 x MRDIMM 88800: 1.596 (SE +/- 0.058, N = 12)
  24 x DDR5-6400: 1.595 (SE +/- 0.037, N = 9)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm

PostgreSQL 17 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only (TPS, More Is Better)
  24 x MRDIMM 88800: 509160 (SE +/- 19798.84, N = 12)
  24 x DDR5-6400: 503950 (SE +/- 12217.99, N = 9)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpq -lpgcommon -lpgport -lm

PostgreSQL 17 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 199.5 / Avg 339.2 / Max 373.5

Apache Cassandra 5.0 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 217.7 / Avg 372.0 / Max 431.6

Apache Cassandra 5.0 - Test: Writes (Op/s, More Is Better)
  24 x MRDIMM 88800: 87063 (SE +/- 1099.82, N = 12)
  24 x DDR5-6400: 82480 (SE +/- 1732.33, N = 9)

Tinymembench 2018-05-28 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 213.8 / Avg 286.2 / Max 394.6

High Performance Conjugate Gradient 3.1 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 354 / Avg 1040 / Max 1107

High Performance Conjugate Gradient 3.1 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 223 / Avg 960 / Max 1108

MBW 2018-09-08 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 200.8 / Avg 289.6 / Max 295.1

MBW 2018-09-08 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 204.6 / Avg 291.3 / Max 297.2

Stream 2013-01-17 - System Power Consumption Monitor (Watts, 24 x MRDIMM 88800): Min 198 / Avg 585 / Max 924
Phoronix Test Suite v10.8.5