NVMe Ext4:
Processor: 4 x Intel Xeon E5-4650 v2 @ 2.90GHz (40 Cores / 80 Threads), Motherboard: Supermicro X9QR7-TF+/X9QRi-F+ v123456789 (3.0a BIOS), Chipset: Intel Xeon E7 v2/Xeon, Memory: 504GB, Disk: 2000GB Western Digital WD_BLACK SN750 2TB + 16001GB MR9361-8i, Graphics: Matrox MGA G200eW WPCM450, Network: 2 x Intel I350 + 2 x Intel X710 for 10GbE SFP+
OS: Ubuntu 16.04, Kernel: 4.7.0.intel.r5.0 (x86_64), Compiler: GCC 5.4.0 20160609, File-System: ext4, Screen Resolution: 640x480
Kernel Notes: Transparent Huge Pages: always
Compiler Notes: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-tune=generic -v
Disk Notes: none / data=ordered,relatime,rw / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x42e
Python Notes: Python 2.7.12 + Python 3.5.2
NVMe XFS:
OS: Ubuntu 16.04, Kernel: 4.7.0.intel.r5.0 (x86_64), Compiler: GCC 5.4.0 20160609, File-System: xfs, Screen Resolution: 640x480
Kernel Notes: Transparent Huge Pages: always
Compiler Notes: --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-tune=generic -v
Disk Notes: none / attr2,inode64,noquota,relatime,rw / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x42e
Python Notes: Python 2.7.12 + Python 3.5.2
NVMe Ext4 vs. NVMe XFS Comparison (Phoronix Test Suite overview chart): per-test percentage gaps between the two file systems. The largest differences favor NVMe Ext4: Compile Bench Initial Create (36.7%), PostMark Disk Transaction Performance (29.3%), Dbench 1 Clients (28.5%), FS-Mark 5000 Files/1MB Size/4 Threads (24.6%), SQLite 1 thread (16.9%), and FS-Mark 1000 Files/1MB Size (14.9%). The largest gaps favoring NVMe XFS are Dbench 12 Clients (11%) and SQLite 128 threads/copies (10.7%). The remaining tests differ by roughly 10% or less, with the 2MB FIO results essentially tied.
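For reference, the percentages in the overview chart above are plain ratios of the two file systems' results for each test. Below is a minimal Python sketch of that arithmetic, using values taken from the result table that follows; the helper name percent_gap is illustrative and not part of the Phoronix Test Suite.

    # Minimal sketch: reproduce the per-test percentage gaps shown in the
    # comparison chart from the raw result values in this report.
    def percent_gap(a: float, b: float) -> float:
        """Size of the gap between two results, in percent (larger / smaller - 1)."""
        return (max(a, b) / min(a, b) - 1.0) * 100.0

    # PostMark, TPS, more is better: Ext4 3659 vs. XFS 2830 -> Ext4 leads by ~29.3%
    print(round(percent_gap(3659, 2830), 1))       # 29.3
    # SQLite 1 thread, seconds, fewer is better: Ext4 12.978 vs. XFS 15.174 -> Ext4 leads by ~16.9%
    print(round(percent_gap(12.978, 15.174), 1))   # 16.9
    # Compile Bench Initial Create, MB/s: Ext4 224.37 vs. XFS 164.09 -> Ext4 leads by ~36.7%
    print(round(percent_gap(224.37, 164.09), 1))   # 36.7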
nvme_ext4_bench_disk_1 - Result Summary ("-" = not run on that file system; Seconds entries are fewer-is-better, all other units are more-is-better)

Test | NVMe Ext4 | NVMe XFS
ior: 1024MB - Default Test Directory (MB/s) | 1229.25 | -
ior: 512MB - Default Test Directory (MB/s) | 1242.74 | -
ior: 256MB - Default Test Directory (MB/s) | 1233.59 | -
dbench: 12 Clients (MB/s) | 1435.95 | 1593.48
dbench: 1 Clients (MB/s) | 281.974 | 219.416
fs-mark: 4000 Files, 32 Sub Dirs, 1MB Size (Files/s) | 425.1 | 401.0
fs-mark: 5000 Files, 1MB Size, 4 Threads (Files/s) | 885.8 | 711.0
sqlite: 128 (Seconds) | 416.790 | 376.587
fs-mark: 1000 Files, 1MB Size, No Sync/FSync (Files/s) | 998.7 | 992.6
fs-mark: 1000 Files, 1MB Size (Files/s) | 447.5 | 389.5
ior: 64MB - Default Test Directory (MB/s) | 1221.76 | -
sqlite: 64 (Seconds) | 229.871 | 234.965
fio: Rand Write - Linux AIO - No - Yes - 4KB - Default Test Directory (IOPS) | 105367 | 94773
fio: Rand Write - Linux AIO - No - Yes - 4KB - Default Test Directory (MB/s) | 412 | 370
fio: Seq Write - Linux AIO - No - Yes - 4KB - Default Test Directory (IOPS) | 105320 | 99980
fio: Seq Write - Linux AIO - No - Yes - 4KB - Default Test Directory (MB/s) | 412 | 391
fio: Seq Read - Linux AIO - No - Yes - 4KB - Default Test Directory (IOPS) | 115167 | 111833
fio: Seq Read - Linux AIO - No - Yes - 4KB - Default Test Directory (MB/s) | 450 | 436
ior: 32MB - Default Test Directory (MB/s) | 1155.62 | -
sqlite: 32 (Seconds) | 120.635 | 124.990
ior: 4MB - Default Test Directory (MB/s) | 863.36 | -
compilebench: Compile (MB/s) | 1052.26 | 1068.37
fio: Rand Read - Linux AIO - No - Yes - 4KB - Default Test Directory (IOPS) | 99833 | 103275
fio: Rand Read - Linux AIO - No - Yes - 4KB - Default Test Directory (MB/s) | 391 | 403
postmark: Disk Transaction Performance (TPS) | 3659 | 2830
ior: 16MB - Default Test Directory (MB/s) | 1175.75 | -
sqlite: 1 (Seconds) | 12.978 | 15.174
sqlite: 8 (Seconds) | 40.158 | 43.107
ior: 8MB - Default Test Directory (MB/s) | 1103.77 | -
fio: Rand Read - Linux AIO - No - Yes - 2MB - Default Test Directory (IOPS) | 1608 | 1617
fio: Rand Read - Linux AIO - No - Yes - 2MB - Default Test Directory (MB/s) | 3224 | 3241
fio: Seq Read - Linux AIO - No - Yes - 2MB - Default Test Directory (IOPS) | 1619 | 1608
fio: Seq Read - Linux AIO - No - Yes - 2MB - Default Test Directory (MB/s) | 3246 | 3224
fio: Rand Write - Linux AIO - No - Yes - 2MB - Default Test Directory (IOPS) | 1460 | 1463
fio: Rand Write - Linux AIO - No - Yes - 2MB - Default Test Directory (MB/s) | 2928 | 2934
fio: Seq Write - Linux AIO - No - Yes - 2MB - Default Test Directory (IOPS) | 1464 | 1464
fio: Seq Write - Linux AIO - No - Yes - 2MB - Default Test Directory (MB/s) | 2935 | 2935
ior: 2MB - Default Test Directory (MB/s) | 608.40 | -
compilebench: Read Compiled Tree (MB/s) | 1384.03 | 1274.37
compilebench: Initial Create (MB/s) | 224.37 | 164.09
IOR
IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.
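As a companion to this description, here is a hedged Python sketch of how an IOR run like the ones below could be launched under MPI; the mount point, rank count, and transfer size are assumptions for illustration and not the exact invocation the test profile generates.

    # Hedged sketch: launch IOR under MPI against a directory on the
    # file system under test, with a block size like those in this report.
    import subprocess

    def run_ior(block_size: str = "1024m", transfer_size: str = "1m",
                test_file: str = "/mnt/nvme/ior_testfile", ranks: int = 8) -> str:
        cmd = [
            "mpirun", "-np", str(ranks),   # number of MPI ranks doing I/O
            "ior",
            "-b", block_size,              # per-rank block size (e.g. 1024m, 512m, ...)
            "-t", transfer_size,           # transfer (I/O request) size
            "-o", test_file,               # file placed on the file system under test
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        print(run_ior())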
IOR 3.3.0, Block Size: 1024MB, Disk Target: Default Test Directory (MB/s, more is better) - NVMe Ext4: 1229.25 (SE +/- 2.83, N = 3; MIN: 868.56 / MAX: 1349.32). gcc options: -O2 -lm -pthread -lmpi
IOR 3.3.0, Block Size: 512MB, Disk Target: Default Test Directory (MB/s, more is better) - NVMe Ext4: 1242.74 (SE +/- 3.32, N = 3; MIN: 1019.35 / MAX: 1348.65). gcc options: -O2 -lm -pthread -lmpi
IOR 3.3.0, Block Size: 256MB, Disk Target: Default Test Directory (MB/s, more is better) - NVMe Ext4: 1233.59 (SE +/- 3.19, N = 3; MIN: 1145.66 / MAX: 1344.24). gcc options: -O2 -lm -pthread -lmpi
Dbench 4.0, 1 Clients (MB/s, more is better) - NVMe Ext4: 281.97 (SE +/- 0.73, N = 3); NVMe XFS: 219.42 (SE +/- 2.60, N = 3). gcc options: -lpopt -O2
FS-Mark 3.3, Test: 5000 Files, 1MB Size, 4 Threads (Files/s, more is better) - NVMe Ext4: 885.8 (SE +/- 10.26, N = 12); NVMe XFS: 711.0 (SE +/- 9.51, N = 12)
SQLite
This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.
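To make concrete what this profile times, here is a minimal Python sketch of timed insertions into an indexed SQLite table; the row count, schema, and per-insert commit are illustrative assumptions rather than the exact pts/sqlite workload.

    # Minimal sketch: time a fixed number of inserts into an indexed table.
    import sqlite3, time

    def timed_inserts(db_path: str = "bench.db", rows: int = 10_000) -> float:
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER, payload TEXT)")
        con.execute("CREATE INDEX IF NOT EXISTS t_idx ON t (payload)")  # indexed column
        start = time.perf_counter()
        for i in range(rows):
            con.execute("INSERT INTO t VALUES (?, ?)", (i, f"row-{i}"))
            con.commit()                    # commit per insert so every row reaches the disk
        return time.perf_counter() - start  # seconds, fewer is better

    if __name__ == "__main__":
        print(f"{timed_inserts():.3f} s")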
SQLite 3.30.1, Threads / Copies: 128 (Seconds, fewer is better) - NVMe Ext4: 416.79 (SE +/- 2.40, N = 3); NVMe XFS: 376.59 (SE +/- 3.16, N = 3). gcc options: -O2 -lreadline -ltermcap -lz -lm -ldl -lpthread
FS-Mark 3.3, Test: 1000 Files, 1MB Size (Files/s, more is better) - NVMe Ext4: 447.5 (SE +/- 2.01, N = 3); NVMe XFS: 389.5 (SE +/- 5.74, N = 15)
IOR 3.3.0, Block Size: 64MB, Disk Target: Default Test Directory (MB/s, more is better) - NVMe Ext4: 1221.76 (SE +/- 0.67, N = 3; MIN: 1122.88 / MAX: 1380.91). gcc options: -O2 -lm -pthread -lmpi
SQLite 3.30.1, Threads / Copies: 64 (Seconds, fewer is better) - NVMe Ext4: 229.87 (SE +/- 2.44, N = 3); NVMe XFS: 234.97 (SE +/- 1.80, N = 3). gcc options: -O2 -lreadline -ltermcap -lz -lm -ldl -lpthread
Flexible IO Tester
FIO, the Flexible I/O Tester, is an advanced Linux disk benchmark supporting multiple I/O engines and a wealth of options. FIO was written by Jens Axboe for testing of the Linux I/O subsystem and schedulers. Learn more via the OpenBenchmarking.org test page.
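The FIO configurations below all use the Linux AIO engine with buffering off and direct I/O on. A hedged Python sketch of an equivalent standalone fio invocation for the 4KB random-write case follows; the job name, file size, runtime, and target directory are assumptions, and the job file generated by the test profile may differ.

    # Hedged sketch: run fio once with settings matching the report's
    # "Random Write - Linux AIO - Buffered: No - Direct: Yes - 4KB" case.
    import json, subprocess

    def fio_randwrite_4k(directory: str = "/mnt/nvme", runtime_s: int = 60) -> dict:
        cmd = [
            "fio", "--name=randwrite-4k",
            "--ioengine=libaio",     # the "Linux AIO" engine in the report
            "--rw=randwrite",
            "--bs=4k",
            "--direct=1",            # Direct: Yes
            "--buffered=0",          # Buffered: No
            "--size=1g",
            f"--directory={directory}",
            f"--runtime={runtime_s}", "--time_based",
            "--output-format=json",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        return json.loads(out)["jobs"][0]["write"]   # contains iops and bw fields

    if __name__ == "__main__":
        res = fio_randwrite_4k()
        print(res["iops"], res["bw"])   # IOPS and bandwidth (KiB/s)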
Flexible IO Tester 3.25, Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory:
  IOPS (more is better) - NVMe Ext4: 105367 (SE +/- 2082.69, N = 15); NVMe XFS: 94773 (SE +/- 2609.95, N = 15)
  MB/s (more is better) - NVMe Ext4: 412 (SE +/- 8.22, N = 15); NVMe XFS: 370 (SE +/- 10.18, N = 15)
Flexible IO Tester 3.25, Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory:
  IOPS (more is better) - NVMe Ext4: 105320 (SE +/- 1632.42, N = 15); NVMe XFS: 99980 (SE +/- 1767.38, N = 15)
  MB/s (more is better) - NVMe Ext4: 412 (SE +/- 6.34, N = 15); NVMe XFS: 391 (SE +/- 6.94, N = 15)
Flexible IO Tester 3.25, Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory:
  IOPS (more is better) - NVMe Ext4: 115167 (SE +/- 2235.50, N = 12); NVMe XFS: 111833 (SE +/- 2585.04, N = 15)
  MB/s (more is better) - NVMe XFS: 436 (SE +/- 9.96, N = 15); NVMe Ext4: 450
gcc options (all Flexible IO Tester results): -rdynamic -lnuma -lrt -lz -lpthread -lm -ldl -laio -lcurl -lssl -lcrypto -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
IOR 3.3.0, Block Size: 32MB, Disk Target: Default Test Directory (MB/s, more is better) - NVMe Ext4: 1155.62 (SE +/- 0.85, N = 3; MIN: 1017.48 / MAX: 1354.31). gcc options: -O2 -lm -pthread -lmpi
SQLite 3.30.1, Threads / Copies: 32 (Seconds, fewer is better) - NVMe Ext4: 120.64 (SE +/- 0.49, N = 3); NVMe XFS: 124.99 (SE +/- 0.41, N = 3). gcc options: -O2 -lreadline -ltermcap -lz -lm -ldl -lpthread
IOR 3.3.0, Block Size: 4MB, Disk Target: Default Test Directory (MB/s, more is better) - NVMe Ext4: 863.36 (SE +/- 6.45, N = 15; MIN: 382.32 / MAX: 1244.05). gcc options: -O2 -lm -pthread -lmpi
Compile Bench
Compilebench tries to age a filesystem by simulating some of the disk IO common in creating, compiling, patching, stating, and reading kernel trees. It indirectly measures how well filesystems can maintain directory locality as the disk fills up and directories age. This test is set up to use the makej mode with 10 initial directories. Learn more via the OpenBenchmarking.org test page.
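A hedged Python sketch of invoking compilebench with the makej mode and 10 initial directories described above is shown below; the target directory is an assumption, and option spellings should be checked against the installed compilebench version.

    # Hedged sketch: run compilebench in makej mode with 10 initial trees.
    import subprocess

    def run_compilebench(target_dir: str = "/mnt/nvme/compilebench", initial_dirs: int = 10) -> str:
        cmd = [
            "compilebench",
            "-D", target_dir,            # directory on the file system under test
            "-i", str(initial_dirs),     # number of initial kernel-like trees
            "--makej",                   # makej mode, as described by this test profile
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        print(run_compilebench())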
Compile Bench 0.6, Test: Compile (MB/s, more is better) - NVMe Ext4: 1052.26 (SE +/- 7.92, N = 3); NVMe XFS: 1068.37 (SE +/- 8.05, N = 10)
Flexible IO Tester 3.25, Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory:
  IOPS (more is better) - NVMe Ext4: 99833 (SE +/- 1166.67, N = 3); NVMe XFS: 103275 (SE +/- 2345.95, N = 12)
  MB/s (more is better) - NVMe Ext4: 391 (SE +/- 3.06, N = 3); NVMe XFS: 403 (SE +/- 9.06, N = 12)
gcc options: -rdynamic -lnuma -lrt -lz -lpthread -lm -ldl -laio -lcurl -lssl -lcrypto -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
PostMark
This is a test of NetApp's PostMark benchmark designed to simulate small-file testing similar to the tasks endured by web and mail servers. This test profile will set PostMark to perform 25,000 transactions with 500 files simultaneously with the file sizes ranging between 5 and 512 kilobytes. Learn more via the OpenBenchmarking.org test page.
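To illustrate the parameters listed above, here is a hedged Python sketch that writes a PostMark configuration (25,000 transactions, 500 simultaneous files, 5 KB to 512 KB file sizes) and runs the postmark binary; the target directory and exact byte bounds are assumptions and may differ from the pts/postmark defaults.

    # Hedged sketch: drive PostMark from a generated configuration file.
    import subprocess, tempfile

    def run_postmark(target_dir: str = "/mnt/nvme/postmark") -> str:
        config = "\n".join([
            f"set location {target_dir}",
            "set number 500",            # simultaneous files
            "set transactions 25000",
            "set size 5120 524288",      # 5 KB to 512 KB, expressed in bytes
            "run",
            "quit",
        ]) + "\n"
        with tempfile.NamedTemporaryFile("w", suffix=".pmrc", delete=False) as f:
            f.write(config)
            config_path = f.name
        return subprocess.run(["postmark", config_path],
                              capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        print(run_postmark())   # output includes transactions per second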
PostMark 1.51, Disk Transaction Performance (TPS, more is better) - NVMe Ext4: 3659 (SE +/- 36.00, N = 3); NVMe XFS: 2830 (SE +/- 28.16, N = 3). gcc options: -O3
IOR 3.3.0, Block Size: 16MB, Disk Target: Default Test Directory (MB/s, more is better) - NVMe Ext4: 1175.75 (SE +/- 3.19, N = 3; MIN: 1081.99 / MAX: 1327.67). gcc options: -O2 -lm -pthread -lmpi
SQLite 3.30.1, Threads / Copies: 1 (Seconds, fewer is better) - NVMe Ext4: 12.98 (SE +/- 0.06, N = 3); NVMe XFS: 15.17 (SE +/- 0.11, N = 15). gcc options: -O2 -lreadline -ltermcap -lz -lm -ldl -lpthread
SQLite 3.30.1, Threads / Copies: 8 (Seconds, fewer is better) - NVMe Ext4: 40.16 (SE +/- 0.01, N = 3); NVMe XFS: 43.11 (SE +/- 0.11, N = 3). gcc options: -O2 -lreadline -ltermcap -lz -lm -ldl -lpthread
IOR 3.3.0, Block Size: 8MB, Disk Target: Default Test Directory (MB/s, more is better) - NVMe Ext4: 1103.77 (SE +/- 7.12, N = 3; MIN: 776.06 / MAX: 1318.22). gcc options: -O2 -lm -pthread -lmpi
Flexible IO Tester 3.25, Type: Random Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory:
  IOPS (more is better) - NVMe Ext4: 1608 (SE +/- 1.45, N = 3); NVMe XFS: 1617 (SE +/- 5.57, N = 3)
  MB/s (more is better) - NVMe Ext4: 3224 (SE +/- 2.33, N = 3); NVMe XFS: 3241 (SE +/- 11.57, N = 3)
gcc options: -rdynamic -lnuma -lrt -lz -lpthread -lm -ldl -laio -lcurl -lssl -lcrypto -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
Flexible IO Tester 3.25, Type: Sequential Read - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory:
  IOPS (more is better) - NVMe XFS: 1608 (SE +/- 1.45, N = 3); NVMe Ext4: 1619
  MB/s (more is better) - NVMe XFS: 3224 (SE +/- 2.91, N = 3); NVMe Ext4: 3246
Flexible IO Tester 3.25, Type: Random Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory:
  IOPS (more is better) - NVMe Ext4: 1460 (SE +/- 1.20, N = 3); NVMe XFS: 1463
  MB/s (more is better) - NVMe Ext4: 2928 (SE +/- 2.19, N = 3); NVMe XFS: 2934
Flexible IO Tester 3.25, Type: Sequential Write - IO Engine: Linux AIO - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory:
  IOPS (more is better) - NVMe Ext4: 1464 (SE +/- 0.67, N = 3); NVMe XFS: 1464
  MB/s (more is better) - NVMe Ext4: 2935 (SE +/- 1.20, N = 3); NVMe XFS: 2935
gcc options (all Flexible IO Tester results): -rdynamic -lnuma -lrt -lz -lpthread -lm -ldl -laio -lcurl -lssl -lcrypto -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native
IOR 3.3.0, Block Size: 2MB, Disk Target: Default Test Directory (MB/s, more is better) - NVMe Ext4: 608.40 (SE +/- 3.51, N = 3; MIN: 390.99 / MAX: 1254.69). gcc options: -O2 -lm -pthread -lmpi
Compile Bench 0.6, Test: Read Compiled Tree (MB/s, more is better) - NVMe Ext4: 1384.03 (SE +/- 7.70, N = 3); NVMe XFS: 1274.37 (SE +/- 80.25, N = 3)
NVMe Ext4: Testing initiated at 22 November 2021 11:08 by user adm_abanman.
NVMe XFS: Testing initiated at 23 November 2021 10:30 by user adm_abanman.