Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
    phoronix-test-suite benchmark 2111298-TJ-2111232TJ87

nvme_ext4_bench_disk_1 - Mir WD Black NVMe XFS
HTML result view exported from: https://openbenchmarking.org/result/2111298-TJ-2111232TJ87
System under test (common to the NVMe Ext4 and NVMe XFS configurations):

Processor: 4 x Intel Xeon E5-4650 v2 @ 2.90GHz (40 Cores / 80 Threads)
Motherboard: Supermicro X9QR7-TF+/X9QRi-F+ v123456789 (3.0a BIOS)
Chipset: Intel Xeon E7 v2/Xeon
Memory: 504GB
Disk: 2000GB Western Digital WD_BLACK SN750 2TB + 16001GB MR9361-8i
Graphics: Matrox MGA G200eW WPCM450
Network: 2 x Intel I350 + 2 x Intel X710 for 10GbE SFP+
OS: Ubuntu 16.04
Kernel: 4.7.0.intel.r5.0 (x86_64)
Compiler: GCC 5.4.0 20160609
File-System: ext4 (NVMe Ext4 configuration) / xfs (NVMe XFS configuration)
Screen Resolution: 640x480

Kernel Details - Transparent Huge Pages: always
Compiler Details - --build=x86_64-linux-gnu --disable-browser-plugin --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-arch-directory=amd64 --with-default-libstdcxx-abi=new --with-multilib-list=m32,m64,mx32 --with-tune=generic -v
Disk Details - NVMe Ext4: none / data=ordered,relatime,rw / Block Size: 4096 - NVMe XFS: none / attr2,inode64,noquota,relatime,rw / Block Size: 4096
Processor Details - Scaling Governor: intel_pstate powersave - CPU Microcode: 0x42e
Python Details - Python 2.7.12 + Python 3.5.2
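The mount options recorded in the Disk Details can be written out as /etc/fstab entries. This is an illustrative sketch only: the device paths and mount points below are assumptions, not taken from the result file; only the option strings come from the Disk Details above.

```
# Hypothetical /etc/fstab entries matching the recorded mount options.
# /dev/nvme0n1p1, /dev/nvme0n1p2, /mnt/ext4, /mnt/xfs are placeholders.
/dev/nvme0n1p1  /mnt/ext4  ext4  rw,relatime,data=ordered           0 2
/dev/nvme0n1p2  /mnt/xfs   xfs   rw,relatime,attr2,inode64,noquota  0 2
```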
nvme_ext4_bench_disk_1 - result summary. All fio runs use IO Engine: Linux AIO, Buffered: No, Direct: Yes, Disk Target: Default Test Directory; "-" means no result was recorded for that configuration (the IOR tests were run on NVMe Ext4 only).

Test                                                      NVMe Ext4    NVMe XFS
compilebench: Compile (MB/s)                                1052.26     1068.37
compilebench: Initial Create (MB/s)                          224.37      164.09
compilebench: Read Compiled Tree (MB/s)                     1384.03     1274.37
dbench: 12 Clients (MB/s)                                   1435.95     1593.48
dbench: 1 Clients (MB/s)                                    281.974     219.416
fio: Rand Read - 2MB (MB/s)                                    3224        3241
fio: Rand Read - 2MB (IOPS)                                    1608        1617
fio: Rand Read - 4KB (MB/s)                                     391         403
fio: Rand Read - 4KB (IOPS)                                   99833      103275
fio: Rand Write - 2MB (MB/s)                                   2928        2934
fio: Rand Write - 2MB (IOPS)                                   1460        1463
fio: Rand Write - 4KB (MB/s)                                    412         370
fio: Rand Write - 4KB (IOPS)                                 105367       94773
fio: Seq Read - 2MB (MB/s)                                     3246        3224
fio: Seq Read - 2MB (IOPS)                                     1619        1608
fio: Seq Read - 4KB (MB/s)                                      450         436
fio: Seq Read - 4KB (IOPS)                                   115167      111833
fio: Seq Write - 2MB (MB/s)                                    2935        2935
fio: Seq Write - 2MB (IOPS)                                    1464        1464
fio: Seq Write - 4KB (MB/s)                                     412         391
fio: Seq Write - 4KB (IOPS)                                  105320       99980
fs-mark: 1000 Files, 1MB Size (Files/s)                       447.5       389.5
fs-mark: 5000 Files, 1MB Size, 4 Threads (Files/s)            885.8       711.0
fs-mark: 4000 Files, 32 Sub Dirs, 1MB Size (Files/s)          425.1       401.0
fs-mark: 1000 Files, 1MB Size, No Sync/FSync (Files/s)        998.7       992.6
ior: 2MB (MB/s)                                              608.40           -
ior: 4MB (MB/s)                                              863.36           -
ior: 8MB (MB/s)                                             1103.77           -
ior: 16MB (MB/s)                                            1175.75           -
ior: 32MB (MB/s)                                            1155.62           -
ior: 64MB (MB/s)                                            1221.76           -
ior: 256MB (MB/s)                                           1233.59           -
ior: 512MB (MB/s)                                           1242.74           -
ior: 1024MB (MB/s)                                          1229.25           -
postmark: Disk Transaction Performance (TPS)                   3659        2830
sqlite: 1 Thread (Seconds)                                   12.978      15.174
sqlite: 8 Threads (Seconds)                                  40.158      43.107
sqlite: 32 Threads (Seconds)                                120.635     124.990
sqlite: 64 Threads (Seconds)                                229.871     234.965
sqlite: 128 Threads (Seconds)                               416.790     376.587
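As a quick way to read the summary, the relative gap between the two filesystems can be computed directly from the values above. A minimal sketch in Python, using the PostMark and SQLite 128-thread figures from this table (the helper name `pct_diff` is ours, not part of the result file):

```python
# Percent difference of NVMe Ext4 relative to NVMe XFS for selected results.
# The input values are copied verbatim from the summary table above.
def pct_diff(ext4, xfs):
    """Return the ext4 value relative to xfs, as a percent difference."""
    return round((ext4 - xfs) / xfs * 100, 1)

# PostMark (TPS, more is better): ext4 leads.
postmark = pct_diff(3659, 2830)          # -> 29.3 (% higher TPS on ext4)

# SQLite, 128 threads (seconds, fewer is better): xfs leads.
sqlite_128 = pct_diff(416.790, 376.587)  # -> 10.7 (% more time on ext4)

print(postmark, sqlite_128)
```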
Compile Bench 0.6 (MB/s, more is better)

Test: Compile
  NVMe Ext4: 1052.26  (SE +/- 7.92, N = 3)
  NVMe XFS:  1068.37  (SE +/- 8.05, N = 10)

Test: Initial Create
  NVMe Ext4: 224.37  (SE +/- 3.49, N = 3)
  NVMe XFS:  164.09  (SE +/- 0.84, N = 3)

Test: Read Compiled Tree
  NVMe Ext4: 1384.03  (SE +/- 7.70, N = 3)
  NVMe XFS:  1274.37  (SE +/- 80.25, N = 3)
Dbench 4.0 (MB/s, more is better) - 1. (CC) gcc options: -lpopt -O2

12 Clients
  NVMe Ext4: 1435.95  (SE +/- 19.90, N = 3)
  NVMe XFS:  1593.48  (SE +/- 7.40, N = 3)

1 Clients
  NVMe Ext4: 281.97  (SE +/- 0.73, N = 3)
  NVMe XFS:  219.42  (SE +/- 2.60, N = 3)
Flexible IO Tester 3.25 (more is better). All runs: IO Engine: Linux AIO - Buffered: No - Direct: Yes - Disk Target: Default Test Directory. Where only one standard error was reported for a chart, it is shown on the heading line. 1. (CC) gcc options: -rdynamic -lnuma -lrt -lz -lpthread -lm -ldl -laio -lcurl -lssl -lcrypto -std=gnu99 -ffast-math -include -O3 -fcommon -U_FORTIFY_SOURCE -march=native

Random Read - 2MB (MB/s)
  NVMe Ext4: 3224  (SE +/- 2.33, N = 3)
  NVMe XFS:  3241  (SE +/- 11.57, N = 3)

Random Read - 2MB (IOPS)
  NVMe Ext4: 1608  (SE +/- 1.45, N = 3)
  NVMe XFS:  1617  (SE +/- 5.57, N = 3)

Random Read - 4KB (MB/s)
  NVMe Ext4: 391  (SE +/- 3.06, N = 3)
  NVMe XFS:  403  (SE +/- 9.06, N = 12)

Random Read - 4KB (IOPS)
  NVMe Ext4: 99833   (SE +/- 1166.67, N = 3)
  NVMe XFS:  103275  (SE +/- 2345.95, N = 12)

Random Write - 2MB (MB/s) - SE +/- 2.19, N = 3
  NVMe Ext4: 2928
  NVMe XFS:  2934

Random Write - 2MB (IOPS) - SE +/- 1.20, N = 3
  NVMe Ext4: 1460
  NVMe XFS:  1463

Random Write - 4KB (MB/s)
  NVMe Ext4: 412  (SE +/- 8.22, N = 15)
  NVMe XFS:  370  (SE +/- 10.18, N = 15)

Random Write - 4KB (IOPS)
  NVMe Ext4: 105367  (SE +/- 2082.69, N = 15)
  NVMe XFS:  94773   (SE +/- 2609.95, N = 15)

Sequential Read - 2MB (MB/s) - SE +/- 2.91, N = 3
  NVMe Ext4: 3246
  NVMe XFS:  3224

Sequential Read - 2MB (IOPS) - SE +/- 1.45, N = 3
  NVMe Ext4: 1619
  NVMe XFS:  1608

Sequential Read - 4KB (MB/s) - SE +/- 9.96, N = 15
  NVMe Ext4: 450
  NVMe XFS:  436

Sequential Read - 4KB (IOPS)
  NVMe Ext4: 115167  (SE +/- 2235.50, N = 12)
  NVMe XFS:  111833  (SE +/- 2585.04, N = 15)

Sequential Write - 2MB (MB/s) - SE +/- 1.20, N = 3
  NVMe Ext4: 2935
  NVMe XFS:  2935

Sequential Write - 2MB (IOPS) - SE +/- 0.67, N = 3
  NVMe Ext4: 1464
  NVMe XFS:  1464

Sequential Write - 4KB (MB/s)
  NVMe Ext4: 412  (SE +/- 6.34, N = 15)
  NVMe XFS:  391  (SE +/- 6.94, N = 15)

Sequential Write - 4KB (IOPS)
  NVMe Ext4: 105320  (SE +/- 1632.42, N = 15)
  NVMe XFS:  99980   (SE +/- 1767.38, N = 15)
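The fio runs above can be approximated with a job file. The sketch below covers the 4KB random-write case using only the parameters the result file records (Linux AIO engine, direct unbuffered I/O, 4KB blocks, default test directory); `size`, `runtime`, and the job name are assumptions, since the result file does not list them.

```
; Hypothetical fio job approximating "Rand Write - Linux AIO - No - Yes - 4KB".
; size and runtime are guesses; they are not recorded in the result file.
[rand-write-4k]
rw=randwrite        ; random writes
ioengine=libaio     ; Linux AIO, as in the results above
direct=1            ; Direct: Yes
buffered=0          ; Buffered: No
bs=4k               ; Block Size: 4KB
directory=.         ; Default Test Directory
size=1g
runtime=60
time_based
```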
FS-Mark 3.3 (Files/s, more is better)

Test: 1000 Files, 1MB Size
  NVMe Ext4: 447.5  (SE +/- 2.01, N = 3)
  NVMe XFS:  389.5  (SE +/- 5.74, N = 15)

Test: 5000 Files, 1MB Size, 4 Threads
  NVMe Ext4: 885.8  (SE +/- 10.26, N = 12)
  NVMe XFS:  711.0  (SE +/- 9.51, N = 12)

Test: 4000 Files, 32 Sub Dirs, 1MB Size
  NVMe Ext4: 425.1  (SE +/- 5.49, N = 12)
  NVMe XFS:  401.0  (SE +/- 9.40, N = 12)

Test: 1000 Files, 1MB Size, No Sync/FSync
  NVMe Ext4: 998.7  (SE +/- 7.38, N = 3)
  NVMe XFS:  992.6  (SE +/- 28.95, N = 12)
IOR 3.3.0 (MB/s, more is better; NVMe Ext4 only; Disk Target: Default Test Directory) - 1. (CC) gcc options: -O2 -lm -pthread -lmpi

Block Size 2MB:      608.40  (SE +/- 3.51, N = 3;  MIN 390.99  / MAX 1254.69)
Block Size 4MB:      863.36  (SE +/- 6.45, N = 15; MIN 382.32  / MAX 1244.05)
Block Size 8MB:     1103.77  (SE +/- 7.12, N = 3;  MIN 776.06  / MAX 1318.22)
Block Size 16MB:    1175.75  (SE +/- 3.19, N = 3;  MIN 1081.99 / MAX 1327.67)
Block Size 32MB:    1155.62  (SE +/- 0.85, N = 3;  MIN 1017.48 / MAX 1354.31)
Block Size 64MB:    1221.76  (SE +/- 0.67, N = 3;  MIN 1122.88 / MAX 1380.91)
Block Size 256MB:   1233.59  (SE +/- 3.19, N = 3;  MIN 1145.66 / MAX 1344.24)
Block Size 512MB:   1242.74  (SE +/- 3.32, N = 3;  MIN 1019.35 / MAX 1348.65)
Block Size 1024MB:  1229.25  (SE +/- 2.83, N = 3;  MIN 868.56  / MAX 1349.32)
PostMark 1.51 - Disk Transaction Performance (TPS, more is better) - 1. (CC) gcc options: -O3
  NVMe Ext4: 3659  (SE +/- 36.00, N = 3)
  NVMe XFS:  2830  (SE +/- 28.16, N = 3)
SQLite 3.30.1 (Seconds, fewer is better) - 1. (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm -ldl -lpthread

Threads / Copies: 1
  NVMe Ext4: 12.98  (SE +/- 0.06, N = 3)
  NVMe XFS:  15.17  (SE +/- 0.11, N = 15)

Threads / Copies: 8
  NVMe Ext4: 40.16  (SE +/- 0.01, N = 3)
  NVMe XFS:  43.11  (SE +/- 0.11, N = 3)

Threads / Copies: 32
  NVMe Ext4: 120.64  (SE +/- 0.49, N = 3)
  NVMe XFS:  124.99  (SE +/- 0.41, N = 3)

Threads / Copies: 64
  NVMe Ext4: 229.87  (SE +/- 2.44, N = 3)
  NVMe XFS:  234.97  (SE +/- 1.80, N = 3)

Threads / Copies: 128
  NVMe Ext4: 416.79  (SE +/- 2.40, N = 3)
  NVMe XFS:  376.59  (SE +/- 3.16, N = 3)
Phoronix Test Suite v10.8.5