Intel Xeon E3-1275 v6 testing with an ASUS P10S-C (4503 BIOS) and ASPEED on Ubuntu 24.04 via the Phoronix Test Suite.
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Notes: NONE / compress=zstd:3,discard=async,relatime,rw,space_cache=v2,ssd,subvol=/,subvolid=5 / RAID1 Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xf8 - Thermald 2.5.6
Java Notes: OpenJDK Runtime Environment (build 11.0.23+9-post-Ubuntu-1ubuntu1)
Python Notes: Python 3.12.3
Security Notes: gather_data_sampling: Mitigation of Microcode + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + reg_file_data_sampling: Not affected + retbleed: Mitigation of IBRS + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of IBRS; IBPB: conditional; STIBP: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Mitigation of Microcode + tsx_async_abort: Not affected
Processor: Intel Core i3-7100 @ 3.90GHz (2 Cores / 4 Threads), Motherboard: ASUS P10S-C (4503 BIOS), Chipset: Intel Xeon E3-1200 v6/7th, Memory: 2 x 8 GB DDR4-2133MT/s W-MEM21E4D88GHL, Disk: 2 x 1000GB Samsung SSD 980 1TB, Graphics: ASPEED, Network: 4 x Intel I210
OS: Ubuntu 24.04, Kernel: 6.8.0-35-generic (x86_64), Compiler: GCC 13.2.0, File-System: btrfs, Screen Resolution: 1024x768
Changed Processor to Intel Xeon E3-1275 v6 @ 4.20GHz (4 Cores / 8 Threads).
Changed Memory to 4 x 16 GB DDR4-2400MT/s HMA82GU7CJR8N-VK.
Security Change: gather_data_sampling: Mitigation of Microcode + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + reg_file_data_sampling: Not affected + retbleed: Mitigation of IBRS + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of IBRS; IBPB: conditional; STIBP: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of TSX disabled
LevelDB is a key-value storage library developed by Google that supports using Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.
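As a hedged illustration only: the PTS test drives LevelDB's bundled db_bench tool, but the same key/value operations (with Snappy compression) can be sketched through the third-party plyvel Python bindings; the path and key layout below are made up.

    import plyvel  # assumption: pip install plyvel (third-party LevelDB bindings)

    # Open a database with Snappy compression, matching the compression option noted above.
    db = plyvel.DB('/tmp/leveldb-demo', create_if_missing=True, compression='snappy')

    # Sequential fill followed by a point read, loosely mirroring the
    # "Sequential Fill" and "Hot Read" scenarios listed below.
    for i in range(1_000):
        db.put(b'key%08d' % i, b'x' * 100)
    print(db.get(b'key%08d' % 42))

    db.close()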
Benchmark: Hot Read
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Fill Sync
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Overwrite
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Random Fill
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Random Read
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Seek Random
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Random Delete
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
Benchmark: Sequential Fill
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found
This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.
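Since the profile simply times a fixed number of insertions against an indexed database, a minimal Python sketch of that pattern (the schema and row count here are illustrative, not the profile's own) looks like:

    import sqlite3, time

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)')
    conn.execute('CREATE INDEX idx_payload ON t (payload)')  # indexed, so each insert also updates the index

    start = time.perf_counter()
    conn.executemany('INSERT INTO t (payload) VALUES (?)',
                     ((f'row-{i}',) for i in range(100_000)))
    conn.commit()
    print(f'insert time: {time.perf_counter() - start:.3f} s')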
BlogBench is designed to replicate the load of a real-world busy file server by stressing the file-system with multiple threads of random reads, writes, and rewrites. It mimics the behavior of a blog by creating blogs with content and pictures, modifying blog posts, adding comments to these blogs, and then reading the content of the blogs back. All of the generated blogs are created locally with fake content and pictures. Learn more via the OpenBenchmarking.org test page.
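BlogBench itself is a C program; as a rough, hypothetical analogue of the access pattern described above (several threads issuing random writes, rewrites, and reads against a directory of files), consider:

    import os, random, tempfile, threading

    root = tempfile.mkdtemp(prefix='blogbench-demo-')

    def worker(ops=200):
        for _ in range(ops):
            path = os.path.join(root, f'post-{random.randrange(50)}.txt')
            if random.random() < 0.5:
                with open(path, 'w') as f:      # create or rewrite a "blog post"
                    f.write('content ' * random.randrange(1, 100))
            elif os.path.exists(path):
                with open(path) as f:           # read an existing post back
                    f.read()

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()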
This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects such as Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
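The benchmark runs the C++ simdjson library directly; purely as a hedged illustration, the third-party pysimdjson bindings (an assumption) expose the same parser to Python and can be compared against the standard-library parser on the same document:

    import json
    import simdjson  # assumption: pip install pysimdjson

    payload = json.dumps({'records': [{'id': i, 'ok': True} for i in range(1_000)]}).encode()

    doc = simdjson.Parser().parse(payload)   # SIMD-accelerated parse
    assert doc['records'][0]['id'] == 0

    baseline = json.loads(payload)           # stdlib parse of the same bytes
    assert baseline['records'][0]['id'] == 0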
This test runs a Node.js Express server with a Node-based loadtest client to facilitate HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result.
i3-7100 btrfs 16GB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result.
E3-1200 btrfs 64GB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result.
This is a test of ebizzy, a program that generates workloads resembling those of a web server. Learn more via the OpenBenchmarking.org test page.
A Perl benchmark suite that can be used to compare the relative speed of different versions of Perl. Learn more via the OpenBenchmarking.org test page.
OpenSSL is an open-source toolkit that implements the SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
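Since the profile simply wraps the toolkit's own "openssl speed" mode, the same command can be driven directly; the digest chosen here is illustrative only:

    import subprocess

    # Run the built-in benchmark for one digest; the final stdout line is the
    # throughput summary row for sha256.
    result = subprocess.run(['openssl', 'speed', 'sha256'],
                            capture_output=True, text=True, check=True)
    print(result.stdout.splitlines()[-1])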
Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile measures the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.
ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million row web analytics dataset. The reported value is the geometric mean of the processing times of all individual queries, taken as an aggregate. Learn more via the OpenBenchmarking.org test page.
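For clarity on how that aggregate is formed, the geometric mean of the per-query times is computed like this (the times below are made-up placeholders, not measured values):

    import math

    query_times = [0.042, 0.310, 1.250, 0.088]   # seconds, illustrative only
    geomean = math.exp(sum(math.log(t) for t in query_times) / len(query_times))
    print(f'geometric mean: {geomean:.3f} s')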
Etcd is a distributed, reliable key-value store intended for critical data of a distributed system. Etcd is written in Golang and part of the Cloud Native Computing Foundation (CNCF) and used by Kubernetes, Rook, CoreDNS, and other open-source software. This test profile uses Etcd's built-in benchmark to stress the PUT and RANGE performance of a single node / local system. Learn more via the OpenBenchmarking.org test page.
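The profile uses etcd's bundled Go benchmark tool; as a hedged sketch of the two operations it stresses, the third-party python-etcd3 client (an assumption) can issue the same PUT and RANGE requests against a local node:

    import etcd3  # assumption: pip install etcd3

    client = etcd3.client(host='127.0.0.1', port=2379)

    # PUT phase
    for i in range(100):
        client.put(f'/bench/key-{i}', 'value')

    # RANGE phase: a single-key get and a prefix scan
    value, metadata = client.get('/bench/key-42')
    for value, metadata in client.get_prefix('/bench/'):
        pass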
Test: PUT - Connections: 50 - Clients: 100
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./etcd: 5: ./bin/etcd: not found
Test: PUT - Connections: 100 - Clients: 100
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./etcd: 5: ./bin/etcd: not found
Test: PUT - Connections: 50 - Clients: 1000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./etcd: 5: ./bin/etcd: not found
Test: PUT - Connections: 500 - Clients: 100
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./etcd: 5: ./bin/etcd: not found
Test: PUT - Connections: 100 - Clients: 1000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./etcd: 5: ./bin/etcd: not found
Test: PUT - Connections: 500 - Clients: 1000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./etcd: 5: ./bin/etcd: not found
Test: RANGE - Connections: 50 - Clients: 100
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./etcd: 5: ./bin/etcd: not found
Test: RANGE - Connections: 100 - Clients: 100
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./etcd: 5: ./bin/etcd: not found
Test: RANGE - Connections: 50 - Clients: 1000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./etcd: 5: ./bin/etcd: not found
Test: RANGE - Connections: 500 - Clients: 100
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./etcd: 5: ./bin/etcd: not found
Test: RANGE - Connections: 100 - Clients: 1000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: ./etcd: 5: ./bin/etcd: not found
Test: RANGE - Connections: 500 - Clients: 1000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result.
This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
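CouchDB exposes bulk insertion over its documented _bulk_docs HTTP endpoint, which is what such a workload exercises; a minimal sketch with the requests library follows (the database name and credentials are placeholders):

    import requests

    base = 'http://admin:password@127.0.0.1:5984'
    requests.put(f'{base}/bench_db')   # create the database; harmless if it already exists

    docs = {'docs': [{'title': f'post {i}', 'body': 'x' * 256} for i in range(500)]}
    resp = requests.post(f'{base}/bench_db/_bulk_docs', json=docs)
    resp.raise_for_status()
    print(len(resp.json()), 'documents accepted')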
This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and performing various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
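A hedged sketch of the row-count / partition-count sweep that the result entries below iterate over (the real test submits DIYBigData's pyspark-benchmark scripts via spark-submit; the aggregation here is only illustrative):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master('local[*]').appName('demo').getOrCreate()

    row_count, partitions = 1_000_000, 100
    df = spark.range(row_count).repartition(partitions)

    # A shuffle-heavy aggregation in the spirit of the benchmarked operations
    df.groupBy((df.id % 1000).alias('bucket')).count().collect()

    spark.stop()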
Row Count: 1000000 - Partitions: 100
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 1000000 - Partitions: 500
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 1000000 - Partitions: 1000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 1000000 - Partitions: 2000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 10000000 - Partitions: 100
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 10000000 - Partitions: 500
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 20000000 - Partitions: 100
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 20000000 - Partitions: 500
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 40000000 - Partitions: 100
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 40000000 - Partitions: 500
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 10000000 - Partitions: 1000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 10000000 - Partitions: 2000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 20000000 - Partitions: 1000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 20000000 - Partitions: 2000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 40000000 - Partitions: 1000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
Row Count: 40000000 - Partitions: 2000
2 x 1000GB Samsung SSD 980 1TB: The test run did not produce a result. The test run did not produce a result. The test run did not produce a result. E: _pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
This is a benchmark of Apache Spark using the TPC-DS data-set. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration and leverages the https://github.com/databricks/tpcds-kit and https://github.com/IBM/spark-tpc-ds-performance-test/ projects for testing. Learn more via the OpenBenchmarking.org test page.