oneDNN 1.8.0
pts/onednn-1.8.0
- 29 March 2022 -
Update against oneDNN 2.6 upstream.
downloads.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.2-->
<PhoronixTestSuite>
  <Downloads>
    <Package>
      <URL>https://github.com/oneapi-src/oneDNN/archive/refs/tags/v2.6.tar.gz</URL>
      <MD5>2ef4cf81912f55abfe1d45bb14a33c1c</MD5>
      <SHA256>9695640f55acd833ddcef4776af15e03446c4655f9296e5074b1b178dd7a4fb2</SHA256>
      <FileName>oneDNN-2.6.tar.gz</FileName>
      <FileSize>5840464</FileSize>
    </Package>
  </Downloads>
</PhoronixTestSuite>
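The checksums recorded above can be verified by hand before building; a minimal sketch using standard coreutils (file and hash values taken from downloads.xml, assuming the tarball sits in the current directory):

# Fetch the source tarball named in downloads.xml
wget https://github.com/oneapi-src/oneDNN/archive/refs/tags/v2.6.tar.gz -O oneDNN-2.6.tar.gz

# Compare against the MD5 and SHA256 recorded in the profile
echo "2ef4cf81912f55abfe1d45bb14a33c1c  oneDNN-2.6.tar.gz" | md5sum -c -
echo "9695640f55acd833ddcef4776af15e03446c4655f9296e5074b1b178dd7a4fb2  oneDNN-2.6.tar.gz" | sha256sum -c -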
install.sh
#!/bin/sh
tar -xf oneDNN-2.6.tar.gz
cd oneDNN-2.6/
mkdir build
cd build
CFLAGS="-O3 -march=native $CFLAGS" CXXFLAGS="-O3 -march=native $CXXFLAGS" cmake -DCMAKE_BUILD_TYPE=Release MKLDNN_ARCH_OPT_FLAGS="-O3 -march=native $CFLAGS" $CMAKE_OPTIONS ..
make -j $NUM_CPU_CORES
echo $? > ~/install-exit-status
cd ~
echo "#!/bin/bash
export DNNL_CPU_RUNTIME=OMP
export OMP_PLACES=cores
export OMP_PROC_BIND=close
cd oneDNN-2.6/build/tests/benchdnn
./benchdnn \$4 --mode=p \$1 \$3 \$2 > \$LOG_FILE 2>&1
echo \$? > ~/test-exit-status" > onednn
chmod +x onednn
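When the Phoronix Test Suite runs the profile, the selected option values are passed to the generated onednn wrapper as positional arguments, which the script reorders into a single benchdnn command line: --mode=p selects benchdnn's performance mode, and $LOG_FILE is the log path the test suite exports at run time. A sketch of a manual invocation (the argument splitting is an assumption inferred from the script above, shown here with the f32 convolution harness on the CPU engine):

# $1=--conv  $2=--batch=...  $3=--cfg=f32  $4=--engine=cpu
LOG_FILE=/tmp/onednn.log ./onednn --conv --batch=inputs/conv/shapes_auto --cfg=f32 --engine=cpu
# ...which the wrapper expands to:
#   ./benchdnn --engine=cpu --mode=p --conv --cfg=f32 --batch=inputs/conv/shapes_auto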
results-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.2-->
<PhoronixTestSuite>
  <ResultsParser>
    <OutputTemplate>total perf min(ms) #_MIN_RESULT_# avg(ms) #_RESULT_#</OutputTemplate>
    <LineHint>total perf</LineHint>
    <TurnCharsToSpace>:</TurnCharsToSpace>
  </ResultsParser>
</PhoronixTestSuite>
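The parser keys on lines containing "total perf" (the LineHint), and TurnCharsToSpace rewrites colons as spaces so the line lines up with the OutputTemplate placeholders. A hypothetical benchdnn perf-mode summary line (the exact format and the values are assumptions for illustration):

total perf: min(ms):512.43 avg(ms):538.96

After the colon-to-space substitution this reads "total perf  min(ms) 512.43 avg(ms) 538.96", so #_MIN_RESULT_# captures the minimum time and #_RESULT_# the average, which becomes the reported result in ms.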
test-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.2-->
<PhoronixTestSuite>
  <TestInformation>
    <Title>oneDNN</Title>
    <AppVersion>2.6</AppVersion>
    <Description>This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The reported result is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI.</Description>
    <ResultScale>ms</ResultScale>
    <Proportion>LIB</Proportion>
    <TimesToRun>3</TimesToRun>
  </TestInformation>
  <TestProfile>
    <Version>1.8.0</Version>
    <SupportedPlatforms>Linux, MacOSX</SupportedPlatforms>
    <SoftwareType>Utility</SoftwareType>
    <TestType>Processor</TestType>
    <License>Free</License>
    <Status>Verified</Status>
    <ExternalDependencies>build-utilities, cmake</ExternalDependencies>
    <EnvironmentSize>287</EnvironmentSize>
    <ProjectURL>https://www.intel.com/content/www/us/en/developer/tools/oneapi/onednn.html</ProjectURL>
    <RepositoryURL>https://github.com/oneapi-src/oneDNN</RepositoryURL>
    <InternalTags>SMP</InternalTags>
    <Maintainer>Michael Larabel</Maintainer>
  </TestProfile>
  <TestSettings>
    <Option>
      <DisplayName>Harness</DisplayName>
      <Identifier>harness</Identifier>
      <Menu>
        <Entry>
          <Name>Convolution Batch Shapes Auto</Name>
          <Value>--conv --batch=inputs/conv/shapes_auto</Value>
        </Entry>
        <Entry>
          <Name>Deconvolution Batch shapes_1d</Name>
          <Value>--deconv --batch=inputs/deconv/shapes_1d</Value>
        </Entry>
        <Entry>
          <Name>Deconvolution Batch shapes_3d</Name>
          <Value>--deconv --batch=inputs/deconv/shapes_3d</Value>
        </Entry>
        <Entry>
          <Name>IP Shapes 1D</Name>
          <Value>--ip --batch=inputs/ip/shapes_1d</Value>
        </Entry>
        <Entry>
          <Name>IP Shapes 3D</Name>
          <Value>--ip --batch=inputs/ip/shapes_3d</Value>
        </Entry>
        <Entry>
          <Name>Matrix Multiply Batch Shapes Transformer</Name>
          <Value>--matmul --batch=inputs/matmul/shapes_transformer</Value>
        </Entry>
        <Entry>
          <Name>Recurrent Neural Network Training</Name>
          <Value>--rnn --batch=inputs/rnn/perf_rnn_training</Value>
        </Entry>
        <Entry>
          <Name>Recurrent Neural Network Inference</Name>
          <Value>--rnn --batch=inputs/rnn/perf_rnn_inference_lb</Value>
        </Entry>
      </Menu>
    </Option>
    <Option>
      <DisplayName>Data Type</DisplayName>
      <Identifier>data-type</Identifier>
      <ArgumentPrefix>--cfg=</ArgumentPrefix>
      <Menu>
        <Entry>
          <Name>f32</Name>
          <Value>f32</Value>
        </Entry>
        <Entry>
          <Name>u8s8f32</Name>
          <Value>u8s8f32</Value>
          <Message>Optimized For AVX-512</Message>
        </Entry>
        <Entry>
          <Name>bf16bf16bf16</Name>
          <Value>bf16bf16bf16</Value>
          <Message>Optimized For AVX-512 + VNNI</Message>
        </Entry>
      </Menu>
    </Option>
    <Option>
      <DisplayName>Engine</DisplayName>
      <Identifier>engine</Identifier>
      <Menu>
        <Entry>
          <Name>CPU</Name>
          <Value>--engine=cpu</Value>
        </Entry>
      </Menu>
    </Option>
  </TestSettings>
</PhoronixTestSuite>
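To run this profile, the standard Phoronix Test Suite front-end commands apply; a benchmark run prompts for (or cycles through) the Harness, Data Type, and Engine options defined above:

phoronix-test-suite install pts/onednn-1.8.0
phoronix-test-suite benchmark pts/onednn-1.8.0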