oneDNN 1.7.0
pts/onednn-1.7.0
- 13 March 2021 -
Update against oneDNN 2.1.2 upstream.
downloads.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.2.2-->
<PhoronixTestSuite>
  <Downloads>
    <Package>
      <URL>https://github.com/oneapi-src/oneDNN/archive/v2.1.2.tar.gz</URL>
      <MD5>1df4f16f650b7ea08610a10af013faa3</MD5>
      <SHA256>cca53231ec99878dc7ef3cf4984525df4691b8174e703b40dd530c50531ecea0</SHA256>
      <FileName>oneDNN-2.1.2.tar.gz</FileName>
      <FileSize>9277121</FileSize>
    </Package>
  </Downloads>
</PhoronixTestSuite>
install.sh
#!/bin/sh
tar -xf oneDNN-2.1.2.tar.gz
cd oneDNN-2.1.2/
mkdir build
cd build
CFLAGS="-O3 -march=native $CFLAGS" CXXFLAGS="-O3 -march=native $CXXFLAGS" cmake -DCMAKE_BUILD_TYPE=Release MKLDNN_ARCH_OPT_FLAGS="-O3 -march=native $CFLAGS" $CMAKE_OPTIONS ..
make -j $NUM_CPU_CORES
echo $? > ~/install-exit-status
cd ~
echo "#!/bin/bash
export DNNL_CPU_RUNTIME=OMP
export OMP_PLACES=cores
export OMP_PROC_BIND=close
cd oneDNN-2.1.2/build/tests/benchdnn
./benchdnn \$4 --mode=p \$1 \$3 \$2 > \$LOG_FILE 2>&1
echo \$? > ~/test-exit-status" > onednn
chmod +x onednn
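For illustration, this sketch prints the kind of benchdnn command line that results once the option values from test-definition.xml are substituted into the generated wrapper. The argument ordering shown here is an assumption (the wrapper interleaves its positional parameters as "$4 --mode=p $1 $3 $2"), and the command is echoed rather than executed.

```shell
# Illustrative only: assemble the benchdnn command for one option
# combination (Convolution harness, f32 data type, CPU engine).
# Ordering is an assumption; benchdnn itself is not invoked here.
harness='--conv --batch=inputs/conv/shapes_auto'
cfg='--cfg=f32'
engine='--engine=cpu'
cmd="./benchdnn --mode=p $harness $cfg $engine"
echo "$cmd"
```

The `--mode=p` flag puts benchdnn into performance-measurement mode, which is what produces the "total perf" summary line that the results parser consumes.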
results-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.2.2-->
<PhoronixTestSuite>
  <ResultsParser>
    <OutputTemplate>total perf min(ms) #_MIN_RESULT_# avg(ms) #_RESULT_#</OutputTemplate>
    <LineHint>total perf</LineHint>
    <TurnCharsToSpace>:</TurnCharsToSpace>
  </ResultsParser>
</PhoronixTestSuite>
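The parser rules above can be sketched in shell: per TurnCharsToSpace, every ':' in the matched line is converted to a space before the template is applied, and the tokens following the "min(ms)" and "avg(ms)" labels are captured as #_MIN_RESULT_# and #_RESULT_#. The sample input is a hypothetical benchdnn summary line whose format is assumed from the OutputTemplate.

```shell
# A minimal sketch of the parsing above, using a hypothetical benchdnn
# summary line (the real format is assumed from the OutputTemplate).
line='total perf: min(ms):12.34 avg(ms):15.67'

# TurnCharsToSpace: replace ':' with a space before template matching.
cleaned=$(printf '%s' "$line" | tr ':' ' ')

# The tokens after 'min(ms)' and 'avg(ms)' become MIN_RESULT and RESULT.
min=$(echo "$cleaned" | awk '{for (i = 1; i <= NF; i++) if ($i == "min(ms)") print $(i + 1)}')
avg=$(echo "$cleaned" | awk '{for (i = 1; i <= NF; i++) if ($i == "avg(ms)") print $(i + 1)}')
echo "MIN_RESULT=$min RESULT=$avg"
```

The LineHint ("total perf") simply selects which line of the benchdnn log the template is matched against.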
test-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.2.2-->
<PhoronixTestSuite>
  <TestInformation>
    <Title>oneDNN</Title>
    <AppVersion>2.1.2</AppVersion>
    <Description>This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative.</Description>
    <ResultScale>ms</ResultScale>
    <Proportion>LIB</Proportion>
    <TimesToRun>3</TimesToRun>
  </TestInformation>
  <TestProfile>
    <Version>1.7.0</Version>
    <SupportedPlatforms>Linux, MacOSX</SupportedPlatforms>
    <SoftwareType>Utility</SoftwareType>
    <TestType>Processor</TestType>
    <License>Free</License>
    <Status>Verified</Status>
    <ExternalDependencies>build-utilities, cmake</ExternalDependencies>
    <EnvironmentSize>287</EnvironmentSize>
    <ProjectURL>https://github.com/oneapi-src/oneDNN</ProjectURL>
    <InternalTags>SMP</InternalTags>
    <Maintainer>Michael Larabel</Maintainer>
  </TestProfile>
  <TestSettings>
    <Option>
      <DisplayName>Harness</DisplayName>
      <Identifier>harness</Identifier>
      <Menu>
        <Entry>
          <Name>Convolution Batch Shapes Auto</Name>
          <Value>--conv --batch=inputs/conv/shapes_auto</Value>
        </Entry>
        <Entry>
          <Name>Deconvolution Batch shapes_1d</Name>
          <Value>--deconv --batch=inputs/deconv/shapes_1d</Value>
        </Entry>
        <Entry>
          <Name>Deconvolution Batch shapes_3d</Name>
          <Value>--deconv --batch=inputs/deconv/shapes_3d</Value>
        </Entry>
        <Entry>
          <Name>IP Shapes 1D</Name>
          <Value>--ip --batch=inputs/ip/shapes_1d</Value>
        </Entry>
        <Entry>
          <Name>IP Shapes 3D</Name>
          <Value>--ip --batch=inputs/ip/shapes_3d</Value>
        </Entry>
        <Entry>
          <Name>Matrix Multiply Batch Shapes Transformer</Name>
          <Value>--matmul --batch=inputs/matmul/shapes_transformer</Value>
        </Entry>
        <Entry>
          <Name>Recurrent Neural Network Training</Name>
          <Value>--rnn --batch=inputs/rnn/perf_rnn_training</Value>
        </Entry>
        <Entry>
          <Name>Recurrent Neural Network Inference</Name>
          <Value>--rnn --batch=inputs/rnn/perf_rnn_inference_lb</Value>
        </Entry>
      </Menu>
    </Option>
    <Option>
      <DisplayName>Data Type</DisplayName>
      <Identifier>data-type</Identifier>
      <ArgumentPrefix>--cfg=</ArgumentPrefix>
      <Menu>
        <Entry>
          <Name>f32</Name>
          <Value>f32</Value>
        </Entry>
        <Entry>
          <Name>u8s8f32</Name>
          <Value>u8s8f32</Value>
          <Message>Optimized For AVX-512</Message>
        </Entry>
        <Entry>
          <Name>bf16bf16bf16</Name>
          <Value>bf16bf16bf16</Value>
          <Message>Optimized For AVX-512 + VNNI</Message>
        </Entry>
      </Menu>
    </Option>
    <Option>
      <DisplayName>Engine</DisplayName>
      <Identifier>engine</Identifier>
      <Menu>
        <Entry>
          <Name>CPU</Name>
          <Value>--engine=cpu</Value>
        </Entry>
      </Menu>
    </Option>
  </TestSettings>
</PhoronixTestSuite>