OpenBenchmarking.org
oneDNN MKL-DNN 1.2.0
pts/mkl-dnn-1.2.0
- 09 April 2020 -
Update against oneDNN 1.3 sources.
downloads.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v9.6.0m2-->
<PhoronixTestSuite>
  <Downloads>
    <Package>
      <URL>https://github.com/oneapi-src/oneDNN/archive/v1.3.tar.gz</URL>
      <MD5>fcc9c37d9d418b6ccb4adeab84e7e8e4</MD5>
      <SHA256>b87c23b40a93ef5e479c81028db71c4847225b1a170f82af5e79f1cda826d3bf</SHA256>
      <FileName>oneDNN-1.3.tar.gz</FileName>
      <FileSize>5173993</FileSize>
    </Package>
  </Downloads>
</PhoronixTestSuite>
install.sh
#!/bin/sh
tar -xf oneDNN-1.3.tar.gz
cd oneDNN-1.3/
#./scripts/prepare_mkl.sh
mkdir build
cd build
CFLAGS="-O3 -march=native" CXXFLAGS="-O3 -march=native" cmake -DCMAKE_BUILD_TYPE=Release -DMKLDNN_ARCH_OPT_FLAGS="-O3 -march=native" $CMAKE_OPTIONS ..
make -j $NUM_CPU_CORES
echo $? > ~/install-exit-status
cd ~
echo "#!/bin/bash
export DNNL_CPU_RUNTIME=OMP
export OMP_PLACES=cores
export OMP_PROC_BIND=close
cd oneDNN-1.3/build/tests/benchdnn
./benchdnn --engine=cpu --mode=p \$1 \$3 \$2 > \$LOG_FILE 2>&1
echo \$? > ~/test-exit-status" > mkl-dnn
chmod +x mkl-dnn
results-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v9.6.0m2-->
<PhoronixTestSuite>
  <ResultsParser>
    <OutputTemplate>total perf min(ms) #_MIN_RESULT_# avg(ms) #_RESULT_#</OutputTemplate>
    <LineHint>total perf</LineHint>
    <TurnCharsToSpace>:</TurnCharsToSpace>
  </ResultsParser>
</PhoronixTestSuite>
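The parser above anchors on the "total perf" line of benchdnn's output, maps every ':' to a space per TurnCharsToSpace, and reads the tokens that follow the template's min(ms) and avg(ms) anchors. A rough Python sketch of that matching; the exact shape of benchdnn's summary line is an assumption here, not confirmed by the profile:

```python
LINE_HINT = "total perf"  # from <LineHint>
# Anchor word -> result placeholder, from <OutputTemplate>
ANCHORS = {"min(ms)": "MIN_RESULT", "avg(ms)": "RESULT"}

def parse_result(output):
    """Find the hinted line, map ':' to spaces (<TurnCharsToSpace>), then
    read the token following each template anchor as a float."""
    for line in output.splitlines():
        if LINE_HINT not in line:
            continue
        tokens = line.replace(":", " ").split()
        results = {}
        for anchor, key in ANCHORS.items():
            if anchor in tokens:
                results[key] = float(tokens[tokens.index(anchor) + 1])
        return results
    return None  # no line matched the hint
```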
test-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v9.6.0m2-->
<PhoronixTestSuite>
  <TestInformation>
    <Title>oneDNN MKL-DNN</Title>
    <AppVersion>1.3</AppVersion>
    <Description>This is a test of Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN), an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported.</Description>
    <ResultScale>ms</ResultScale>
    <Proportion>LIB</Proportion>
    <TimesToRun>3</TimesToRun>
  </TestInformation>
  <TestProfile>
    <Version>1.2.0</Version>
    <SupportedPlatforms>Linux</SupportedPlatforms>
    <SoftwareType>Utility</SoftwareType>
    <TestType>Processor</TestType>
    <License>Free</License>
    <Status>Verified</Status>
    <ExternalDependencies>build-utilities, cmake</ExternalDependencies>
    <EnvironmentSize>355</EnvironmentSize>
    <ProjectURL>https://github.com/oneapi-src/oneDNN</ProjectURL>
    <InternalTags>SMP, OpenMP</InternalTags>
    <Maintainer>Michael Larabel</Maintainer>
  </TestProfile>
  <TestSettings>
    <Option>
      <DisplayName>Harness</DisplayName>
      <Identifier>harness</Identifier>
      <Menu>
        <Entry>
          <Name>Deconvolution Batch deconv_1d</Name>
          <Value>--deconv --batch=inputs/deconv/deconv_1d</Value>
        </Entry>
        <Entry>
          <Name>Deconvolution Batch deconv_3d</Name>
          <Value>--deconv --batch=inputs/deconv/deconv_3d</Value>
        </Entry>
        <Entry>
          <Name>IP Batch 1D</Name>
          <Value>--ip --batch=inputs/ip/ip_1d</Value>
        </Entry>
        <Entry>
          <Name>IP Batch All</Name>
          <Value>--ip --batch=inputs/ip/ip_all</Value>
        </Entry>
        <Entry>
          <Name>Recurrent Neural Network Training</Name>
          <Value>--rnn --batch=inputs/rnn/rnn_training</Value>
        </Entry>
        <Entry>
          <Name>Recurrent Neural Network Inference</Name>
          <Value>--rnn --batch=inputs/rnn/rnn_inference</Value>
        </Entry>
      </Menu>
    </Option>
    <Option>
      <DisplayName>Data Type</DisplayName>
      <Identifier>data-type</Identifier>
      <ArgumentPrefix>--cfg=</ArgumentPrefix>
      <Menu>
        <Entry>
          <Name>f32</Name>
          <Value>f32</Value>
        </Entry>
        <Entry>
          <Name>f16</Name>
          <Value>f16</Value>
        </Entry>
        <Entry>
          <Name>u8s8f32</Name>
          <Value>u8s8f32</Value>
          <Message>Optimized For AVX-512</Message>
        </Entry>
        <Entry>
          <Name>bf16bf16bf16</Name>
          <Value>bf16bf16bf16</Value>
          <Message>Optimized For AVX-512 + VNNI</Message>
        </Entry>
      </Menu>
    </Option>
  </TestSettings>
</PhoronixTestSuite>
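Each run pairs one Harness entry with one Data Type entry, and the data-type value receives the --cfg= ArgumentPrefix before the pair is passed to the mkl-dnn wrapper. A small Python sketch of how the option values above combine into a benchdnn argument string; the dictionary keys and values are copied from the profile, while the function itself is illustrative:

```python
# Option values copied from test-definition.xml above
HARNESSES = {
    "Deconvolution Batch deconv_1d": "--deconv --batch=inputs/deconv/deconv_1d",
    "Deconvolution Batch deconv_3d": "--deconv --batch=inputs/deconv/deconv_3d",
    "IP Batch 1D": "--ip --batch=inputs/ip/ip_1d",
    "IP Batch All": "--ip --batch=inputs/ip/ip_all",
    "Recurrent Neural Network Training": "--rnn --batch=inputs/rnn/rnn_training",
    "Recurrent Neural Network Inference": "--rnn --batch=inputs/rnn/rnn_inference",
}
DATA_TYPES = ("f32", "f16", "u8s8f32", "bf16bf16bf16")

def benchdnn_args(harness, data_type):
    """Join the chosen harness value with the data-type value, prepending
    the --cfg= ArgumentPrefix from the profile's second option."""
    if harness not in HARNESSES or data_type not in DATA_TYPES:
        raise ValueError("unknown option")
    return f"{HARNESSES[harness]} --cfg={data_type}"
```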