ODROID-N2 Benchmark Comparison

ODROID-N2 benchmarks for a future article.

HTML result view exported from: https://openbenchmarking.org/result/1904251-HV-ODROIDN2760&rdt&grw.
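This comparison should also be reproducible locally with the Phoronix Test Suite: running "phoronix-test-suite benchmark 1904251-HV-ODROIDN2760" fetches the result file and offers to run the same test selection for a side-by-side result (standard PTS behavior for an OpenBenchmarking result ID).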

Test system configurations:

- Jetson AGX Xavier: ARMv8 rev 0 @ 2.27GHz (8 Cores); jetson-xavier motherboard; 16384MB memory; 31GB HBG4a2 disk; NVIDIA Tegra Xavier graphics; VE228 monitor; Ubuntu 18.04 with kernel 4.9.108-tegra (aarch64); Unity 7.5.0 on X Server 1.19.6; NVIDIA 31.0.2 display driver; OpenGL 4.6.0; Vulkan 1.1.76; GCC 7.3.0 + CUDA 10.0; ext4 file-system; 1920x1080 screen resolution.
- Jetson TX2 Max-P: ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads); quill motherboard; 8192MB memory; 31GB 032G34 disk; NVIDIA TEGRA graphics; Ubuntu 16.04 with kernel 4.4.38-tegra (aarch64); Unity 7.4.0 on X Server 1.18.4; NVIDIA 28.2.1 display driver; OpenGL 4.5.0; GCC 5.4.0 20160609 + CUDA 9.0.
- Jetson TX2 Max-Q: ARMv8 rev 3 @ 1.27GHz (4 Cores / 6 Threads); otherwise the same quill system as the Max-P entry, run in its lower-power Max-Q profile.
- Raspberry Pi 3 Model B+: ARMv7 rev 4 @ 1.40GHz (4 Cores); BCM2835 Raspberry Pi 3 Model B Plus Rev 1.3; 926MB memory; 32GB GB2MW disk; BCM2708 graphics; Raspbian 9.6 with kernel 4.19.23-v7+ (armv7l); LXDE on X Server 1.19.2; GCC 6.3.0 20170516; 656x416 screen resolution.
- ASUS TinkerBoard: ARMv7 rev 1 @ 1.80GHz (4 Cores); Rockchip (Device Tree); 2048MB memory; 32GB GB1QT disk; Debian 9.0 with kernel 4.4.16-00006-g4431f98-dirty (armv7l); X Server 1.18.4; 1024x768 screen resolution.
- Jetson TX1 Max-P: ARMv8 rev 1 @ 1.73GHz (4 Cores); jetson_tx1 motherboard; 4096MB memory; 16GB 016G32 disk; NVIDIA Tegra X1 graphics; VE228 monitor; Ubuntu 16.04 with kernel 4.4.38-tegra (aarch64); Unity 7.4.5; NVIDIA 28.1.0 display driver; OpenGL 4.5.0; Vulkan 1.0.8; GCC 5.4.0 20160609; 1920x1080 screen resolution.
- ODROID-XU4: ARMv7 rev 3 @ 1.50GHz (8 Cores); Hardkernel Odroid XU4; 2048MB memory; 16GB AJTD4R disk; llvmpipe 2GB graphics; Ubuntu 18.04 with kernel 4.14.37-135 (armv7l); X Server 1.19.6; OpenGL 3.3 Mesa 18.0.0-rc5 (LLVM 6.0 128 bits); GCC 7.3.0.
- Jetson Nano: ARMv8 rev 1 @ 1.43GHz (4 Cores); jetson-nano motherboard; 4096MB memory; 32GB GB1QT disk; NVIDIA TEGRA graphics; Realtek RTL8111/8168/8411 network; kernel 4.9.140-tegra (aarch64); Unity 7.5.0; NVIDIA 1.0.0 display driver; Vulkan 1.1.85; GCC 7.3.0 + CUDA 10.0.
- ODROID-N2: ARMv8 Cortex-A73 @ 1.90GHz (6 Cores); Hardkernel ODROID-N2; 16GB AJTD4R disk; OSD graphics; kernel 4.9.156-14 (aarch64); GCC 7.3.0; 1920x2160 screen resolution.
- ODROID-C2: Amlogic ARMv8 Cortex-A53 @ 1.54GHz (4 Cores); ODROID-C2; 2048MB memory; 32GB GB1QT disk; kernel 3.16.57-20 (aarch64); X Server 1.19.6; 1920x1080 screen resolution.

Compiler Details:
- Jetson AGX Xavier: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
- Jetson TX2 Max-P: --build=aarch64-linux-gnu --disable-browser-plugin --disable-libquadmath --disable-werror --enable-checking=release --enable-clocale=gnu --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --target=aarch64-linux-gnu --with-arch-directory=aarch64 --with-default-libstdcxx-abi=new -v
- Jetson TX2 Max-Q: --build=aarch64-linux-gnu --disable-browser-plugin --disable-libquadmath --disable-werror --enable-checking=release --enable-clocale=gnu --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --target=aarch64-linux-gnu --with-arch-directory=aarch64 --with-default-libstdcxx-abi=new -v
- Raspberry Pi 3 Model B+: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv6 --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfp --with-target-system-zlib -v
- ASUS TinkerBoard: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-mode=thumb --with-target-system-zlib -v
- Jetson TX1 Max-P: --build=aarch64-linux-gnu --disable-browser-plugin --disable-libquadmath --disable-werror --enable-checking=release --enable-clocale=gnu --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --target=aarch64-linux-gnu --with-arch-directory=aarch64 --with-default-libstdcxx-abi=new -v
- ODROID-XU4: --build=arm-linux-gnueabihf --disable-libitm --disable-libquadmath --disable-libquadmath-support --disable-sjlj-exceptions --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-gcc-major-version-only --with-mode=thumb --with-target-system-zlib -v
- Jetson Nano: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
- ODROID-N2: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
- ODROID-C2: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-as=/usr/bin/aarch64-linux-gnu-as --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-ld=/usr/bin/aarch64-linux-gnu-ld -v

Processor Details:
- Jetson AGX Xavier: Scaling Governor: tegra_cpufreq schedutil
- Jetson TX2 Max-P: Scaling Governor: tegra_cpufreq schedutil
- Jetson TX2 Max-Q: Scaling Governor: tegra_cpufreq schedutil
- Raspberry Pi 3 Model B+: Scaling Governor: BCM2835 Freq ondemand
- ASUS TinkerBoard: Scaling Governor: cpufreq-dt interactive
- Jetson TX1 Max-P: Scaling Governor: tegra-cpufreq interactive
- ODROID-XU4: Scaling Governor: cpufreq-dt ondemand
- Jetson Nano: Scaling Governor: tegra-cpufreq schedutil
- ODROID-N2: Scaling Governor: arm-big-little performance
- ODROID-C2: Scaling Governor: meson_cpufreq interactive

Python Details:
- Jetson AGX Xavier: Python 2.7.15rc1 + Python 3.6.7
- Jetson TX2 Max-P: Python 2.7.12 + Python 3.5.2
- Jetson TX2 Max-Q: Python 2.7.12 + Python 3.5.2
- Raspberry Pi 3 Model B+: Python 2.7.13 + Python 3.5.3
- ASUS TinkerBoard: Python 2.7.13 + Python 3.5.3
- Jetson TX1 Max-P: Python 2.7.12 + Python 3.5.2
- ODROID-XU4: Python 2.7.15rc1 + Python 3.6.7
- Jetson Nano: Python 2.7.15rc1 + Python 3.6.7
- ODROID-N2: Python 2.7.15rc1 + Python 3.6.7
- ODROID-C2: Python 2.7.15rc1 + Python 3.6.7

Kernel Details:
- ODROID-XU4: usbhid.quirks=0x0eef:0x0005:0x0004

Graphics Details:
- ODROID-XU4: EXA

Tests in this comparison:

- encode-flac: WAV To FLAC
- tesseract-ocr: Time To OCR 7 Images
- lczero: Backends BLAS, CUDA + cuDNN, and CUDA + cuDNN FP16
- tensorrt-inference: AlexNet, GoogleNet, ResNet50, ResNet152, VGG16, and VGG19, each at FP16 and INT8 precision with batch sizes 4 and 32 (DLA cores disabled)
- rust-prime: Prime Number Test To 200,000,000
- compress-7zip: Compress Speed Test
- compress-zstd: Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19
- c-ray: Total Time - 4K, 16 Rays Per Pixel
- ttsiod-renderer: Phong Rendering With Soft-Shadow Mapping
- opencv-bench
- pybench: Total For Average Test Times
- cuda-mini-nbody: Original
- glmark2: 1920 x 1080

Individual results for each test follow.

FLAC Audio Encoding

WAV To FLAC

FLAC Audio Encoding 1.3.2 - Seconds, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 54.47 (SE +/- 0.61, N = 5)
- Jetson TX2 Max-P: 65.07 (SE +/- 0.15, N = 5)
- Jetson TX2 Max-Q: 104.28 (SE +/- 0.18, N = 5)
- Raspberry Pi 3 Model B+: 339.53 (SE +/- 0.98, N = 5)
- ASUS TinkerBoard: 279.05 (SE +/- 2.51, N = 5)
- Jetson TX1 Max-P: 79.20 (SE +/- 0.74, N = 5)
- ODROID-XU4: 97.03 (SE +/- 0.31, N = 5)
- Jetson Nano: 104.77 (SE +/- 0.83, N = 5)
- ODROID-N2: 95.59 (SE +/- 0.27, N = 5)
- ODROID-C2: 262.31 (SE +/- 1.49, N = 5)

1. (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm
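Each result here and below is the mean of N runs with its standard error (the "SE +/-" values). Assuming the conventional definition of the standard error of the mean, a minimal Python sketch of that bookkeeping, using hypothetical run times:

    import math
    import statistics

    def mean_and_se(runs):
        # Standard error of the mean: sample standard deviation / sqrt(N).
        return statistics.mean(runs), statistics.stdev(runs) / math.sqrt(len(runs))

    # Hypothetical timings (seconds) for five FLAC encode passes.
    runs = [95.2, 95.5, 95.6, 95.7, 96.0]
    mean, se = mean_and_se(runs)
    print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(runs)})")  # 95.60 (SE +/- 0.13, N = 5)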

Tesseract OCR

Time To OCR 7 Images

Tesseract OCR 4.0.0-beta.1 - Seconds, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 71.94 (SE +/- 0.89, N = 3)
- ODROID-XU4: 180.66 (SE +/- 1.38, N = 3)
- Jetson Nano: 132.67 (SE +/- 1.50, N = 3)
- ODROID-N2: 110.73 (SE +/- 0.05, N = 3)
- ODROID-C2: 220.44 (SE +/- 0.86, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 1146.00 (SE +/- 4.31, N = 3)
- Jetson TX2 Max-P: 113.00 (SE +/- 1.65, N = 3)
- Jetson TX2 Max-Q: 88.88 (SE +/- 1.32, N = 3)
- Jetson Nano: 47.82 (SE +/- 0.60, N = 3)

LeelaChessZero

Backend: BLAS

LeelaChessZero 0.20.1 - Nodes Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 47.62 (SE +/- 0.62, N = 7)
- Jetson Nano: 15.37 (SE +/- 0.03, N = 3)
- ODROID-N2: 24.39 (SE +/- 0.10, N = 3)
- ODROID-C2: 7.33 (SE +/- 0.09, N = 7)

1. (CXX) g++ options: -lpthread -lz

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 2038 (SE +/- 2.07, N = 3)
- Jetson TX2 Max-P: 462 (SE +/- 7.68, N = 12)
- Jetson TX2 Max-Q: 374 (SE +/- 2.82, N = 3)
- Jetson Nano: 201 (SE +/- 1.59, N = 3)

LeelaChessZero

Backend: CUDA + cuDNN

LeelaChessZero 0.20.1 - Nodes Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 953 (SE +/- 6.14, N = 3)
- Jetson Nano: 140 (SE +/- 0.26, N = 3)

1. (CXX) g++ options: -lpthread -lz

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 172.50 (SE +/- 0.50, N = 3)
- Jetson TX2 Max-P: 26.56 (SE +/- 0.38, N = 3)
- Jetson TX2 Max-Q: 21.04 (SE +/- 0.34, N = 3)
- Jetson Nano: 11.59 (SE +/- 0.05, N = 2)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 224.19 (SE +/- 0.22, N = 3)
- Jetson TX2 Max-P: 35.11 (SE +/- 0.36, N = 3)
- Jetson TX2 Max-Q: 27.34 (SE +/- 0.34, N = 3)
- Jetson Nano: 15.76 (SE +/- 0.04, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 208.76 (SE +/- 0.10, N = 3)
- Jetson TX2 Max-P: 32.64 (SE +/- 0.50, N = 4)
- Jetson TX2 Max-Q: 25.99 (SE +/- 0.13, N = 3)
- Jetson Nano: 14.35 (SE +/- 0.02, N = 2)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 203.96 (SE +/- 0.04, N = 3)
- Jetson TX2 Max-P: 29.83 (SE +/- 0.05, N = 3)
- Jetson TX2 Max-Q: 23.94 (SE +/- 0.07, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 247.95 (SE +/- 0.12, N = 3)
- Jetson TX2 Max-P: 36.87 (SE +/- 0.31, N = 3)
- Jetson TX2 Max-Q: 29.83 (SE +/- 0.18, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 796.00 (SE +/- 2.48, N = 3)
- Jetson TX2 Max-P: 197.00 (SE +/- 2.27, N = 3)
- Jetson TX2 Max-Q: 156.00 (SE +/- 1.90, N = 12)
- Jetson Nano: 83.37 (SE +/- 0.70, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 1200 (SE +/- 1.82, N = 3)
- Jetson TX2 Max-P: 264 (SE +/- 7.77, N = 12)
- Jetson TX2 Max-Q: 216 (SE +/- 3.03, N = 6)
- Jetson Nano: 118 (SE +/- 2.12, N = 12)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 547.50 (SE +/- 0.03, N = 3)
- Jetson TX2 Max-P: 92.28 (SE +/- 1.32, N = 12)
- Jetson TX2 Max-Q: 72.01 (SE +/- 1.10, N = 12)
- Jetson Nano: 41.04 (SE +/- 0.25, N = 3)

LeelaChessZero

Backend: CUDA + cuDNN FP16

LeelaChessZero 0.20.1 - Nodes Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 2515.01 (SE +/- 7.60, N = 3)

1. (CXX) g++ options: -lpthread -lz

Rust Prime Benchmark

Prime Number Test To 200,000,000

Rust Prime Benchmark - Seconds, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 32.37 (SE +/- 0.00, N = 3)
- Jetson TX2 Max-P: 104.96 (SE +/- 0.04, N = 3)
- Jetson TX2 Max-Q: 170.25 (SE +/- 0.09, N = 3)
- Raspberry Pi 3 Model B+: 1097.69 (SE +/- 1.55, N = 3)
- ASUS TinkerBoard: 1821.05 (SE +/- 187.90, N = 6)
- Jetson TX1 Max-P: 128.45 (SE +/- 0.77, N = 3)
- ODROID-XU4: 574.11 (SE +/- 0.37, N = 3)
- Jetson Nano: 150.19 (SE +/- 0.22, N = 3)
- ODROID-N2: 73.11 (SE +/- 0.02, N = 3)
- ODROID-C2: 125.81 (SE +/- 0.30, N = 3)

1. (CC) gcc options: -pie -nodefaultlibs -ldl -lrt -lpthread -lgcc_s -lc -lm -lutil

7-Zip Compression

Compress Speed Test

7-Zip Compression 16.02 - MIPS, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 19212 (SE +/- 274.18, N = 12)
- Jetson TX2 Max-P: 5593 (SE +/- 20.85, N = 3)
- Jetson TX2 Max-Q: 3294 (SE +/- 13.05, N = 3)
- Raspberry Pi 3 Model B+: 2013 (SE +/- 23.74, N = 11)
- ASUS TinkerBoard: 2836 (SE +/- 34.93, N = 3)
- Jetson TX1 Max-P: 4508 (SE +/- 13.43, N = 3)
- ODROID-XU4: 4120 (SE +/- 89.16, N = 12)
- Jetson Nano: 4049 (SE +/- 18.00, N = 3)
- ODROID-N2: 5970 (SE +/- 2.40, N = 3)
- ODROID-C2: 2121 (SE +/- 7.36, N = 3)

1. (CXX) g++ options: -pipe -lpthread

Zstd Compression

Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

Zstd Compression 1.3.4 - Seconds, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 80.06 (SE +/- 0.91, N = 3)
- Jetson TX2 Max-P: 144.97 (SE +/- 0.29, N = 3)
- Jetson TX2 Max-Q: 253.80 (SE +/- 1.02, N = 3)
- Raspberry Pi 3 Model B+: 342.23 (SE +/- 1.03, N = 3)
- ASUS TinkerBoard: 496.62 (SE +/- 2.16, N = 3)
- Jetson TX1 Max-P: 145.80 (SE +/- 0.42, N = 3)
- Jetson Nano: 129.87 (SE +/- 0.23, N = 3)
- ODROID-N2: 152.04 (SE +/- 1.77, N = 3)
- ODROID-C2: 314.33 (SE +/- 1.41, N = 3)

1. (CC) gcc options: -O3 -pthread -lz -llzma

C-Ray

Total Time - 4K, 16 Rays Per Pixel

C-Ray 1.1 - Seconds, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 355 (SE +/- 7.17, N = 9)
- Jetson TX2 Max-P: 585 (SE +/- 49.09, N = 9)
- Jetson TX2 Max-Q: 869 (SE +/- 1.44, N = 3)
- Raspberry Pi 3 Model B+: 2030 (SE +/- 2.46, N = 3)
- ASUS TinkerBoard: 1718 (SE +/- 22.09, N = 3)
- Jetson TX1 Max-P: 753 (SE +/- 10.23, N = 3)
- ODROID-XU4: 827 (SE +/- 29.65, N = 9)
- Jetson Nano: 921 (SE +/- 0.35, N = 3)
- ODROID-N2: 492 (SE +/- 0.25, N = 3)
- ODROID-C2: 1535 (SE +/- 0.16, N = 3)

1. (CC) gcc options: -lm -lpthread -O3

TTSIOD 3D Renderer

Phong Rendering With Soft-Shadow Mapping

TTSIOD 3D Renderer 2.3b - FPS, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 133.00 (SE +/- 1.63, N = 12)
- Jetson TX2 Max-P: 49.26 (SE +/- 0.15, N = 3)
- Jetson TX2 Max-Q: 28.85 (SE +/- 0.46, N = 4)
- Raspberry Pi 3 Model B+: 17.66 (SE +/- 0.16, N = 3)
- ASUS TinkerBoard: 21.22 (SE +/- 0.27, N = 9)
- Jetson TX1 Max-P: 45.09 (SE +/- 0.04, N = 3)
- ODROID-XU4: 41.96 (SE +/- 0.97, N = 9)
- Jetson Nano: 40.94 (SE +/- 0.11, N = 3)
- ODROID-N2: 57.42 (SE +/- 0.05, N = 3)
- ODROID-C2: 22.10 (SE +/- 0.08, N = 3)

1. (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -lSDL -fopenmp -fwhole-program -lstdc++

OpenCV Benchmark

OpenCV Benchmark 3.3.0 - Seconds, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 128.00 (SE +/- 1.57, N = 3)
- Jetson TX2 Max-P: 296.00 (SE +/- 0.27, N = 3)
- Jetson TX2 Max-Q: 493.00 (SE +/- 5.74, N = 3)
- Raspberry Pi 3 Model B+: 2.74 (SE +/- 5.31, N = 3)
- ODROID-XU4: 520.70 (SE +/- 4.66, N = 9)
- Jetson Nano: 271.04 (SE +/- 0.26, N = 3)
- ODROID-N2: 243.05 (SE +/- 3.48, N = 3)
- ODROID-C2: 474.35

1. (CXX) g++ options: -std=c++11 -rdynamic

PyBench

Total For Average Test Times

PyBench 2018-02-16 - Milliseconds, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 3007 (SE +/- 4.67, N = 3)
- Jetson TX2 Max-P: 5408 (SE +/- 33.86, N = 3)
- Jetson TX2 Max-Q: 8735 (SE +/- 42.52, N = 3)
- Raspberry Pi 3 Model B+: 20913 (SE +/- 43.80, N = 3)
- ASUS TinkerBoard: 11502 (SE +/- 854.75, N = 9)
- Jetson TX1 Max-P: 6339 (SE +/- 18.55, N = 3)
- ODROID-XU4: 5009 (SE +/- 30.99, N = 3)
- Jetson Nano: 7084 (SE +/- 37.23, N = 3)
- ODROID-N2: 5231 (SE +/- 9.24, N = 3)
- ODROID-C2: 12184 (SE +/- 28.15, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 902.78 (SE +/- 1.86, N = 3)
- Jetson TX2 Max-P: 49.97 (SE +/- 0.79, N = 4)
- Jetson TX2 Max-Q: 39.15 (SE +/- 0.64, N = 3)
- Jetson Nano: 20.96 (SE +/- 0.36, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 493.22 (SE +/- 0.81, N = 3)
- Jetson TX2 Max-P: 22.07 (SE +/- 0.03, N = 3)
- Jetson TX2 Max-Q: 17.36 (SE +/- 0.00, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 3143 (SE +/- 1.06, N = 3)
- Jetson TX2 Max-P: 301 (SE +/- 0.52, N = 3)
- Jetson TX2 Max-Q: 237 (SE +/- 1.39, N = 3)
- Jetson Nano: 128 (SE +/- 0.06, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 259.82 (SE +/- 0.26, N = 3)
- Jetson TX2 Max-P: 41.91 (SE +/- 0.07, N = 3)
- Jetson TX2 Max-Q: 32.67 (SE +/- 0.10, N = 3)
- Jetson Nano: 17.38 (SE +/- 0.01, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 1143.00 (SE +/- 2.59, N = 3)
- Jetson TX2 Max-P: 184.00 (SE +/- 2.79, N = 5)
- Jetson TX2 Max-Q: 148.00 (SE +/- 0.91, N = 3)
- Jetson Nano: 84.10 (SE +/- 0.72, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 1693.00 (SE +/- 8.72, N = 3)
- Jetson TX2 Max-P: 130.00 (SE +/- 0.74, N = 3)
- Jetson TX2 Max-Q: 104.00 (SE +/- 0.07, N = 3)
- Jetson Nano: 55.66 (SE +/- 0.18, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 394.66 (SE +/- 0.23, N = 3)
- Jetson TX2 Max-P: 15.92 (SE +/- 0.06, N = 3)
- Jetson TX2 Max-Q: 12.59 (SE +/- 0.03, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 1006.00 (SE +/- 0.21, N = 3)
- Jetson TX2 Max-P: 233.00 (SE +/- 4.50, N = 3)
- Jetson TX2 Max-Q: 179.00 (SE +/- 2.17, N = 8)
- Jetson Nano: 98.93 (SE +/- 0.19, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 475.08 (SE +/- 0.10, N = 3)
- Jetson TX2 Max-P: 19.91 (SE +/- 0.05, N = 3)
- Jetson TX2 Max-Q: 15.79 (SE +/- 0.01, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 1215.08 (SE +/- 0.25, N = 3)
- Jetson TX2 Max-P: 59.69 (SE +/- 0.04, N = 3)
- Jetson TX2 Max-Q: 47.15 (SE +/- 0.08, N = 3)
- Jetson Nano: 25.08 (SE +/- 0.06, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 265.81 (SE +/- 0.20, N = 3)
- Jetson TX2 Max-P: 14.32 (SE +/- 0.25, N = 4)
- Jetson TX2 Max-Q: 11.45 (SE +/- 0.23, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 636.00 (SE +/- 1.23, N = 3)
- Jetson TX2 Max-P: 111.00 (SE +/- 1.22, N = 3)
- Jetson TX2 Max-Q: 86.08 (SE +/- 0.86, N = 3)
- Jetson Nano: 46.51 (SE +/- 0.02, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 303.78 (SE +/- 0.46, N = 3)
- Jetson TX2 Max-P: 17.56 (SE +/- 0.25, N = 6)
- Jetson TX2 Max-Q: 14.24 (SE +/- 0.20, N = 5)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 372.73 (SE +/- 1.59, N = 3)
- Jetson TX2 Max-P: 18.29 (SE +/- 0.14, N = 3)
- Jetson TX2 Max-Q: 14.50 (SE +/- 0.15, N = 3)
- Jetson Nano: 7.76 (SE +/- 0.03, N = 3)

CUDA Mini-Nbody

Test: Original

CUDA Mini-Nbody 2015-11-10 - (NBody^2)/s, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 47.13 (SE +/- 0.00, N = 3)
- Jetson TX2 Max-P: 8.24 (SE +/- 0.01, N = 3)
- Jetson TX2 Max-Q: 6.77 (SE +/- 0.03, N = 3)
- Jetson Nano: 4.07 (SE +/- 0.01, N = 3)

GLmark2

Resolution: 1920 x 1080

GLmark2 - Score, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 2876
- Jetson Nano: 646

TTSIOD 3D Renderer

Performance / Cost - Phong Rendering With Soft-Shadow Mapping

TTSIOD 3D Renderer 2.3b - FPS Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.10
- Jetson TX2 Max-P: 0.08
- Jetson TX2 Max-Q: 0.05
- Raspberry Pi 3 Model B+: 0.50
- ASUS TinkerBoard: 0.32
- Jetson TX1 Max-P: 0.09
- ODROID-XU4: 0.68
- Jetson Nano: 0.41
- ODROID-N2: 0.88

Reported costs, used for this and all following performance-per-dollar graphs: Jetson AGX Xavier $1299; Jetson TX2 Max-P $599; Jetson TX2 Max-Q $599; Raspberry Pi 3 Model B+ $35; ASUS TinkerBoard $66; Jetson TX1 Max-P $499; ODROID-XU4 $62; Jetson Nano $99; ODROID-N2 $64.95. No cost was reported for the ODROID-C2, so it is absent from these graphs.
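The per-dollar figures derive directly from the result graphs above and the reported costs: a more-is-better score is divided by the board price, while a fewer-is-better time is multiplied by it. A short Python sketch reproducing two of the published numbers:

    # Reported board costs in USD, from the list above.
    COSTS = {"ODROID-N2": 64.95, "Raspberry Pi 3 Model B+": 35.0}

    def per_dollar(value, cost, more_is_better=True):
        # FPS-per-dollar style metric when more is better,
        # seconds-x-dollar style metric when fewer is better.
        return value / cost if more_is_better else value * cost

    # TTSIOD: 57.42 FPS on the ODROID-N2 -> 0.88 FPS per dollar.
    print(round(per_dollar(57.42, COSTS["ODROID-N2"]), 2))
    # C-Ray: 2030 seconds on the Raspberry Pi 3 B+ -> 71050 seconds x dollar.
    print(round(per_dollar(2030, COSTS["Raspberry Pi 3 Model B+"], more_is_better=False)))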

7-Zip Compression

Performance / Cost - Compress Speed Test

7-Zip Compression 16.02 - MIPS Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 14.79
- Jetson TX2 Max-P: 9.34
- Jetson TX2 Max-Q: 5.50
- Raspberry Pi 3 Model B+: 57.51
- ASUS TinkerBoard: 42.97
- Jetson TX1 Max-P: 9.03
- ODROID-XU4: 66.45
- Jetson Nano: 40.90
- ODROID-N2: 91.92

C-Ray

Performance / Cost - Total Time - 4K, 16 Rays Per Pixel

C-Ray 1.1 - Seconds x Dollar, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 461145.00
- Jetson TX2 Max-P: 350415.00
- Jetson TX2 Max-Q: 520531.00
- Raspberry Pi 3 Model B+: 71050.00
- ASUS TinkerBoard: 113388.00
- Jetson TX1 Max-P: 375747.00
- ODROID-XU4: 51274.00
- Jetson Nano: 91179.00
- ODROID-N2: 31940.46

Rust Prime Benchmark

Performance / Cost - Prime Number Test To 200,000,000

Rust Prime Benchmark - Seconds x Dollar, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 42048.63
- Jetson TX2 Max-P: 62871.04
- Jetson TX2 Max-Q: 101979.75
- Raspberry Pi 3 Model B+: 38419.15
- ASUS TinkerBoard: 120189.30
- Jetson TX1 Max-P: 64096.55
- ODROID-XU4: 35594.82
- Jetson Nano: 14868.81
- ODROID-N2: 4748.49

Zstd Compression

Performance / Cost - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

Zstd Compression 1.3.4 - Seconds x Dollar, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 103997.94
- Jetson TX2 Max-P: 86837.03
- Jetson TX2 Max-Q: 152026.20
- Raspberry Pi 3 Model B+: 11978.05
- ASUS TinkerBoard: 32776.92
- Jetson TX1 Max-P: 72754.20
- Jetson Nano: 12857.13
- ODROID-N2: 9875.00

FLAC Audio Encoding

Performance / Cost - WAV To FLAC

FLAC Audio Encoding 1.3.2 - Seconds x Dollar, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 70756.53
- Jetson TX2 Max-P: 38976.93
- Jetson TX2 Max-Q: 62463.72
- Raspberry Pi 3 Model B+: 11883.55
- ASUS TinkerBoard: 18417.30
- Jetson TX1 Max-P: 39520.80
- ODROID-XU4: 6015.86
- Jetson Nano: 10372.23
- ODROID-N2: 6208.57

PyBench

Performance / Cost - Total For Average Test Times

PyBench 2018-02-16 - Milliseconds x Dollar, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 3906093.00
- Jetson TX2 Max-P: 3239392.00
- Jetson TX2 Max-Q: 5232265.00
- Raspberry Pi 3 Model B+: 731955.00
- ASUS TinkerBoard: 759132.00
- Jetson TX1 Max-P: 3163161.00
- ODROID-XU4: 310558.00
- Jetson Nano: 701316.00
- ODROID-N2: 339753.45

CUDA Mini-Nbody

Performance / Cost - Test: Original

CUDA Mini-Nbody 2015-11-10 - (NBody^2)/s Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.04
- Jetson TX2 Max-P: 0.01
- Jetson TX2 Max-Q: 0.01
- Jetson Nano: 0.04

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.16
- Jetson TX2 Max-P: 0.05
- Jetson TX2 Max-Q: 0.04
- Jetson Nano: 0.14

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.23
- Jetson TX2 Max-P: 0.03
- Jetson TX2 Max-Q: 0.02

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.13
- Jetson TX2 Max-P: 0.04
- Jetson TX2 Max-Q: 0.04
- Jetson Nano: 0.12

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.20
- Jetson TX2 Max-P: 0.02
- Jetson TX2 Max-Q: 0.02

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.19
- Jetson TX2 Max-P: 0.06
- Jetson TX2 Max-Q: 0.05

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.37
- Jetson TX2 Max-P: 0.03
- Jetson TX2 Max-Q: 0.03

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.16
- Jetson TX2 Max-P: 0.05
- Jetson TX2 Max-Q: 0.04

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.30
- Jetson TX2 Max-P: 0.03
- Jetson TX2 Max-Q: 0.02

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.92
- Jetson TX2 Max-P: 0.44
- Jetson TX2 Max-Q: 0.36
- Jetson Nano: 1.19

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.88
- Jetson TX2 Max-P: 0.31
- Jetson TX2 Max-Q: 0.25
- Jetson Nano: 0.85

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 1.57
- Jetson TX2 Max-P: 0.77
- Jetson TX2 Max-Q: 0.62
- Jetson Nano: 2.03

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 2.42
- Jetson TX2 Max-P: 0.50
- Jetson TX2 Max-Q: 0.40
- Jetson Nano: 1.29

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.42
- Jetson TX2 Max-P: 0.15
- Jetson TX2 Max-Q: 0.12
- Jetson Nano: 0.41

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.69
- Jetson TX2 Max-P: 0.08
- Jetson TX2 Max-Q: 0.07
- Jetson Nano: 0.21

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.61
- Jetson TX2 Max-P: 0.33
- Jetson TX2 Max-Q: 0.26
- Jetson Nano: 0.84

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.88
- Jetson TX2 Max-P: 0.19
- Jetson TX2 Max-Q: 0.15
- Jetson Nano: 0.48

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.17
- Jetson TX2 Max-P: 0.06
- Jetson TX2 Max-Q: 0.05
- Jetson Nano: 0.16

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.29
- Jetson TX2 Max-P: 0.03
- Jetson TX2 Max-Q: 0.02
- Jetson Nano: 0.08

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.49
- Jetson TX2 Max-P: 0.19
- Jetson TX2 Max-Q: 0.14
- Jetson Nano: 0.47

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.94
- Jetson TX2 Max-P: 0.10
- Jetson TX2 Max-Q: 0.08
- Jetson Nano: 0.25

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.77
- Jetson TX2 Max-P: 0.39
- Jetson TX2 Max-Q: 0.30
- Jetson Nano: 1.00

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 1.30
- Jetson TX2 Max-P: 0.22
- Jetson TX2 Max-Q: 0.17
- Jetson Nano: 0.56

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.20
- Jetson TX2 Max-P: 0.07
- Jetson TX2 Max-Q: 0.05
- Jetson Nano: 0.18

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference - Images Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.38
- Jetson TX2 Max-P: 0.04
- Jetson TX2 Max-Q: 0.03

OpenCV Benchmark

Performance / Cost

OpenCV Benchmark 3.3.0 - Seconds x Dollar, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 166272.00
- Jetson TX2 Max-P: 177304.00
- Jetson TX2 Max-Q: 295307.00
- Raspberry Pi 3 Model B+: 95.90
- ODROID-XU4: 32283.40
- Jetson Nano: 26832.96
- ODROID-N2: 15786.10

GLmark2

Performance / Cost - Resolution: 1920 x 1080

GLmark2 - Score Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 2.21
- Jetson Nano: 6.53

LeelaChessZero

Performance / Cost - Backend: BLAS

LeelaChessZero 0.20.1 - Nodes Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.04
- Jetson Nano: 0.16
- ODROID-N2: 0.38

LeelaChessZero

Performance / Cost - Backend: CUDA + cuDNN

LeelaChessZero 0.20.1 - Nodes Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 0.73
- Jetson Nano: 1.41

LeelaChessZero

Performance / Cost - Backend: CUDA + cuDNN FP16

LeelaChessZero 0.20.1 - Nodes Per Second Per Dollar, More Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 1.94

Tesseract OCR

Performance / Cost - Time To OCR 7 Images

Tesseract OCR 4.0.0-beta.1 - Seconds x Dollar, Fewer Is Better (OpenBenchmarking.org)

- Jetson AGX Xavier: 93450.06
- ODROID-XU4: 11200.92
- Jetson Nano: 13134.33
- ODROID-N2: 7191.91

Meta Performance Per Dollar

Performance Per Dollar

Meta Performance Per Dollar - Performance Per Dollar, More Is Better (OpenBenchmarking.org)

- ODROID-N2: 19.17

1. ODROID-N2: $64.95 reported cost. Average value: 1244.91.
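The meta score is simply the average result value divided by the reported cost: 1244.91 / 64.95 ≈ 19.17.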


Phoronix Test Suite v10.8.5