Jetson Nano Developer Kit

ODROID-N2

HTML result view exported from: https://openbenchmarking.org/result/1904211-HV-ODROID75005&sro.

System Details

Only the fields recorded for each system in the export are listed below.

Jetson TX1 Max-P: ARMv8 rev 1 @ 1.73GHz (4 Cores); jetson_tx1 motherboard; 4096MB memory; 16GB 016G32 disk; NVIDIA Tegra X1 graphics; VE228 monitor; Ubuntu 16.04; 4.4.38-tegra (aarch64) kernel; Unity 7.4.5; X Server 1.18.4; NVIDIA 28.1.0 display driver; OpenGL 4.5.0; Vulkan 1.0.8; GCC 5.4.0 20160609; ext4 file-system; 1920x1080 screen resolution.

Jetson TX2 Max-Q: ARMv8 rev 3 @ 1.27GHz (4 Cores / 6 Threads); quill motherboard; 8192MB memory; 31GB 032G34 disk; NVIDIA TEGRA graphics; Unity 7.4.0; NVIDIA 28.2.1 display driver; GCC 5.4.0 20160609 + CUDA 9.0.

Jetson TX2 Max-P: ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads).

Jetson AGX Xavier: ARMv8 rev 0 @ 2.27GHz (8 Cores); jetson-xavier motherboard; 16384MB memory; 31GB HBG4a2 disk; NVIDIA Tegra Xavier graphics; Ubuntu 18.04; 4.9.108-tegra (aarch64) kernel; Unity 7.5.0; X Server 1.19.6; NVIDIA 31.0.2 display driver; OpenGL 4.6.0; Vulkan 1.1.76; GCC 7.3.0 + CUDA 10.0.

Jetson Nano: ARMv8 rev 1 @ 1.43GHz (4 Cores); jetson-nano motherboard; 4096MB memory; 32GB GB1QT disk; NVIDIA TEGRA graphics; Realtek RTL8111/8168/8411 network; 4.9.140-tegra (aarch64) kernel; NVIDIA 1.0.0 display driver; Vulkan 1.1.85.

Raspberry Pi 3 Model B+: ARMv7 rev 4 @ 1.40GHz (4 Cores); BCM2835 Raspberry Pi 3 Model B Plus Rev 1.3 motherboard; 926MB memory; 32GB GB2MW disk; BCM2708 graphics; Raspbian 9.6; 4.19.23-v7+ (armv7l) kernel; LXDE; X Server 1.19.2; GCC 6.3.0 20170516; 656x416 screen resolution.

ASUS TinkerBoard: ARMv7 rev 1 @ 1.80GHz (4 Cores); Rockchip (Device Tree) motherboard; 2048MB memory; 32GB GB1QT disk; Debian 9.0; 4.4.16-00006-g4431f98-dirty (armv7l) kernel; X Server 1.18.4; 1024x768 screen resolution.

ODROID-XU4: ARMv7 rev 3 @ 1.50GHz (8 Cores); ODROID-XU4 Hardkernel Odroid XU4 motherboard; 16GB AJTD4R disk; llvmpipe 2GB graphics; VE228 monitor; Ubuntu 18.04; 4.14.37-135 (armv7l) kernel; X Server 1.19.6; OpenGL 3.3 Mesa 18.0.0-rc5 (LLVM 6.0 128 bits); GCC 7.3.0; 1920x1080 screen resolution.

ARMv8 Cortex-A73 (ODROID-N2): ARMv8 Cortex-A73 @ 1.90GHz (6 Cores); Hardkernel ODROID-N2 motherboard; 4096MB memory; OSD disk; 4.9.156-14 (aarch64) kernel; 1920x2160 screen resolution.

Compiler Details

Identical GCC configure strings are grouped by system.

- Jetson TX1 Max-P / Jetson TX2 Max-Q / Jetson TX2 Max-P: --build=aarch64-linux-gnu --disable-browser-plugin --disable-libquadmath --disable-werror --enable-checking=release --enable-clocale=gnu --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --target=aarch64-linux-gnu --with-arch-directory=aarch64 --with-default-libstdcxx-abi=new -v
- Jetson AGX Xavier / Jetson Nano / ARMv8 Cortex-A73: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
- Raspberry Pi 3 Model B+: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv6 --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfp --with-target-system-zlib -v
- ASUS TinkerBoard: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-mode=thumb --with-target-system-zlib -v
- ODROID-XU4: --build=arm-linux-gnueabihf --disable-libitm --disable-libquadmath --disable-libquadmath-support --disable-sjlj-exceptions --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-gcc-major-version-only --with-mode=thumb --with-target-system-zlib -v

Processor Details (Scaling Driver / Governor)

- Jetson TX1 Max-P: tegra-cpufreq interactive
- Jetson TX2 Max-Q / Jetson TX2 Max-P / Jetson AGX Xavier: tegra_cpufreq schedutil
- Jetson Nano: tegra-cpufreq schedutil
- Raspberry Pi 3 Model B+: BCM2835 Freq ondemand
- ASUS TinkerBoard: cpufreq-dt interactive
- ODROID-XU4: cpufreq-dt ondemand
- ARMv8 Cortex-A73: arm-big-little performance

Python Details

- Jetson TX1 Max-P / Jetson TX2 Max-Q / Jetson TX2 Max-P: Python 2.7.12 + Python 3.5.2
- Jetson AGX Xavier / Jetson Nano / ODROID-XU4 / ARMv8 Cortex-A73: Python 2.7.15rc1 + Python 3.6.7
- Raspberry Pi 3 Model B+ / ASUS TinkerBoard: Python 2.7.13 + Python 3.5.3

Kernel Details

- ODROID-XU4: usbhid.quirks=0x0eef:0x0005:0x0004

Graphics Details

- ODROID-XU4: EXA
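The scaling driver and governor pairs in the Processor Details above come from the Linux cpufreq sysfs interface. A minimal Python sketch of reading them on one of these boards (assuming the standard sysfs layout; this snippet is not part of the original result file):

from pathlib import Path

# Standard Linux cpufreq sysfs location (assumed present on these boards)
CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(name: str) -> str:
    # Reads e.g. scaling_driver ("tegra-cpufreq") or scaling_governor ("schedutil")
    path = CPUFREQ / name
    return path.read_text().strip() if path.exists() else "unknown"

if __name__ == "__main__":
    # On the Jetson Nano above this would print: tegra-cpufreq schedutil
    print(read("scaling_driver"), read("scaling_governor"))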

Results Overview

Benchmarks run (detailed results below): CUDA Mini-Nbody (Test: Original); GLmark2 (1920 x 1080); NVIDIA TensorRT Inference (VGG16, VGG19, AlexNet, ResNet50, GoogleNet and ResNet152, each at FP16 and INT8 precision with batch sizes 4 and 32, DLA cores disabled); LeelaChessZero (BLAS, CUDA + cuDNN, CUDA + cuDNN FP16); TTSIOD 3D Renderer (Phong Rendering With Soft-Shadow Mapping); 7-Zip Compression (Compress Speed Test); C-Ray (Total Time - 4K, 16 Rays Per Pixel); Rust Prime Benchmark (Prime Number Test To 200,000,000); Zstd Compression (Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19); FLAC Audio Encoding (WAV To FLAC); OpenCV Benchmark; PyBench (Total For Average Test Times); and Tesseract OCR (Time To OCR 7 Images), across the nine systems detailed above.

CUDA Mini-Nbody

Test: Original

(NBody^2)/s, More Is Better - CUDA Mini-Nbody 2015-11-10 (OpenBenchmarking.org)
Jetson AGX Xavier: 47.13 (SE +/- 0.00, N = 3)
Jetson Nano: 4.07 (SE +/- 0.01, N = 3)
Jetson TX2 Max-P: 8.24 (SE +/- 0.01, N = 3)
Jetson TX2 Max-Q: 6.77 (SE +/- 0.03, N = 3)
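Each bar in the original graphs carried an SE and N annotation, preserved in the listings here. SE is read as the standard error of the mean over the N runs (an assumption; the export itself does not define the abbreviation). A small Python illustration with hypothetical run values:

import math
import statistics

def standard_error(samples):
    # Sample standard deviation divided by the square root of N
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Three hypothetical runs clustered around the Jetson Nano result above (4.07)
print(round(standard_error([4.06, 4.07, 4.08]), 2))  # 0.01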

GLmark2

Resolution: 1920 x 1080

Score, More Is Better - GLmark2 (OpenBenchmarking.org)
Jetson AGX Xavier: 2876
Jetson Nano: 646

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 208.76 (SE +/- 0.10, N = 3)
Jetson Nano: 14.35 (SE +/- 0.02, N = 2)
Jetson TX2 Max-P: 32.64 (SE +/- 0.50, N = 4)
Jetson TX2 Max-Q: 25.99 (SE +/- 0.13, N = 3)
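The Images Per Second figures are presumably throughput over batched inference: batch size times iterations divided by elapsed wall-clock time. A schematic Python timing loop (illustrative only, not the actual TensorRT harness; run_batch is a hypothetical stand-in for one forward pass):

import time

def images_per_second(run_batch, batch_size: int, iterations: int) -> float:
    # Total images processed divided by elapsed wall-clock seconds
    start = time.perf_counter()
    for _ in range(iterations):
        run_batch()  # one forward pass over batch_size images
    elapsed = time.perf_counter() - start
    return batch_size * iterations / elapsed

if __name__ == "__main__":
    # Dummy batch that just sleeps 1 ms; prints roughly 4 * 100 / 0.1 = ~4000
    print(round(images_per_second(lambda: time.sleep(0.001), 4, 100)))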

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 303.78 (SE +/- 0.46, N = 3)
Jetson TX2 Max-P: 17.56 (SE +/- 0.25, N = 6)
Jetson TX2 Max-Q: 14.24 (SE +/- 0.20, N = 5)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 172.50 (SE +/- 0.50, N = 3)
Jetson Nano: 11.59 (SE +/- 0.05, N = 2)
Jetson TX2 Max-P: 26.56 (SE +/- 0.38, N = 3)
Jetson TX2 Max-Q: 21.04 (SE +/- 0.34, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 265.81 (SE +/- 0.20, N = 3)
Jetson TX2 Max-P: 14.32 (SE +/- 0.25, N = 4)
Jetson TX2 Max-Q: 11.45 (SE +/- 0.23, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 247.95 (SE +/- 0.12, N = 3)
Jetson TX2 Max-P: 36.87 (SE +/- 0.31, N = 3)
Jetson TX2 Max-Q: 29.83 (SE +/- 0.18, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 475.08 (SE +/- 0.10, N = 3)
Jetson TX2 Max-P: 19.91 (SE +/- 0.05, N = 3)
Jetson TX2 Max-Q: 15.79 (SE +/- 0.01, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 203.96 (SE +/- 0.04, N = 3)
Jetson TX2 Max-P: 29.83 (SE +/- 0.05, N = 3)
Jetson TX2 Max-Q: 23.94 (SE +/- 0.07, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 394.66 (SE +/- 0.23, N = 3)
Jetson TX2 Max-P: 15.92 (SE +/- 0.06, N = 3)
Jetson TX2 Max-Q: 12.59 (SE +/- 0.03, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 1200 (SE +/- 1.82, N = 3)
Jetson Nano: 118 (SE +/- 2.12, N = 12)
Jetson TX2 Max-P: 264 (SE +/- 7.77, N = 12)
Jetson TX2 Max-Q: 216 (SE +/- 3.03, N = 6)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 1143.00 (SE +/- 2.59, N = 3)
Jetson Nano: 84.10 (SE +/- 0.72, N = 3)
Jetson TX2 Max-P: 184.00 (SE +/- 2.79, N = 5)
Jetson TX2 Max-Q: 148.00 (SE +/- 0.91, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 2038 (SE +/- 2.07, N = 3)
Jetson Nano: 201 (SE +/- 1.59, N = 3)
Jetson TX2 Max-P: 462 (SE +/- 7.68, N = 12)
Jetson TX2 Max-Q: 374 (SE +/- 2.82, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 3143 (SE +/- 1.06, N = 3)
Jetson Nano: 128 (SE +/- 0.06, N = 3)
Jetson TX2 Max-P: 301 (SE +/- 0.52, N = 3)
Jetson TX2 Max-Q: 237 (SE +/- 1.39, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 547.50 (SE +/- 0.03, N = 3)
Jetson Nano: 41.04 (SE +/- 0.25, N = 3)
Jetson TX2 Max-P: 92.28 (SE +/- 1.32, N = 12)
Jetson TX2 Max-Q: 72.01 (SE +/- 1.10, N = 12)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 902.78 (SE +/- 1.86, N = 3)
Jetson Nano: 20.96 (SE +/- 0.36, N = 3)
Jetson TX2 Max-P: 49.97 (SE +/- 0.79, N = 4)
Jetson TX2 Max-Q: 39.15 (SE +/- 0.64, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 796.00 (SE +/- 2.48, N = 3)
Jetson Nano: 83.37 (SE +/- 0.70, N = 3)
Jetson TX2 Max-P: 197.00 (SE +/- 2.27, N = 3)
Jetson TX2 Max-Q: 156.00 (SE +/- 1.90, N = 12)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 1146.00 (SE +/- 4.31, N = 3)
Jetson Nano: 47.82 (SE +/- 0.60, N = 3)
Jetson TX2 Max-P: 113.00 (SE +/- 1.65, N = 3)
Jetson TX2 Max-Q: 88.88 (SE +/- 1.32, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 224.19 (SE +/- 0.22, N = 3)
Jetson Nano: 15.76 (SE +/- 0.04, N = 3)
Jetson TX2 Max-P: 35.11 (SE +/- 0.36, N = 3)
Jetson TX2 Max-Q: 27.34 (SE +/- 0.34, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 372.73 (SE +/- 1.59, N = 3)
Jetson Nano: 7.76 (SE +/- 0.03, N = 3)
Jetson TX2 Max-P: 18.29 (SE +/- 0.14, N = 3)
Jetson TX2 Max-Q: 14.50 (SE +/- 0.15, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 636.00 (SE +/- 1.23, N = 3)
Jetson Nano: 46.51 (SE +/- 0.02, N = 3)
Jetson TX2 Max-P: 111.00 (SE +/- 1.22, N = 3)
Jetson TX2 Max-Q: 86.08 (SE +/- 0.86, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 1215.08 (SE +/- 0.25, N = 3)
Jetson Nano: 25.08 (SE +/- 0.06, N = 3)
Jetson TX2 Max-P: 59.69 (SE +/- 0.04, N = 3)
Jetson TX2 Max-Q: 47.15 (SE +/- 0.08, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 1006.00 (SE +/- 0.21, N = 3)
Jetson Nano: 98.93 (SE +/- 0.19, N = 3)
Jetson TX2 Max-P: 233.00 (SE +/- 4.50, N = 3)
Jetson TX2 Max-Q: 179.00 (SE +/- 2.17, N = 8)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 1693.00 (SE +/- 8.72, N = 3)
Jetson Nano: 55.66 (SE +/- 0.18, N = 3)
Jetson TX2 Max-P: 130.00 (SE +/- 0.74, N = 3)
Jetson TX2 Max-Q: 104.00 (SE +/- 0.07, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 259.82 (SE +/- 0.26, N = 3)
Jetson Nano: 17.38 (SE +/- 0.01, N = 3)
Jetson TX2 Max-P: 41.91 (SE +/- 0.07, N = 3)
Jetson TX2 Max-Q: 32.67 (SE +/- 0.10, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 493.22 (SE +/- 0.81, N = 3)
Jetson TX2 Max-P: 22.07 (SE +/- 0.03, N = 3)
Jetson TX2 Max-Q: 17.36 (SE +/- 0.00, N = 3)

LeelaChessZero

Backend: BLAS

Nodes Per Second, More Is Better - LeelaChessZero 0.20.1 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 24.39 (SE +/- 0.10, N = 3)
Jetson AGX Xavier: 47.62 (SE +/- 0.62, N = 7)
Jetson Nano: 15.37 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -lpthread -lz

LeelaChessZero

Backend: CUDA + cuDNN

Nodes Per Second, More Is Better - LeelaChessZero 0.20.1 (OpenBenchmarking.org)
Jetson AGX Xavier: 953 (SE +/- 6.14, N = 3)
Jetson Nano: 140 (SE +/- 0.26, N = 3)
1. (CXX) g++ options: -lpthread -lz

LeelaChessZero

Backend: CUDA + cuDNN FP16

Nodes Per Second, More Is Better - LeelaChessZero 0.20.1 (OpenBenchmarking.org)
Jetson AGX Xavier: 2515.01 (SE +/- 7.60, N = 3)
1. (CXX) g++ options: -lpthread -lz

TTSIOD 3D Renderer

Phong Rendering With Soft-Shadow Mapping

FPS, More Is Better - TTSIOD 3D Renderer 2.3b (OpenBenchmarking.org)
ARMv8 Cortex-A73: 57.42 (SE +/- 0.05, N = 3)
ASUS TinkerBoard: 21.22 (SE +/- 0.27, N = 9)
Jetson AGX Xavier: 133.00 (SE +/- 1.63, N = 12)
Jetson Nano: 40.94 (SE +/- 0.11, N = 3)
Jetson TX1 Max-P: 45.09 (SE +/- 0.04, N = 3)
Jetson TX2 Max-P: 49.26 (SE +/- 0.15, N = 3)
Jetson TX2 Max-Q: 28.85 (SE +/- 0.46, N = 4)
ODROID-XU4: 41.96 (SE +/- 0.97, N = 9)
Raspberry Pi 3 Model B+: 17.66 (SE +/- 0.16, N = 3)
1. (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -lSDL -fopenmp -fwhole-program -lstdc++

7-Zip Compression

Compress Speed Test

MIPS, More Is Better - 7-Zip Compression 16.02 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 5970 (SE +/- 2.40, N = 3)
ASUS TinkerBoard: 2836 (SE +/- 34.93, N = 3)
Jetson AGX Xavier: 19212 (SE +/- 274.18, N = 12)
Jetson Nano: 4049 (SE +/- 18.00, N = 3)
Jetson TX1 Max-P: 4508 (SE +/- 13.43, N = 3)
Jetson TX2 Max-P: 5593 (SE +/- 20.85, N = 3)
Jetson TX2 Max-Q: 3294 (SE +/- 13.05, N = 3)
ODROID-XU4: 4120 (SE +/- 89.16, N = 12)
Raspberry Pi 3 Model B+: 2013 (SE +/- 23.74, N = 11)
1. (CXX) g++ options: -pipe -lpthread

C-Ray

Total Time - 4K, 16 Rays Per Pixel

Seconds, Fewer Is Better - C-Ray 1.1 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 492 (SE +/- 0.25, N = 3)
ASUS TinkerBoard: 1718 (SE +/- 22.09, N = 3)
Jetson AGX Xavier: 355 (SE +/- 7.17, N = 9)
Jetson Nano: 921 (SE +/- 0.35, N = 3)
Jetson TX1 Max-P: 753 (SE +/- 10.23, N = 3)
Jetson TX2 Max-P: 585 (SE +/- 49.09, N = 9)
Jetson TX2 Max-Q: 869 (SE +/- 1.44, N = 3)
ODROID-XU4: 827 (SE +/- 29.65, N = 9)
Raspberry Pi 3 Model B+: 2030 (SE +/- 2.46, N = 3)
1. (CC) gcc options: -lm -lpthread -O3

Rust Prime Benchmark

Prime Number Test To 200,000,000

Seconds, Fewer Is Better - Rust Prime Benchmark (OpenBenchmarking.org)
ARMv8 Cortex-A73: 73.11 (SE +/- 0.02, N = 3)
ASUS TinkerBoard: 1821.05 (SE +/- 187.90, N = 6)
Jetson AGX Xavier: 32.37 (SE +/- 0.00, N = 3)
Jetson Nano: 150.19 (SE +/- 0.22, N = 3)
Jetson TX1 Max-P: 128.45 (SE +/- 0.77, N = 3)
Jetson TX2 Max-P: 104.96 (SE +/- 0.04, N = 3)
Jetson TX2 Max-Q: 170.25 (SE +/- 0.09, N = 3)
ODROID-XU4: 574.11 (SE +/- 0.37, N = 3)
Raspberry Pi 3 Model B+: 1097.69 (SE +/- 1.55, N = 3)
1. (CC) gcc options: -pie -nodefaultlibs -ldl -lrt -lpthread -lgcc_s -lc -lm -lutil

Zstd Compression

Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

Seconds, Fewer Is Better - Zstd Compression 1.3.4 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 152.04 (SE +/- 1.77, N = 3)
ASUS TinkerBoard: 496.62 (SE +/- 2.16, N = 3)
Jetson AGX Xavier: 80.06 (SE +/- 0.91, N = 3)
Jetson Nano: 129.87 (SE +/- 0.23, N = 3)
Jetson TX1 Max-P: 145.80 (SE +/- 0.42, N = 3)
Jetson TX2 Max-P: 144.97 (SE +/- 0.29, N = 3)
Jetson TX2 Max-Q: 253.80 (SE +/- 1.02, N = 3)
Raspberry Pi 3 Model B+: 342.23 (SE +/- 1.03, N = 3)
1. (CC) gcc options: -O3 -pthread -lz -llzma

FLAC Audio Encoding

WAV To FLAC

Seconds, Fewer Is Better - FLAC Audio Encoding 1.3.2 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 95.59 (SE +/- 0.27, N = 5)
ASUS TinkerBoard: 279.05 (SE +/- 2.51, N = 5)
Jetson AGX Xavier: 54.47 (SE +/- 0.61, N = 5)
Jetson Nano: 104.77 (SE +/- 0.83, N = 5)
Jetson TX1 Max-P: 79.20 (SE +/- 0.74, N = 5)
Jetson TX2 Max-P: 65.07 (SE +/- 0.15, N = 5)
Jetson TX2 Max-Q: 104.28 (SE +/- 0.18, N = 5)
ODROID-XU4: 97.03 (SE +/- 0.31, N = 5)
Raspberry Pi 3 Model B+: 339.53 (SE +/- 0.98, N = 5)
1. (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm

OpenCV Benchmark

Seconds, Fewer Is Better - OpenCV Benchmark 3.3.0 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 243.05 (SE +/- 0.26, N = 3)
Jetson AGX Xavier: 128.00 (SE +/- 1.57, N = 3)
Jetson Nano: 271.04 (SE +/- 4.66, N = 9)
Jetson TX2 Max-P: 296.00 (SE +/- 0.27, N = 3)
Jetson TX2 Max-Q: 493.00 (SE +/- 5.74, N = 3)
ODROID-XU4: 520.70 (SE +/- 5.31, N = 3)
Raspberry Pi 3 Model B+: 2.74 (no SE reported in the export)
1. (CXX) g++ options: -std=c++11 -rdynamic

PyBench

Total For Average Test Times

Milliseconds, Fewer Is Better - PyBench 2018-02-16 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 5231 (SE +/- 9.24, N = 3)
ASUS TinkerBoard: 11502 (SE +/- 854.75, N = 9)
Jetson AGX Xavier: 3007 (SE +/- 4.67, N = 3)
Jetson Nano: 7084 (SE +/- 37.23, N = 3)
Jetson TX1 Max-P: 6339 (SE +/- 18.55, N = 3)
Jetson TX2 Max-P: 5408 (SE +/- 33.86, N = 3)
Jetson TX2 Max-Q: 8735 (SE +/- 42.52, N = 3)
ODROID-XU4: 5009 (SE +/- 30.99, N = 3)
Raspberry Pi 3 Model B+: 20913 (SE +/- 43.80, N = 3)

Tesseract OCR

Time To OCR 7 Images

Seconds, Fewer Is Better - Tesseract OCR 4.0.0-beta.1 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 110.73 (SE +/- 0.05, N = 3)
Jetson AGX Xavier: 71.94 (SE +/- 0.89, N = 3)
Jetson Nano: 132.67 (SE +/- 1.50, N = 3)
ODROID-XU4: 180.66 (SE +/- 1.38, N = 3)

TTSIOD 3D Renderer

Performance / Cost - Phong Rendering With Soft-Shadow Mapping

FPS Per Dollar, More Is Better - TTSIOD 3D Renderer 2.3b (OpenBenchmarking.org)
ARMv8 Cortex-A73: 0.88
ASUS TinkerBoard: 0.32
Jetson AGX Xavier: 0.10
Jetson Nano: 0.41
Jetson TX1 Max-P: 0.09
Jetson TX2 Max-P: 0.08
Jetson TX2 Max-Q: 0.05
ODROID-XU4: 0.68
Raspberry Pi 3 Model B+: 0.50
Reported costs: ARMv8 Cortex-A73 $64.95, ASUS TinkerBoard $66, Jetson AGX Xavier $1299, Jetson Nano $99, Jetson TX1 Max-P $499, Jetson TX2 Max-P $599, Jetson TX2 Max-Q $599, ODROID-XU4 $62, Raspberry Pi 3 Model B+ $35. These costs apply to all Performance / Cost sections below.
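The Performance / Cost sections that follow appear to derive directly from the raw results and the reported costs: More Is Better metrics are divided by the board cost, while Fewer Is Better metrics (seconds, milliseconds) are multiplied by it, so that lower remains better. A quick Python check against two published figures:

def per_dollar(result: float, cost: float) -> float:
    # More Is Better metrics: result divided by reported cost
    return result / cost

def cost_weighted(result: float, cost: float) -> float:
    # Fewer Is Better metrics: result multiplied by reported cost
    return result * cost

# TTSIOD on the ARMv8 Cortex-A73 (ODROID-N2): 57.42 FPS at $64.95
print(round(per_dollar(57.42, 64.95), 2))  # 0.88
# C-Ray on the ASUS TinkerBoard: 1718 seconds at $66
print(cost_weighted(1718, 66))  # 113388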

7-Zip Compression

Performance / Cost - Compress Speed Test

MIPS Per Dollar, More Is Better - 7-Zip Compression 16.02 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 91.92
ASUS TinkerBoard: 42.97
Jetson AGX Xavier: 14.79
Jetson Nano: 40.90
Jetson TX1 Max-P: 9.03
Jetson TX2 Max-P: 9.34
Jetson TX2 Max-Q: 5.50
ODROID-XU4: 66.45
Raspberry Pi 3 Model B+: 57.51
Based on the reported costs listed above.

C-Ray

Performance / Cost - Total Time - 4K, 16 Rays Per Pixel

Seconds x Dollar, Fewer Is Better - C-Ray 1.1 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 31940.46
ASUS TinkerBoard: 113388.00
Jetson AGX Xavier: 461145.00
Jetson Nano: 91179.00
Jetson TX1 Max-P: 375747.00
Jetson TX2 Max-P: 350415.00
Jetson TX2 Max-Q: 520531.00
ODROID-XU4: 51274.00
Raspberry Pi 3 Model B+: 71050.00
Based on the reported costs listed above.

Rust Prime Benchmark

Performance / Cost - Prime Number Test To 200,000,000

Seconds x Dollar, Fewer Is Better - Rust Prime Benchmark (OpenBenchmarking.org)
ARMv8 Cortex-A73: 4748.49
ASUS TinkerBoard: 120189.30
Jetson AGX Xavier: 42048.63
Jetson Nano: 14868.81
Jetson TX1 Max-P: 64096.55
Jetson TX2 Max-P: 62871.04
Jetson TX2 Max-Q: 101979.75
ODROID-XU4: 35594.82
Raspberry Pi 3 Model B+: 38419.15
Based on the reported costs listed above.

Zstd Compression

Performance / Cost - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

Seconds x Dollar, Fewer Is Better - Zstd Compression 1.3.4 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 9875.00
ASUS TinkerBoard: 32776.92
Jetson AGX Xavier: 103997.94
Jetson Nano: 12857.13
Jetson TX1 Max-P: 72754.20
Jetson TX2 Max-P: 86837.03
Jetson TX2 Max-Q: 152026.20
Raspberry Pi 3 Model B+: 11978.05
Based on the reported costs listed above.

FLAC Audio Encoding

Performance / Cost - WAV To FLAC

Seconds x Dollar, Fewer Is Better - FLAC Audio Encoding 1.3.2 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 6208.57
ASUS TinkerBoard: 18417.30
Jetson AGX Xavier: 70756.53
Jetson Nano: 10372.23
Jetson TX1 Max-P: 39520.80
Jetson TX2 Max-P: 38976.93
Jetson TX2 Max-Q: 62463.72
ODROID-XU4: 6015.86
Raspberry Pi 3 Model B+: 11883.55
Based on the reported costs listed above.

PyBench

Performance / Cost - Total For Average Test Times

Milliseconds x Dollar, Fewer Is Better - PyBench 2018-02-16 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 339753.45
ASUS TinkerBoard: 759132.00
Jetson AGX Xavier: 3906093.00
Jetson Nano: 701316.00
Jetson TX1 Max-P: 3163161.00
Jetson TX2 Max-P: 3239392.00
Jetson TX2 Max-Q: 5232265.00
ODROID-XU4: 310558.00
Raspberry Pi 3 Model B+: 731955.00
Based on the reported costs listed above.

CUDA Mini-Nbody

Performance / Cost - Test: Original

(NBody^2)/s Per Dollar, More Is Better - CUDA Mini-Nbody 2015-11-10 (OpenBenchmarking.org)
Jetson AGX Xavier: 0.04
Jetson Nano: 0.04
Jetson TX2 Max-P: 0.01
Jetson TX2 Max-Q: 0.01
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.16
Jetson Nano: 0.14
Jetson TX2 Max-P: 0.05
Jetson TX2 Max-Q: 0.04
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.23
Jetson TX2 Max-P: 0.03
Jetson TX2 Max-Q: 0.02
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.13
Jetson Nano: 0.12
Jetson TX2 Max-P: 0.04
Jetson TX2 Max-Q: 0.04
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.20
Jetson TX2 Max-P: 0.02
Jetson TX2 Max-Q: 0.02
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.19
Jetson TX2 Max-P: 0.06
Jetson TX2 Max-Q: 0.05
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.37
Jetson TX2 Max-P: 0.03
Jetson TX2 Max-Q: 0.03
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.16
Jetson TX2 Max-P: 0.05
Jetson TX2 Max-Q: 0.04
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.30
Jetson TX2 Max-P: 0.03
Jetson TX2 Max-Q: 0.02
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.92
Jetson Nano: 1.19
Jetson TX2 Max-P: 0.44
Jetson TX2 Max-Q: 0.36
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.88
Jetson Nano: 0.85
Jetson TX2 Max-P: 0.31
Jetson TX2 Max-Q: 0.25
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 1.57
Jetson Nano: 2.03
Jetson TX2 Max-P: 0.77
Jetson TX2 Max-Q: 0.62
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 2.42
Jetson Nano: 1.29
Jetson TX2 Max-P: 0.50
Jetson TX2 Max-Q: 0.40
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.42
Jetson Nano: 0.41
Jetson TX2 Max-P: 0.15
Jetson TX2 Max-Q: 0.12
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.69
Jetson Nano: 0.21
Jetson TX2 Max-P: 0.08
Jetson TX2 Max-Q: 0.07
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.61
Jetson Nano: 0.84
Jetson TX2 Max-P: 0.33
Jetson TX2 Max-Q: 0.26
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.88
Jetson Nano: 0.48
Jetson TX2 Max-P: 0.19
Jetson TX2 Max-Q: 0.15
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.17
Jetson Nano: 0.16
Jetson TX2 Max-P: 0.06
Jetson TX2 Max-Q: 0.05
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.29
Jetson Nano: 0.08
Jetson TX2 Max-P: 0.03
Jetson TX2 Max-Q: 0.02
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.49
Jetson Nano: 0.47
Jetson TX2 Max-P: 0.19
Jetson TX2 Max-Q: 0.14
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.94
Jetson Nano: 0.25
Jetson TX2 Max-P: 0.10
Jetson TX2 Max-Q: 0.08
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.77
Jetson Nano: 1.00
Jetson TX2 Max-P: 0.39
Jetson TX2 Max-Q: 0.30
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 1.30
Jetson Nano: 0.56
Jetson TX2 Max-P: 0.22
Jetson TX2 Max-Q: 0.17
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.20
Jetson Nano: 0.18
Jetson TX2 Max-P: 0.07
Jetson TX2 Max-Q: 0.05
Based on the reported costs listed above.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference (OpenBenchmarking.org)
Jetson AGX Xavier: 0.38
Jetson TX2 Max-P: 0.04
Jetson TX2 Max-Q: 0.03
Based on the reported costs listed above.

OpenCV Benchmark

Performance / Cost

Seconds x Dollar, Fewer Is Better - OpenCV Benchmark 3.3.0 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 15786.10
Jetson AGX Xavier: 166272.00
Jetson Nano: 26832.96
Jetson TX2 Max-P: 177304.00
Jetson TX2 Max-Q: 295307.00
ODROID-XU4: 32283.40
Raspberry Pi 3 Model B+: 95.90
Based on the reported costs listed above.

GLmark2

Performance / Cost - Resolution: 1920 x 1080

Score Per Dollar, More Is Better - GLmark2 (OpenBenchmarking.org)
Jetson AGX Xavier: 2.21
Jetson Nano: 6.53
Based on the reported costs listed above.

LeelaChessZero

Performance / Cost - Backend: BLAS

Nodes Per Second Per Dollar, More Is Better - LeelaChessZero 0.20.1 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 0.38
Jetson AGX Xavier: 0.04
Jetson Nano: 0.16
Based on the reported costs listed above.

LeelaChessZero

Performance / Cost - Backend: CUDA + cuDNN

Nodes Per Second Per Dollar, More Is Better - LeelaChessZero 0.20.1 (OpenBenchmarking.org)
Jetson AGX Xavier: 0.73
Jetson Nano: 1.41
Based on the reported costs listed above.

LeelaChessZero

Performance / Cost - Backend: CUDA + cuDNN FP16

Nodes Per Second Per Dollar, More Is Better - LeelaChessZero 0.20.1 (OpenBenchmarking.org)
Jetson AGX Xavier: 1.94
Based on the reported costs listed above.

Tesseract OCR

Performance / Cost - Time To OCR 7 Images

Seconds x Dollar, Fewer Is Better - Tesseract OCR 4.0.0-beta.1 (OpenBenchmarking.org)
ARMv8 Cortex-A73: 7191.91
Jetson AGX Xavier: 93450.06
Jetson Nano: 13134.33
ODROID-XU4: 11200.92
Based on the reported costs listed above.

Meta Performance Per Dollar

Performance Per Dollar

Performance Per Dollar, More Is Better - Meta Performance Per Dollar (OpenBenchmarking.org)
ARMv8 Cortex-A73: 19.17
1. ARMv8 Cortex-A73: $64.95 reported cost; average result value 1244.91.
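The meta score is consistent with dividing the reported average value by the board cost; how the 1244.91 average is aggregated across the individual tests is not stated in the export. A one-line Python check:

# 1244.91 average result value over the $64.95 reported cost
print(round(1244.91 / 64.95, 2))  # 19.17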


Phoronix Test Suite v10.8.5