ODROID-N2 Benchmark Comparison

ODROID-N2 benchmarks for a future article.

HTML result view exported from: https://openbenchmarking.org/result/1904251-HV-ODROIDN2760&sro&grt.

Systems Under Test

Jetson TX1 Max-P:
  Processor: ARMv8 rev 1 @ 1.73GHz (4 Cores)
  Motherboard: jetson_tx1
  Memory: 4096MB
  Disk: 16GB 016G32
  Graphics: NVIDIA Tegra X1
  Monitor: VE228
  OS: Ubuntu 16.04
  Kernel: 4.4.38-tegra (aarch64)
  Desktop: Unity 7.4.5
  Display Server: X Server 1.18.4
  Display Driver: NVIDIA 28.1.0
  OpenGL: 4.5.0
  Vulkan: 1.0.8
  Compiler: GCC 5.4.0 20160609
  File-System: ext4
  Screen Resolution: 1920x1080

Jetson TX2 Max-Q:
  Processor: ARMv8 rev 3 @ 1.27GHz (4 Cores / 6 Threads)
  Motherboard: quill
  Memory: 8192MB
  Disk: 31GB 032G34
  Graphics: NVIDIA TEGRA
  Desktop: Unity 7.4.0
  Display Driver: NVIDIA 28.2.1
  Compiler: GCC 5.4.0 20160609 + CUDA 9.0

Jetson TX2 Max-P:
  Processor: ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads)
  (otherwise as Jetson TX2 Max-Q)

Jetson AGX Xavier:
  Processor: ARMv8 rev 0 @ 2.27GHz (8 Cores)
  Motherboard: jetson-xavier
  Memory: 16384MB
  Disk: 31GB HBG4a2
  Graphics: NVIDIA Tegra Xavier
  OS: Ubuntu 18.04
  Kernel: 4.9.108-tegra (aarch64)
  Desktop: Unity 7.5.0
  Display Server: X Server 1.19.6
  Display Driver: NVIDIA 31.0.2
  OpenGL: 4.6.0
  Vulkan: 1.1.76
  Compiler: GCC 7.3.0 + CUDA 10.0

Jetson Nano:
  Processor: ARMv8 rev 1 @ 1.43GHz (4 Cores)
  Motherboard: jetson-nano
  Memory: 4096MB
  Disk: 32GB GB1QT
  Graphics: NVIDIA TEGRA
  Network: Realtek RTL8111/8168/8411
  Kernel: 4.9.140-tegra (aarch64)
  Display Driver: NVIDIA 1.0.0
  Vulkan: 1.1.85

Raspberry Pi 3 Model B+:
  Processor: ARMv7 rev 4 @ 1.40GHz (4 Cores)
  Motherboard: BCM2835 Raspberry Pi 3 Model B Plus Rev 1.3
  Memory: 926MB
  Disk: 32GB GB2MW
  Graphics: BCM2708
  OS: Raspbian 9.6
  Kernel: 4.19.23-v7+ (armv7l)
  Desktop: LXDE
  Display Server: X Server 1.19.2
  Compiler: GCC 6.3.0 20170516
  Screen Resolution: 656x416

ASUS TinkerBoard:
  Processor: ARMv7 rev 1 @ 1.80GHz (4 Cores)
  Motherboard: Rockchip (Device Tree)
  Memory: 2048MB
  Disk: 32GB GB1QT
  OS: Debian 9.0
  Kernel: 4.4.16-00006-g4431f98-dirty (armv7l)
  Display Server: X Server 1.18.4
  Screen Resolution: 1024x768

ODROID-XU4:
  Processor: ARMv7 rev 3 @ 1.50GHz (8 Cores)
  Motherboard: ODROID-XU4 Hardkernel Odroid XU4
  Disk: 16GB AJTD4R
  Graphics: llvmpipe 2GB
  Monitor: VE228
  OS: Ubuntu 18.04
  Kernel: 4.14.37-135 (armv7l)
  Display Server: X Server 1.19.6
  OpenGL: 3.3 Mesa 18.0.0-rc5 (LLVM 6.0 128 bits)
  Compiler: GCC 7.3.0
  Screen Resolution: 1920x1080

ODROID-N2:
  Processor: ARMv8 Cortex-A73 @ 1.90GHz (6 Cores)
  Motherboard: Hardkernel ODROID-N2
  Memory: 4096MB
  Disk: OSD
  Kernel: 4.9.156-14 (aarch64)
  Screen Resolution: 1920x2160

ODROID-C2:
  Processor: Amlogic ARMv8 Cortex-A53 @ 1.54GHz (4 Cores)
  Motherboard: ODROID-C2
  Memory: 2048MB
  Disk: 32GB GB1QT
  Kernel: 3.16.57-20 (aarch64)
  Display Server: X Server 1.19.6
  Screen Resolution: 1920x1080

Compiler Details
- Jetson TX1 Max-P: --build=aarch64-linux-gnu --disable-browser-plugin --disable-libquadmath --disable-werror --enable-checking=release --enable-clocale=gnu --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --target=aarch64-linux-gnu --with-arch-directory=aarch64 --with-default-libstdcxx-abi=new -v
- Jetson TX2 Max-Q: same configure flags as Jetson TX1 Max-P
- Jetson TX2 Max-P: same configure flags as Jetson TX1 Max-P
- Jetson AGX Xavier: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
- Jetson Nano: same configure flags as Jetson AGX Xavier
- Raspberry Pi 3 Model B+: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv6 --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfp --with-target-system-zlib -v
- ASUS TinkerBoard: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-mode=thumb --with-target-system-zlib -v
- ODROID-XU4: --build=arm-linux-gnueabihf --disable-libitm --disable-libquadmath --disable-libquadmath-support --disable-sjlj-exceptions --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-gcc-major-version-only --with-mode=thumb --with-target-system-zlib -v
- ODROID-N2: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
- ODROID-C2: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-as=/usr/bin/aarch64-linux-gnu-as --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-ld=/usr/bin/aarch64-linux-gnu-ld -v

Processor Details (scaling governors)
- Jetson TX1 Max-P: tegra-cpufreq interactive
- Jetson TX2 Max-Q: tegra_cpufreq schedutil
- Jetson TX2 Max-P: tegra_cpufreq schedutil
- Jetson AGX Xavier: tegra_cpufreq schedutil
- Jetson Nano: tegra-cpufreq schedutil
- Raspberry Pi 3 Model B+: BCM2835 Freq ondemand
- ASUS TinkerBoard: cpufreq-dt interactive
- ODROID-XU4: cpufreq-dt ondemand
- ODROID-N2: arm-big-little performance
- ODROID-C2: meson_cpufreq interactive

Python Details
- Jetson TX1 Max-P: Python 2.7.12 + Python 3.5.2
- Jetson TX2 Max-Q: Python 2.7.12 + Python 3.5.2
- Jetson TX2 Max-P: Python 2.7.12 + Python 3.5.2
- Jetson AGX Xavier: Python 2.7.15rc1 + Python 3.6.7
- Jetson Nano: Python 2.7.15rc1 + Python 3.6.7
- Raspberry Pi 3 Model B+: Python 2.7.13 + Python 3.5.3
- ASUS TinkerBoard: Python 2.7.13 + Python 3.5.3
- ODROID-XU4: Python 2.7.15rc1 + Python 3.6.7
- ODROID-N2: Python 2.7.15rc1 + Python 3.6.7
- ODROID-C2: Python 2.7.15rc1 + Python 3.6.7

Kernel Details
- ODROID-XU4: usbhid.quirks=0x0eef:0x0005:0x0004

Graphics Details
- ODROID-XU4: EXA

Tests included in this comparison:
  compress-7zip: Compress Speed Test
  c-ray: Total Time - 4K, 16 Rays Per Pixel
  cuda-mini-nbody: Original
  encode-flac: WAV To FLAC
  glmark2: 1920 x 1080
  lczero: BLAS; CUDA + cuDNN; CUDA + cuDNN FP16
  tensorrt-inference: VGG16, VGG19, AlexNet, GoogleNet, ResNet50, ResNet152 (FP16 and INT8; batch sizes 4 and 32; DLA cores disabled)
  opencv-bench
  pybench: Total For Average Test Times
  rust-prime: Prime Number Test To 200,000,000
  tesseract-ocr: Time To OCR 7 Images
  ttsiod-renderer: Phong Rendering With Soft-Shadow Mapping
  compress-zstd: Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

Systems compared: Jetson TX1 Max-P, Jetson TX2 Max-Q, Jetson TX2 Max-P, Jetson AGX Xavier, Jetson Nano, Raspberry Pi 3 Model B+, ASUS TinkerBoard, ODROID-XU4, ODROID-N2, ODROID-C2. Per-test results are listed below.

7-Zip Compression

Compress Speed Test

OpenBenchmarking.org - MIPS, More Is Better - 7-Zip Compression 16.02 - Compress Speed Test

    ASUS TinkerBoard            2836    SE +/- 34.93, N = 3
    Jetson AGX Xavier          19212    SE +/- 274.18, N = 12
    Jetson Nano                 4049    SE +/- 18.00, N = 3
    Jetson TX1 Max-P            4508    SE +/- 13.43, N = 3
    Jetson TX2 Max-P            5593    SE +/- 20.85, N = 3
    Jetson TX2 Max-Q            3294    SE +/- 13.05, N = 3
    ODROID-C2                   2121    SE +/- 7.36, N = 3
    ODROID-N2                   5970    SE +/- 2.40, N = 3
    ODROID-XU4                  4120    SE +/- 89.16, N = 12
    Raspberry Pi 3 Model B+     2013    SE +/- 23.74, N = 11

1. (CXX) g++ options: -pipe -lpthread
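Each result above carries a standard error (SE) over N runs. As a minimal sketch, SE of the mean is the sample standard deviation divided by the square root of the run count; the sample values here are illustrative, not the recorded runs:

```python
import statistics

def standard_error(samples):
    """SE of the mean: sample standard deviation divided by sqrt(N)."""
    n = len(samples)
    return statistics.stdev(samples) / n ** 0.5

# Hypothetical 7-Zip MIPS runs; a small SE indicates stable, repeatable runs.
runs = [5968, 5970, 5972]
print(round(standard_error(runs), 2))  # -> 1.15
```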

7-Zip Compression

Performance / Cost - Compress Speed Test

OpenBenchmarking.org - MIPS Per Dollar, More Is Better - 7-Zip Compression 16.02 - Performance / Cost - Compress Speed Test

    ASUS TinkerBoard ($66)             42.97
    Jetson AGX Xavier ($1299)          14.79
    Jetson Nano ($99)                  40.90
    Jetson TX1 Max-P ($499)             9.03
    Jetson TX2 Max-P ($599)             9.34
    Jetson TX2 Max-Q ($599)             5.50
    ODROID-N2 ($64.95)                 91.92
    ODROID-XU4 ($62)                   66.45
    Raspberry Pi 3 Model B+ ($35)      57.51

Reported costs shown in parentheses.
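For more-is-better results, the per-dollar figure is simply the raw score divided by the board's reported cost; a quick sketch reproducing the ODROID-N2 entry:

```python
# Reported 7-Zip result and board cost from this comparison
mips = 5970          # ODROID-N2 Compress Speed Test result
cost_usd = 64.95     # ODROID-N2 reported cost

mips_per_dollar = mips / cost_usd
print(round(mips_per_dollar, 2))  # -> 91.92, matching the chart
```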

C-Ray

Total Time - 4K, 16 Rays Per Pixel

OpenBenchmarking.org - Seconds, Fewer Is Better - C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel

    ASUS TinkerBoard            1718    SE +/- 22.09, N = 3
    Jetson AGX Xavier            355    SE +/- 7.17, N = 9
    Jetson Nano                  921    SE +/- 0.35, N = 3
    Jetson TX1 Max-P             753    SE +/- 10.23, N = 3
    Jetson TX2 Max-P             585    SE +/- 49.09, N = 9
    Jetson TX2 Max-Q             869    SE +/- 1.44, N = 3
    ODROID-C2                   1535    SE +/- 0.16, N = 3
    ODROID-N2                    492    SE +/- 0.25, N = 3
    ODROID-XU4                   827    SE +/- 29.65, N = 9
    Raspberry Pi 3 Model B+     2030    SE +/- 2.46, N = 3

1. (CC) gcc options: -lm -lpthread -O3

C-Ray

Performance / Cost - Total Time - 4K, 16 Rays Per Pixel

OpenBenchmarking.org - Seconds x Dollar, Fewer Is Better - C-Ray 1.1 - Performance / Cost - Total Time - 4K, 16 Rays Per Pixel

    ASUS TinkerBoard ($66)            113388.00
    Jetson AGX Xavier ($1299)         461145.00
    Jetson Nano ($99)                  91179.00
    Jetson TX1 Max-P ($499)           375747.00
    Jetson TX2 Max-P ($599)           350415.00
    Jetson TX2 Max-Q ($599)           520531.00
    ODROID-N2 ($64.95)                 31940.46
    ODROID-XU4 ($62)                   51274.00
    Raspberry Pi 3 Model B+ ($35)      71050.00

Reported costs shown in parentheses.
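For fewer-is-better results the cost weighting is multiplicative: runtime times cost, so being cheaper and being faster both lower the score. A sketch with two of the boards above:

```python
# C-Ray render time (seconds) and reported cost (USD) for two boards
boards = {
    "ASUS TinkerBoard": (1718.0, 66.0),
    "Raspberry Pi 3 Model B+": (2030.0, 35.0),
}

for name, (seconds, cost) in boards.items():
    # Seconds x Dollar: lower means better value for a time-based test
    print(name, round(seconds * cost, 2))
```

This is why the Raspberry Pi, despite being the slowest board here, still beats several Jetsons on the cost-weighted chart.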

CUDA Mini-Nbody

Test: Original

OpenBenchmarking.org - (NBody^2)/s, More Is Better - CUDA Mini-Nbody 2015-11-10 - Test: Original

    Jetson AGX Xavier     47.13    SE +/- 0.00, N = 3
    Jetson Nano            4.07    SE +/- 0.01, N = 3
    Jetson TX2 Max-P       8.24    SE +/- 0.01, N = 3
    Jetson TX2 Max-Q       6.77    SE +/- 0.03, N = 3

CUDA Mini-Nbody

Performance / Cost - Test: Original

OpenBenchmarking.org - (NBody^2)/s Per Dollar, More Is Better - CUDA Mini-Nbody 2015-11-10 - Performance / Cost - Test: Original

    Jetson AGX Xavier ($1299)    0.04
    Jetson Nano ($99)            0.04
    Jetson TX2 Max-P ($599)      0.01
    Jetson TX2 Max-Q ($599)      0.01

Reported costs shown in parentheses.

FLAC Audio Encoding

WAV To FLAC

OpenBenchmarking.org - Seconds, Fewer Is Better - FLAC Audio Encoding 1.3.2 - WAV To FLAC

    ASUS TinkerBoard            279.05    SE +/- 2.51, N = 5
    Jetson AGX Xavier            54.47    SE +/- 0.61, N = 5
    Jetson Nano                 104.77    SE +/- 0.83, N = 5
    Jetson TX1 Max-P             79.20    SE +/- 0.74, N = 5
    Jetson TX2 Max-P             65.07    SE +/- 0.15, N = 5
    Jetson TX2 Max-Q            104.28    SE +/- 0.18, N = 5
    ODROID-C2                   262.31    SE +/- 1.49, N = 5
    ODROID-N2                    95.59    SE +/- 0.27, N = 5
    ODROID-XU4                   97.03    SE +/- 0.31, N = 5
    Raspberry Pi 3 Model B+     339.53    SE +/- 0.98, N = 5

1. (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm

FLAC Audio Encoding

Performance / Cost - WAV To FLAC

OpenBenchmarking.org - Seconds x Dollar, Fewer Is Better - FLAC Audio Encoding 1.3.2 - Performance / Cost - WAV To FLAC

    ASUS TinkerBoard ($66)            18417.30
    Jetson AGX Xavier ($1299)         70756.53
    Jetson Nano ($99)                 10372.23
    Jetson TX1 Max-P ($499)           39520.80
    Jetson TX2 Max-P ($599)           38976.93
    Jetson TX2 Max-Q ($599)           62463.72
    ODROID-N2 ($64.95)                 6208.57
    ODROID-XU4 ($62)                   6015.86
    Raspberry Pi 3 Model B+ ($35)     11883.55

Reported costs shown in parentheses.

GLmark2

Resolution: 1920 x 1080

OpenBenchmarking.org - Score, More Is Better - GLmark2 - Resolution: 1920 x 1080

    Jetson AGX Xavier    2876
    Jetson Nano           646

GLmark2

Performance / Cost - Resolution: 1920 x 1080

OpenBenchmarking.org - Score Per Dollar, More Is Better - GLmark2 - Performance / Cost - Resolution: 1920 x 1080

    Jetson AGX Xavier ($1299)    2.21
    Jetson Nano ($99)            6.53

Reported costs shown in parentheses.

LeelaChessZero

Backend: BLAS

OpenBenchmarking.org - Nodes Per Second, More Is Better - LeelaChessZero 0.20.1 - Backend: BLAS

    Jetson AGX Xavier    47.62    SE +/- 0.62, N = 7
    Jetson Nano          15.37    SE +/- 0.03, N = 3
    ODROID-C2             7.33    SE +/- 0.09, N = 7
    ODROID-N2            24.39    SE +/- 0.10, N = 3

1. (CXX) g++ options: -lpthread -lz

LeelaChessZero

Backend: CUDA + cuDNN

OpenBenchmarking.org - Nodes Per Second, More Is Better - LeelaChessZero 0.20.1 - Backend: CUDA + cuDNN

    Jetson AGX Xavier    953    SE +/- 6.14, N = 3
    Jetson Nano          140    SE +/- 0.26, N = 3

1. (CXX) g++ options: -lpthread -lz

LeelaChessZero

Backend: CUDA + cuDNN FP16

OpenBenchmarking.org - Nodes Per Second, More Is Better - LeelaChessZero 0.20.1 - Backend: CUDA + cuDNN FP16

    Jetson AGX Xavier    2515.01    SE +/- 7.60, N = 3

1. (CXX) g++ options: -lpthread -lz

LeelaChessZero

Performance / Cost - Backend: BLAS

OpenBenchmarking.org - Nodes Per Second Per Dollar, More Is Better - LeelaChessZero 0.20.1 - Performance / Cost - Backend: BLAS

    Jetson AGX Xavier ($1299)    0.04
    Jetson Nano ($99)            0.16
    ODROID-N2 ($64.95)           0.38

Reported costs shown in parentheses.

LeelaChessZero

Performance / Cost - Backend: CUDA + cuDNN

OpenBenchmarking.org - Nodes Per Second Per Dollar, More Is Better - LeelaChessZero 0.20.1 - Performance / Cost - Backend: CUDA + cuDNN

    Jetson AGX Xavier ($1299)    0.73
    Jetson Nano ($99)            1.41

Reported costs shown in parentheses.

LeelaChessZero

Performance / Cost - Backend: CUDA + cuDNN FP16

OpenBenchmarking.org - Nodes Per Second Per Dollar, More Is Better - LeelaChessZero 0.20.1 - Performance / Cost - Backend: CUDA + cuDNN FP16

    Jetson AGX Xavier ($1299)    1.94

Reported cost shown in parentheses.

Meta Performance Per Dollar

Performance Per Dollar

OpenBenchmarking.org - Performance Per Dollar, More Is Better - Meta Performance Per Dollar

    ODROID-N2    19.17

1. $64.95 reported value. Average value: 1244.91.
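The meta figure appears to follow from the footnote: the reported average value divided by the board's cost. Assuming that reading, it reproduces exactly:

```python
average_result = 1244.91   # "Average value" reported in the footnote
cost_usd = 64.95           # ODROID-N2 reported cost

print(round(average_result / cost_usd, 2))  # -> 19.17, as charted
```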

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier    208.76    SE +/- 0.10, N = 3
    Jetson Nano           14.35    SE +/- 0.02, N = 2
    Jetson TX2 Max-P      32.64    SE +/- 0.50, N = 4
    Jetson TX2 Max-Q      25.99    SE +/- 0.13, N = 3

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier    303.78    SE +/- 0.46, N = 3
    Jetson TX2 Max-P      17.56    SE +/- 0.25, N = 6
    Jetson TX2 Max-Q      14.24    SE +/- 0.20, N = 5

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier    172.50    SE +/- 0.50, N = 3
    Jetson Nano           11.59    SE +/- 0.05, N = 2
    Jetson TX2 Max-P      26.56    SE +/- 0.38, N = 3
    Jetson TX2 Max-Q      21.04    SE +/- 0.34, N = 3

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier    265.81    SE +/- 0.20, N = 3
    Jetson TX2 Max-P      14.32    SE +/- 0.25, N = 4
    Jetson TX2 Max-Q      11.45    SE +/- 0.23, N = 3

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier    247.95    SE +/- 0.12, N = 3
    Jetson TX2 Max-P      36.87    SE +/- 0.31, N = 3
    Jetson TX2 Max-Q      29.83    SE +/- 0.18, N = 3

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier    475.08    SE +/- 0.10, N = 3
    Jetson TX2 Max-P      19.91    SE +/- 0.05, N = 3
    Jetson TX2 Max-Q      15.79    SE +/- 0.01, N = 3

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier    203.96    SE +/- 0.04, N = 3
    Jetson TX2 Max-P      29.83    SE +/- 0.05, N = 3
    Jetson TX2 Max-Q      23.94    SE +/- 0.07, N = 3

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier    394.66    SE +/- 0.23, N = 3
    Jetson TX2 Max-P      15.92    SE +/- 0.06, N = 3
    Jetson TX2 Max-Q      12.59    SE +/- 0.03, N = 3

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier    1200    SE +/- 1.82, N = 3
    Jetson Nano           118    SE +/- 2.12, N = 12
    Jetson TX2 Max-P      264    SE +/- 7.77, N = 12
    Jetson TX2 Max-Q      216    SE +/- 3.03, N = 6

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier    1143.00    SE +/- 2.59, N = 3
    Jetson Nano            84.10    SE +/- 0.72, N = 3
    Jetson TX2 Max-P      184.00    SE +/- 2.79, N = 5
    Jetson TX2 Max-Q      148.00    SE +/- 0.91, N = 3

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier    2038    SE +/- 2.07, N = 3
    Jetson Nano           201    SE +/- 1.59, N = 3
    Jetson TX2 Max-P      462    SE +/- 7.68, N = 12
    Jetson TX2 Max-Q      374    SE +/- 2.82, N = 3

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier    3143    SE +/- 1.06, N = 3
    Jetson Nano           128    SE +/- 0.06, N = 3
    Jetson TX2 Max-P      301    SE +/- 0.52, N = 3
    Jetson TX2 Max-Q      237    SE +/- 1.39, N = 3

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier    547.50    SE +/- 0.03, N = 3
    Jetson Nano           41.04    SE +/- 0.25, N = 3
    Jetson TX2 Max-P      92.28    SE +/- 1.32, N = 12
    Jetson TX2 Max-Q      72.01    SE +/- 1.10, N = 12

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier    902.78    SE +/- 1.86, N = 3
    Jetson Nano           20.96    SE +/- 0.36, N = 3
    Jetson TX2 Max-P      49.97    SE +/- 0.79, N = 4
    Jetson TX2 Max-Q      39.15    SE +/- 0.64, N = 3

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier    796.00    SE +/- 2.48, N = 3
    Jetson Nano           83.37    SE +/- 0.70, N = 3
    Jetson TX2 Max-P     197.00    SE +/- 2.27, N = 3
    Jetson TX2 Max-Q     156.00    SE +/- 1.90, N = 12

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier    1146.00    SE +/- 4.31, N = 3
    Jetson Nano            47.82    SE +/- 0.60, N = 3
    Jetson TX2 Max-P      113.00    SE +/- 1.65, N = 3
    Jetson TX2 Max-Q       88.88    SE +/- 1.32, N = 3

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier    224.19    SE +/- 0.22, N = 3
    Jetson Nano           15.76    SE +/- 0.04, N = 3
    Jetson TX2 Max-P      35.11    SE +/- 0.36, N = 3
    Jetson TX2 Max-Q      27.34    SE +/- 0.34, N = 3

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier    372.73    SE +/- 1.59, N = 3
    Jetson Nano            7.76    SE +/- 0.03, N = 3
    Jetson TX2 Max-P      18.29    SE +/- 0.14, N = 3
    Jetson TX2 Max-Q      14.50    SE +/- 0.15, N = 3

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier    636.00    SE +/- 1.23, N = 3
    Jetson Nano           46.51    SE +/- 0.02, N = 3
    Jetson TX2 Max-P     111.00    SE +/- 1.22, N = 3
    Jetson TX2 Max-Q      86.08    SE +/- 0.86, N = 3

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier    1215.08    SE +/- 0.25, N = 3
    Jetson Nano            25.08    SE +/- 0.06, N = 3
    Jetson TX2 Max-P       59.69    SE +/- 0.04, N = 3
    Jetson TX2 Max-Q       47.15    SE +/- 0.08, N = 3

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier    1006.00    SE +/- 0.21, N = 3
    Jetson Nano            98.93    SE +/- 0.19, N = 3
    Jetson TX2 Max-P      233.00    SE +/- 4.50, N = 3
    Jetson TX2 Max-Q      179.00    SE +/- 2.17, N = 8

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier    1693.00    SE +/- 8.72, N = 3
    Jetson Nano            55.66    SE +/- 0.18, N = 3
    Jetson TX2 Max-P      130.00    SE +/- 0.74, N = 3
    Jetson TX2 Max-Q      104.00    SE +/- 0.07, N = 3

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier    259.82    SE +/- 0.26, N = 3
    Jetson Nano           17.38    SE +/- 0.01, N = 3
    Jetson TX2 Max-P      41.91    SE +/- 0.07, N = 3
    Jetson TX2 Max-Q      32.67    SE +/- 0.10, N = 3

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better - NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier    493.22    SE +/- 0.81, N = 3
    Jetson TX2 Max-P      22.07    SE +/- 0.03, N = 3
    Jetson TX2 Max-Q      17.36    SE +/- 0.00, N = 3

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier ($1299)    0.16
    Jetson Nano ($99)            0.14
    Jetson TX2 Max-P ($599)      0.05
    Jetson TX2 Max-Q ($599)      0.04

Reported costs shown in parentheses.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier ($1299)    0.23
    Jetson TX2 Max-P ($599)      0.03
    Jetson TX2 Max-Q ($599)      0.02

Reported costs shown in parentheses.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier ($1299)    0.13
    Jetson Nano ($99)            0.12
    Jetson TX2 Max-P ($599)      0.04
    Jetson TX2 Max-Q ($599)      0.04

Reported costs shown in parentheses.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

    Jetson AGX Xavier ($1299)    0.20
    Jetson TX2 Max-P ($599)      0.02
    Jetson TX2 Max-Q ($599)      0.02

Reported costs shown in parentheses.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better - NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

    Jetson AGX Xavier ($1299)    0.19
    Jetson TX2 Max-P ($599)      0.06
    Jetson TX2 Max-Q ($599)      0.05

Reported costs shown in parentheses.

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.37
  Jetson TX2 Max-P ($599): 0.03
  Jetson TX2 Max-Q ($599): 0.03

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.16
  Jetson TX2 Max-P ($599): 0.05
  Jetson TX2 Max-Q ($599): 0.04

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.30
  Jetson TX2 Max-P ($599): 0.03
  Jetson TX2 Max-Q ($599): 0.02

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.92
  Jetson Nano ($99): 1.19
  Jetson TX2 Max-P ($599): 0.44
  Jetson TX2 Max-Q ($599): 0.36

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.88
  Jetson Nano ($99): 0.85
  Jetson TX2 Max-P ($599): 0.31
  Jetson TX2 Max-Q ($599): 0.25

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 1.57
  Jetson Nano ($99): 2.03
  Jetson TX2 Max-P ($599): 0.77
  Jetson TX2 Max-Q ($599): 0.62

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 2.42
  Jetson Nano ($99): 1.29
  Jetson TX2 Max-P ($599): 0.50
  Jetson TX2 Max-Q ($599): 0.40

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.42
  Jetson Nano ($99): 0.41
  Jetson TX2 Max-P ($599): 0.15
  Jetson TX2 Max-Q ($599): 0.12

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.69
  Jetson Nano ($99): 0.21
  Jetson TX2 Max-P ($599): 0.08
  Jetson TX2 Max-Q ($599): 0.07

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.61
  Jetson Nano ($99): 0.84
  Jetson TX2 Max-P ($599): 0.33
  Jetson TX2 Max-Q ($599): 0.26

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.88
  Jetson Nano ($99): 0.48
  Jetson TX2 Max-P ($599): 0.19
  Jetson TX2 Max-Q ($599): 0.15

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.17
  Jetson Nano ($99): 0.16
  Jetson TX2 Max-P ($599): 0.06
  Jetson TX2 Max-Q ($599): 0.05

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.29
  Jetson Nano ($99): 0.08
  Jetson TX2 Max-P ($599): 0.03
  Jetson TX2 Max-Q ($599): 0.02

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.49
  Jetson Nano ($99): 0.47
  Jetson TX2 Max-P ($599): 0.19
  Jetson TX2 Max-Q ($599): 0.14

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.94
  Jetson Nano ($99): 0.25
  Jetson TX2 Max-P ($599): 0.10
  Jetson TX2 Max-Q ($599): 0.08

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.77
  Jetson Nano ($99): 1.00
  Jetson TX2 Max-P ($599): 0.39
  Jetson TX2 Max-Q ($599): 0.30

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 1.30
  Jetson Nano ($99): 0.56
  Jetson TX2 Max-P ($599): 0.22
  Jetson TX2 Max-Q ($599): 0.17

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.20
  Jetson Nano ($99): 0.18
  Jetson TX2 Max-P ($599): 0.07
  Jetson TX2 Max-Q ($599): 0.05

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

Images Per Second Per Dollar (more is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 0.38
  Jetson TX2 Max-P ($599): 0.04
  Jetson TX2 Max-Q ($599): 0.03

OpenCV Benchmark

OpenCV Benchmark 3.3.0, Seconds (fewer is better):

  Jetson AGX Xavier: 128.00
  Jetson Nano: 271.04
  Jetson TX2 Max-P: 296.00
  Jetson TX2 Max-Q: 493.00
  ODROID-C2: 474.35
  ODROID-N2: 243.05
  ODROID-XU4: 520.70
  Raspberry Pi 3 Model B+: 2.74

(CXX) g++ options: -std=c++11 -rdynamic

OpenCV Benchmark

Performance / Cost

Seconds x Dollar (fewer is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 166272.00
  Jetson Nano ($99): 26832.96
  Jetson TX2 Max-P ($599): 177304.00
  Jetson TX2 Max-Q ($599): 295307.00
  ODROID-N2 ($64.95): 15786.10
  ODROID-XU4 ($62): 32283.40
  Raspberry Pi 3 Model B+ ($35): 95.90
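The composite Performance / Cost figures are simple products of the raw result and the board's reported price. A minimal sketch of the derivation, using a few of the OpenCV numbers from the tables above (board names and values taken from this result file):

```python
# OpenBenchmarking's "Seconds x Dollar" metric: a fewer-is-better
# result (runtime in seconds) multiplied by the board's reported
# cost, so lower is better on both axes.
opencv_seconds = {
    "Jetson AGX Xavier": 128.00,
    "Jetson Nano": 271.04,
    "ODROID-N2": 243.05,
    "Raspberry Pi 3 Model B+": 2.74,
}
reported_cost_usd = {
    "Jetson AGX Xavier": 1299,
    "Jetson Nano": 99,
    "ODROID-N2": 64.95,
    "Raspberry Pi 3 Model B+": 35,
}

def seconds_x_dollar(seconds: float, cost: float) -> float:
    """Composite cost metric used in the Performance / Cost charts."""
    return round(seconds * cost, 2)

for board, secs in opencv_seconds.items():
    print(board, seconds_x_dollar(secs, reported_cost_usd[board]))
```

For example, 128.00 s on the $1299 AGX Xavier yields 166272.00, matching the chart above.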

PyBench

Total For Average Test Times

PyBench 2018-02-16, Milliseconds (fewer is better):

  ASUS TinkerBoard: 11502 (SE +/- 854.75, N = 9)
  Jetson AGX Xavier: 3007 (SE +/- 4.67, N = 3)
  Jetson Nano: 7084 (SE +/- 37.23, N = 3)
  Jetson TX1 Max-P: 6339 (SE +/- 18.55, N = 3)
  Jetson TX2 Max-P: 5408 (SE +/- 33.86, N = 3)
  Jetson TX2 Max-Q: 8735 (SE +/- 42.52, N = 3)
  ODROID-C2: 12184 (SE +/- 28.15, N = 3)
  ODROID-N2: 5231 (SE +/- 9.24, N = 3)
  ODROID-XU4: 5009 (SE +/- 30.99, N = 3)
  Raspberry Pi 3 Model B+: 20913 (SE +/- 43.80, N = 3)

PyBench

Performance / Cost - Total For Average Test Times

Milliseconds x Dollar (fewer is better; reported cost in parentheses):

  ASUS TinkerBoard ($66): 759132.00
  Jetson AGX Xavier ($1299): 3906093.00
  Jetson Nano ($99): 701316.00
  Jetson TX1 Max-P ($499): 3163161.00
  Jetson TX2 Max-P ($599): 3239392.00
  Jetson TX2 Max-Q ($599): 5232265.00
  ODROID-N2 ($64.95): 339753.45
  ODROID-XU4 ($62): 310558.00
  Raspberry Pi 3 Model B+ ($35): 731955.00

Rust Prime Benchmark

Prime Number Test To 200,000,000

Rust Prime Benchmark, Seconds (fewer is better):

  ASUS TinkerBoard: 1821.05 (SE +/- 187.90, N = 6)
  Jetson AGX Xavier: 32.37 (SE +/- 0.00, N = 3)
  Jetson Nano: 150.19 (SE +/- 0.22, N = 3)
  Jetson TX1 Max-P: 128.45 (SE +/- 0.77, N = 3)
  Jetson TX2 Max-P: 104.96 (SE +/- 0.04, N = 3)
  Jetson TX2 Max-Q: 170.25 (SE +/- 0.09, N = 3)
  ODROID-C2: 125.815 (SE +/- 0.30, N = 3)
  ODROID-N2: 73.11 (SE +/- 0.02, N = 3)
  ODROID-XU4: 574.11 (SE +/- 0.37, N = 3)
  Raspberry Pi 3 Model B+: 1097.69 (SE +/- 1.55, N = 3)

(CC) gcc options: -pie -nodefaultlibs -ldl -lrt -lpthread -lgcc_s -lc -lm -lutil

Rust Prime Benchmark

Performance / Cost - Prime Number Test To 200,000,000

Seconds x Dollar (fewer is better; reported cost in parentheses):

  ASUS TinkerBoard ($66): 120189.30
  Jetson AGX Xavier ($1299): 42048.63
  Jetson Nano ($99): 14868.81
  Jetson TX1 Max-P ($499): 64096.55
  Jetson TX2 Max-P ($599): 62871.04
  Jetson TX2 Max-Q ($599): 101979.75
  ODROID-N2 ($64.95): 4748.49
  ODROID-XU4 ($62): 35594.82
  Raspberry Pi 3 Model B+ ($35): 38419.15

Tesseract OCR

Time To OCR 7 Images

Tesseract OCR 4.0.0-beta.1, Seconds (fewer is better):

  Jetson AGX Xavier: 71.94 (SE +/- 0.89, N = 3)
  Jetson Nano: 132.67 (SE +/- 1.50, N = 3)
  ODROID-C2: 220.44 (SE +/- 0.86, N = 3)
  ODROID-N2: 110.73 (SE +/- 0.05, N = 3)
  ODROID-XU4: 180.66 (SE +/- 1.38, N = 3)

Tesseract OCR

Performance / Cost - Time To OCR 7 Images

Seconds x Dollar (fewer is better; reported cost in parentheses):

  Jetson AGX Xavier ($1299): 93450.06
  Jetson Nano ($99): 13134.33
  ODROID-N2 ($64.95): 7191.91
  ODROID-XU4 ($62): 11200.92

TTSIOD 3D Renderer

Phong Rendering With Soft-Shadow Mapping

TTSIOD 3D Renderer 2.3b, FPS (more is better):

  ASUS TinkerBoard: 21.22 (SE +/- 0.27, N = 9)
  Jetson AGX Xavier: 133.00 (SE +/- 1.63, N = 12)
  Jetson Nano: 40.94 (SE +/- 0.11, N = 3)
  Jetson TX1 Max-P: 45.09 (SE +/- 0.04, N = 3)
  Jetson TX2 Max-P: 49.26 (SE +/- 0.15, N = 3)
  Jetson TX2 Max-Q: 28.85 (SE +/- 0.46, N = 4)
  ODROID-C2: 22.10 (SE +/- 0.08, N = 3)
  ODROID-N2: 57.42 (SE +/- 0.05, N = 3)
  ODROID-XU4: 41.96 (SE +/- 0.97, N = 9)
  Raspberry Pi 3 Model B+: 17.66 (SE +/- 0.16, N = 3)

(CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -lSDL -fopenmp -fwhole-program -lstdc++

TTSIOD 3D Renderer

Performance / Cost - Phong Rendering With Soft-Shadow Mapping

FPS Per Dollar (more is better; reported cost in parentheses):

  ASUS TinkerBoard ($66): 0.32
  Jetson AGX Xavier ($1299): 0.10
  Jetson Nano ($99): 0.41
  Jetson TX1 Max-P ($499): 0.09
  Jetson TX2 Max-P ($599): 0.08
  Jetson TX2 Max-Q ($599): 0.05
  ODROID-N2 ($64.95): 0.88
  ODROID-XU4 ($62): 0.68
  Raspberry Pi 3 Model B+ ($35): 0.50
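For a more-is-better metric the composite works the other way around: the raw FPS result is divided by the reported price. A small sanity-check sketch reproducing a few of the FPS-per-dollar figures from the raw TTSIOD results above (board names and values taken from this result file):

```python
# "FPS Per Dollar" is the raw TTSIOD FPS result divided by the
# board's reported cost, rounded to two decimal places.
ttsiod = {
    # board: (raw FPS, reported cost in USD)
    "ASUS TinkerBoard": (21.22, 66),
    "Jetson AGX Xavier": (133.00, 1299),
    "ODROID-N2": (57.42, 64.95),
    "Raspberry Pi 3 Model B+": (17.66, 35),
}

def fps_per_dollar(fps: float, cost: float) -> float:
    return round(fps / cost, 2)

for board, (fps, cost) in ttsiod.items():
    print(board, fps_per_dollar(fps, cost))
```

This is why the $64.95 ODROID-N2 tops the per-dollar chart (0.88) even though the AGX Xavier is more than twice as fast in absolute terms.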

Zstd Compression

Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

Zstd Compression 1.3.4, Seconds (fewer is better):

  ASUS TinkerBoard: 496.62 (SE +/- 2.16, N = 3)
  Jetson AGX Xavier: 80.06 (SE +/- 0.91, N = 3)
  Jetson Nano: 129.87 (SE +/- 0.23, N = 3)
  Jetson TX1 Max-P: 145.80 (SE +/- 0.42, N = 3)
  Jetson TX2 Max-P: 144.97 (SE +/- 0.29, N = 3)
  Jetson TX2 Max-Q: 253.80 (SE +/- 1.02, N = 3)
  ODROID-C2: 314.33 (SE +/- 1.41, N = 3)
  ODROID-N2: 152.04 (SE +/- 1.77, N = 3)
  Raspberry Pi 3 Model B+: 342.23 (SE +/- 1.03, N = 3)

(CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression

Performance / Cost - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

Seconds x Dollar (fewer is better; reported cost in parentheses):

  ASUS TinkerBoard ($66): 32776.92
  Jetson AGX Xavier ($1299): 103997.94
  Jetson Nano ($99): 12857.13
  Jetson TX1 Max-P ($499): 72754.20
  Jetson TX2 Max-P ($599): 86837.03
  Jetson TX2 Max-Q ($599): 152026.20
  ODROID-N2 ($64.95): 9875.00
  Raspberry Pi 3 Model B+ ($35): 11978.05


Phoronix Test Suite v10.8.5