ODROID-N2 Benchmark Comparison
ODROID-N2 benchmarks collected for a future article. HTML result view exported from https://openbenchmarking.org/result/1904251-HV-ODROIDN2760&rdt&grw.
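A result listing like the one below can be re-run locally by passing the OpenBenchmarking.org result ID to the Phoronix Test Suite, which fetches the comparison and merges your own run into it (a sketch; assumes the Phoronix Test Suite is installed and the result is still hosted):

```shell
# Reproduce this comparison and append local results to it
phoronix-test-suite benchmark 1904251-HV-ODROIDN2760
```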
FLAC Audio Encoding
WAV To FLAC
Tesseract OCR
Time To OCR 7 Images
NVIDIA TensorRT Inference
Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
LeelaChessZero
Backend: BLAS
NVIDIA TensorRT Inference
Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
LeelaChessZero
Backend: CUDA + cuDNN
NVIDIA TensorRT Inference
Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
LeelaChessZero
Backend: CUDA + cuDNN FP16
Rust Prime Benchmark
Prime Number Test To 200,000,000
7-Zip Compression
Compress Speed Test
Zstd Compression
Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19
C-Ray
Total Time - 4K, 16 Rays Per Pixel
TTSIOD 3D Renderer
Phong Rendering With Soft-Shadow Mapping
OpenCV Benchmark
PyBench
Total For Average Test Times
NVIDIA TensorRT Inference
Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
CUDA Mini-Nbody
Test: Original
GLmark2
Resolution: 1920 x 1080
TTSIOD 3D Renderer
Performance / Cost - Phong Rendering With Soft-Shadow Mapping
7-Zip Compression
Performance / Cost - Compress Speed Test
C-Ray
Performance / Cost - Total Time - 4K, 16 Rays Per Pixel
Rust Prime Benchmark
Performance / Cost - Prime Number Test To 200,000,000
Zstd Compression
Performance / Cost - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19
FLAC Audio Encoding
Performance / Cost - WAV To FLAC
PyBench
Performance / Cost - Total For Average Test Times
CUDA Mini-Nbody
Performance / Cost - Test: Original
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
OpenCV Benchmark
Performance / Cost
GLmark2
Performance / Cost - Resolution: 1920 x 1080
LeelaChessZero
Performance / Cost - Backend: BLAS
LeelaChessZero
Performance / Cost - Backend: CUDA + cuDNN
LeelaChessZero
Performance / Cost - Backend: CUDA + cuDNN FP16
Tesseract OCR
Performance / Cost - Time To OCR 7 Images
Meta Performance Per Dollar
Performance Per Dollar
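The "Performance / Cost" entries above scale each raw result by the hardware price to give a per-dollar figure. A minimal sketch of that calculation follows; the exact formula OpenBenchmarking.org applies is an assumption here, not taken from the export:

```python
def perf_per_dollar(result: float, cost_usd: float, higher_is_better: bool = True) -> float:
    """Approximate a performance-per-dollar figure for one benchmark result.

    For higher-is-better results (e.g. MIPS, FPS) the score is divided by
    the hardware cost. For lower-is-better results (e.g. seconds) the score
    is inverted first so that a larger per-dollar value is still better.
    NOTE: this is an assumed formula, not OpenBenchmarking.org's exact one.
    """
    if cost_usd <= 0:
        raise ValueError("cost must be positive")
    value = result if higher_is_better else 1.0 / result
    return value / cost_usd

# Example: a $63 board scoring 6000 MIPS in 7-Zip (higher is better),
# and a 120-second C-Ray run (lower is better).
print(perf_per_dollar(6000.0, 63.0))
print(perf_per_dollar(120.0, 63.0, higher_is_better=False))
```

The hypothetical $63 price and example scores are placeholders for illustration only; they are not figures from this result file.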
Phoronix Test Suite v10.8.5