tflite

ARMv8 Cortex-A78E testing with an NVIDIA Jetson Orin NX Engineering Developer Kit (36.3.0-gcid-36191598 BIOS) and Orin graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2408266-NE-TFLITE07592
Result identifier: all the tests
Date: August 26
Test duration: 19 minutes


System details:

Processor: ARMv8 Cortex-A78E @ 1.98GHz (8 Cores)
Motherboard: NVIDIA Jetson Orin NX Engineering Developer Kit (36.3.0-gcid-36191598 BIOS)
Memory: 16GB
Disk: 128GB FORESEE XP1000F128G
Graphics: Orin
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 22.04
Kernel: 5.15.136-tegra (aarch64)
Desktop: GNOME Shell 42.9
Display Server: X Server 1.21.1.4
Display Driver: NVIDIA
Vulkan: 1.3.251
Compiler: GCC 11.4.0 + CUDA 12.2
File-System: ext4
Screen Resolution: 6582x1234

System notes: Transparent Huge Pages: always; Scaling Governor: tegra194 performance

Security notes: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of __user pointer sanitization; spectre_v2: Mitigation of CSV2 but not BHB; srbds: Not affected; tsx_async_abort: Not affected

Result summary for "all the tests", average inference time in microseconds:

tensorflow-lite: Inception V4: 128840
tensorflow-lite: Inception ResNet V2: 117694
tensorflow-lite: NASNet Mobile: 22728.5
tensorflow-lite: SqueezeNet: 8917.33
tensorflow-lite: Mobilenet Float: 6786.49
tensorflow-lite: Mobilenet Quant: 3301.55

TensorFlow Lite

This is a benchmark of TensorFlow Lite, the TensorFlow machine learning implementation for mobile, IoT, edge, and similar use cases. Linux support is currently limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
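An average-inference-time metric like the one reported below is typically produced by invoking the model repeatedly and averaging the wall-clock time per invocation. The sketch below illustrates that measurement pattern only; `run_inference` is a hypothetical stand-in for a real TFLite `interpreter.invoke()` call, not part of this test profile.

```python
import time
import statistics

def run_inference():
    # Hypothetical stand-in for a TFLite interpreter.invoke() call;
    # it just burns a little CPU so the example runs anywhere.
    sum(i * i for i in range(10_000))

def average_inference_us(fn, runs=3):
    """Time `fn` over `runs` invocations; return the mean in microseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1e6)
    return statistics.mean(samples)

print(f"mean inference time: {average_inference_us(run_inference):.2f} us")
```

The actual harness is driven by the Phoronix Test Suite command shown at the top of this page, not by this snippet.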

TensorFlow Lite 2022-05-18, per-model results (Microseconds, Fewer Is Better):

Model: Inception V4, all the tests: 128840 (SE +/- 626.35, N = 3)

Model: Inception ResNet V2, all the tests: 117694 (SE +/- 254.65, N = 3)

Model: NASNet Mobile, all the tests: 22728.5 (SE +/- 140.88, N = 3)

Model: SqueezeNet, all the tests: 8917.33 (SE +/- 43.22, N = 3)

Model: Mobilenet Float, all the tests: 6786.49 (SE +/- 22.54, N = 3)

Model: Mobilenet Quant, all the tests: 3301.55 (SE +/- 4.21, N = 3)
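The "SE +/-" figure attached to each result is the standard error of the mean over N = 3 runs: the sample standard deviation divided by the square root of N. A quick sketch of that calculation; the three per-run values are hypothetical, since the result file publishes only the mean and SE, not the raw samples.

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample stddev / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical three-run sample centered on the published
# Inception V4 mean of 128840 microseconds.
runs = [128200.0, 128840.0, 129480.0]
print(round(statistics.mean(runs), 2))   # 128840.0
print(round(standard_error(runs), 2))    # 369.5
```

A smaller SE relative to the mean (e.g. 4.21 on 3301.55 for Mobilenet Quant) indicates the three runs were tightly clustered.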