tflite

ARMv8 Cortex-A78E testing with an NVIDIA Jetson Orin Nano Developer Kit (36.3.0-gcid-36191598 BIOS) and Orin graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2408268-NE-TFLITE59672
Run Details

Result Identifier: all of them
Date: August 26
Test Duration: 19 Minutes


System Details

Processor: ARMv8 Cortex-A78E @ 1.51GHz (6 Cores)
Motherboard: NVIDIA Jetson Orin Nano Developer Kit (36.3.0-gcid-36191598 BIOS)
Memory: 8GB
Disk: 1000GB Western Digital WD_BLACK SN770 1TB
Graphics: Orin
Network: Realtek RTL8111/8168/8411 + Realtek RTL8822CE 802.11ac PCIe
OS: Ubuntu 22.04
Kernel: 5.15.136-tegra (aarch64)
Desktop: GNOME Shell 42.9
Display Server: X Server 1.21.1.4
Display Driver: NVIDIA
Vulkan: 1.3.251
Compiler: GCC 11.4.0 + CUDA 12.2
File-System: ext4
Screen Resolution: 6582x1234

System Notes:
- Transparent Huge Pages: always
- Scaling Governor: tegra194 performance
- Security mitigations: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of __user pointer sanitization; spectre_v2: Mitigation of CSV2 but not BHB; srbds: Not affected; tsx_async_abort: Not affected

Result Overview ("all of them", all results in microseconds, fewer is better):

Inception V4:        201982
Inception ResNet V2: 185122
NASNet Mobile:       30431.5
Mobilenet Float:     10103.6
SqueezeNet:          14598.6
Mobilenet Quant:     5164.66

TensorFlow Lite

This is a benchmark of TensorFlow Lite, the TensorFlow implementation aimed at machine learning on mobile, IoT, edge, and similar devices. Current Linux support is limited to running on the CPU. This test profile measures average inference time. Learn more via the OpenBenchmarking.org test page.
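For context, average CPU inference time for a .tflite model can be measured with a short script like the one below. This is a minimal illustrative sketch using the Python tf.lite.Interpreter API, not the harness the test profile itself runs; the model filename and the run count of 50 are placeholder assumptions.

import time
import numpy as np
import tensorflow as tf

# Hypothetical model file; substitute any .tflite model such as Mobilenet or Inception V4.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v1_1.0_224.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
# Random input matching the model's expected shape and dtype.
dummy = np.random.random_sample(input_details["shape"]).astype(input_details["dtype"])

# Warm-up run so one-time setup cost is not counted.
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()

runs = 50  # arbitrary sample count for this sketch
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(input_details["index"], dummy)
    interpreter.invoke()
elapsed = time.perf_counter() - start

print(f"Average inference time: {elapsed / runs * 1e6:.1f} microseconds")

Reporting the mean over several runs (with a warm-up discarded) mirrors how the results below are presented: an average in microseconds plus a standard error across repeated samples.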

TensorFlow Lite 2022-05-18 - Average Inference Time (Microseconds, Fewer Is Better), OpenBenchmarking.org:

Model                  Result    Standard Error   Samples
Inception V4           201982    +/- 373.30       N = 3
Inception ResNet V2    185122    +/- 76.78        N = 3
NASNet Mobile          30431.5   +/- 251.51       N = 3
Mobilenet Float        10103.6   +/- 39.85        N = 3
SqueezeNet             14598.6   +/- 43.56        N = 3
Mobilenet Quant        5164.66   +/- 29.50        N = 3