9950X onnx svt

AMD Ryzen 9 9950X 16-Core testing with an ASUS ROG STRIX X670E-E GAMING WIFI (2204 BIOS) and AMD Radeon RX 7900 GRE 16GB on Ubuntu 24.04 via the Phoronix Test Suite.

System configuration (identical across all five runs, a through e):

  Processor: AMD Ryzen 9 9950X 16-Core @ 5.75GHz (16 Cores / 32 Threads)
  Motherboard: ASUS ROG STRIX X670E-E GAMING WIFI (2204 BIOS)
  Chipset: AMD Device 14d8
  Memory: 2 x 32GB DDR5-6400MT/s Corsair CMK64GX5M2B6400C32
  Disk: 2000GB Corsair MP700 PRO
  Graphics: AMD Radeon RX 7900 GRE 16GB
  Audio: AMD Navi 31 HDMI/DP
  Monitor: DELL U2723QE
  Network: Intel I225-V + Intel Wi-Fi 6E
  OS: Ubuntu 24.04
  Kernel: 6.10.0-phx (x86_64)
  Desktop: GNOME Shell 46.0
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 24.2~git2406040600.8112d4~oibaf~n (git-8112d44 2024-06-04 noble-oibaf-ppa) (LLVM 17.0.6 DRM 3.57)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 3840x2160

ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Parallel
  Inferences Per Second > Higher Is Better
  a: 7.71368   b: 7.68899   c: 8.03773   d: 8.62837   e: 7.82444

ONNX Runtime 1.19 - Model: bertsquad-12 - Device: CPU - Executor: Standard
  Inferences Per Second > Higher Is Better
  a: 25.10   b: 25.05   c: 25.05   d: 23.41   e: 25.02

ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Standard
  Inferences Per Second > Higher Is Better
  a: 190.62   b: 192.68   c: 192.82   d: 183.02   e: 193.25

ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Parallel
  Inferences Per Second > Higher Is Better
  a: 221.94   b: 220.47   c: 214.48   d: 223.21   e: 217.92

ONNX Runtime 1.19 - Model: bertsquad-12 - Device: CPU - Executor: Parallel
  Inferences Per Second > Higher Is Better
  a: 11.78   b: 11.60   c: 11.98   d: 11.71   e: 11.75

ONNX Runtime 1.19 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
  Inferences Per Second > Higher Is Better
  a: 61.50   b: 60.48   c: 62.20   d: 62.15   e: 60.32

ONNX Runtime 1.19 - Model: GPT-2 - Device: CPU - Executor: Standard
  Inferences Per Second > Higher Is Better
  a: 161.41   b: 156.72   c: 157.63   d: 157.69   e: 157.65

ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
  Inferences Per Second > Higher Is Better
  a: 245.28   b: 243.73   c: 242.24   d: 245.87   e: 239.03

Whisperfile 20Aug24 - Model Size: Tiny
  Seconds < Lower Is Better
  a: 29.63   b: 30.31   c: 30.04   d: 30.04   e: 30.37

ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
  Inferences Per Second > Higher Is Better
  a: 152.64   b: 152.85   c: 149.51   d: 149.30   e: 149.91

ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Standard
  Inferences Per Second > Higher Is Better
  a: 225.16   b: 224.14   c: 223.79   d: 224.24   e: 220.16

ONNX Runtime 1.19 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
  Inferences Per Second > Higher Is Better
  a: 24.48   b: 24.45   c: 24.28   d: 24.77   e: 24.26

SVT-AV1 2.2 - Encoder Mode: Preset 8 - Input: Beauty 4K 10-bit
  Frames Per Second > Higher Is Better
  a: 10.59   b: 10.61   c: 10.53   d: 10.53   e: 10.74

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
  Inferences Per Second > Higher Is Better
  a: 1.41808   b: 1.41079   c: 1.42067   d: 1.40886   e: 1.43691

ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Parallel
  Inferences Per Second > Higher Is Better
  a: 190.55   b: 190.48   c: 187.74   d: 190.60   e: 191.36

ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Standard
  Inferences Per Second > Higher Is Better
  a: 13.78   b: 13.88   c: 13.91   d: 13.95   e: 13.71

ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Parallel
  Inferences Per Second > Higher Is Better
  a: 71.08   b: 70.80   c: 71.60   d: 70.55   e: 71.64

Whisperfile 20Aug24 - Model Size: Small
  Seconds < Lower Is Better
  a: 127.97   b: 128.24   c: 129.63   d: 129.80   e: 129.38

ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
  Inferences Per Second > Higher Is Better
  a: 578.94   b: 573.43   c: 571.19   d: 574.32   e: 575.42

ONNX Runtime 1.19 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
  Inferences Per Second > Higher Is Better
  a: 41.33   b: 41.30   c: 40.89   d: 41.08   e: 40.82

ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Standard
  Inferences Per Second > Higher Is Better
  a: 127.19   b: 125.63   c: 126.00   d: 127.17   e: 126.02

ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
  Inferences Per Second > Higher Is Better
  a: 851.24   b: 852.42   c: 855.08   d: 854.14   e: 861.65

SVT-AV1 2.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K
  Frames Per Second > Higher Is Better
  a: 98.66   b: 99.26   c: 99.22   d: 98.24   e: 98.06

SVT-AV1 2.2 - Encoder Mode: Preset 13 - Input: Bosphorus 4K
  Frames Per Second > Higher Is Better
  a: 257.20   b: 258.65   c: 257.42   d: 255.71   e: 256.15

ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
  Inferences Per Second > Higher Is Better
  a: 2.22498   b: 2.25003   c: 2.24795   d: 2.23680   e: 2.23767

ONNX Runtime 1.19 - Model: GPT-2 - Device: CPU - Executor: Parallel
  Inferences Per Second > Higher Is Better
  a: 150.87   b: 151.25   c: 149.57   d: 150.29   e: 151.20

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
  Inferences Per Second > Higher Is Better
  a: 2.55959   b: 2.55502   c: 2.57815   d: 2.55793   e: 2.55063

ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
  Inferences Per Second > Higher Is Better
  a: 4.95375   b: 4.92894   c: 4.98053   d: 4.93277   e: 4.93075

SVT-AV1 2.2 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p
  Frames Per Second > Higher Is Better
  a: 302.42   b: 304.33   c: 303.06   d: 303.12   e: 301.64

SVT-AV1 2.2 - Encoder Mode: Preset 5 - Input: Bosphorus 1080p
  Frames Per Second > Higher Is Better
  a: 128.71   b: 127.96   c: 128.37   d: 128.10   e: 127.61

SVT-AV1 2.2 - Encoder Mode: Preset 5 - Input: Bosphorus 4K
  Frames Per Second > Higher Is Better
  a: 44.10   b: 43.97   c: 43.77   d: 43.87   e: 43.74

SVT-AV1 2.2 - Encoder Mode: Preset 3 - Input: Bosphorus 4K
  Frames Per Second > Higher Is Better
  a: 11.81   b: 11.82   c: 11.79   d: 11.82   e: 11.74

Whisperfile 20Aug24 - Model Size: Medium
  Seconds < Lower Is Better
  a: 328.97   b: 327.13   c: 329.24   d: 328.29   e: 328.01

SVT-AV1 2.2 - Encoder Mode: Preset 5 - Input: Beauty 4K 10-bit
  Frames Per Second > Higher Is Better
  a: 7.574   b: 7.588   c: 7.583   d: 7.540   e: 7.541

ONNX Runtime 1.19 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
  Inferences Per Second > Higher Is Better
  a: 55.74   b: 56.05   c: 56.01   d: 56.03   e: 55.91

SVT-AV1 2.2 - Encoder Mode: Preset 13 - Input: Beauty 4K 10-bit
  Frames Per Second > Higher Is Better
  a: 19.40   b: 19.40   c: 19.42   d: 19.49   e: 19.49

SVT-AV1 2.2 - Encoder Mode: Preset 3 - Input: Bosphorus 1080p
  Frames Per Second > Higher Is Better
  a: 37.31   b: 37.21   c: 37.20   d: 37.15   e: 37.20

SVT-AV1 2.2 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p
  Frames Per Second > Higher Is Better
  a: 1009.64   b: 1011.24   c: 1012.77   d: 1011.43   e: 1010.76

SVT-AV1 2.2 - Encoder Mode: Preset 3 - Input: Beauty 4K 10-bit
  Frames Per Second > Higher Is Better
  a: 1.760   b: 1.758   c: 1.760   d: 1.759   e: 1.759

ONNX Runtime 1.19 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
  Inference Time Cost (ms) < Lower Is Better
  a: 16.26   b: 16.53   c: 16.08   d: 16.09   e: 16.58

ONNX Runtime 1.19 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
  Inference Time Cost (ms) < Lower Is Better
  a: 24.19   b: 24.21   c: 24.45   d: 24.34   e: 24.49

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
  Inference Time Cost (ms) < Lower Is Better
  a: 390.73   b: 391.38   c: 387.87   d: 390.94   e: 392.06

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
  Inference Time Cost (ms) < Lower Is Better
  a: 705.21   b: 708.82   c: 703.89   d: 709.79   e: 695.94

ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Standard
  Inference Time Cost (ms) < Lower Is Better
  a: 4.44116   b: 4.46130   c: 4.46838   d: 4.45929   e: 4.54187

ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Parallel
  Inference Time Cost (ms) < Lower Is Better
  a: 4.50538   b: 4.53509   c: 4.66209   d: 4.47960   e: 4.58833

ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
  Inference Time Cost (ms) < Lower Is Better
  a: 1.72670   b: 1.74330   c: 1.74974   d: 1.74031   e: 1.73720

ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
  Inference Time Cost (ms) < Lower Is Better
  a: 6.55079   b: 6.54126   c: 6.68702   d: 6.69723   e: 6.66991

ONNX Runtime 1.19 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
  Inference Time Cost (ms) < Lower Is Better
  a: 17.94   b: 17.84   c: 17.85   d: 17.84   e: 17.89

ONNX Runtime 1.19 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
  Inference Time Cost (ms) < Lower Is Better
  a: 40.85   b: 40.90   c: 41.18   d: 40.36   e: 41.22

ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
  Inference Time Cost (ms) < Lower Is Better
  a: 201.87   b: 202.88   c: 200.78   d: 202.72   e: 202.81

ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
  Inference Time Cost (ms) < Lower Is Better
  a: 449.48   b: 444.44   c: 444.85   d: 447.06   e: 446.89

ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
  Inference Time Cost (ms) < Lower Is Better
  a: 1.17405   b: 1.17235   c: 1.16879   d: 1.17003   e: 1.15972

ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
  Inference Time Cost (ms) < Lower Is Better
  a: 4.07620   b: 4.10040   c: 4.12702   d: 4.06632   e: 4.18210

ONNX Runtime 1.19 - Model: bertsquad-12 - Device: CPU - Executor: Standard
  Inference Time Cost (ms) < Lower Is Better
  a: 39.83   b: 39.91   c: 39.92   d: 42.71   e: 39.96

ONNX Runtime 1.19 - Model: bertsquad-12 - Device: CPU - Executor: Parallel
  Inference Time Cost (ms) < Lower Is Better
  a: 84.93   b: 86.20   c: 83.45   d: 85.40   e: 85.08

ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Standard
  Inference Time Cost (ms) < Lower Is Better
  a: 5.24802   b: 5.18948   c: 5.18564   d: 5.46322   e: 5.17395

ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Parallel
  Inference Time Cost (ms) < Lower Is Better
  a: 5.24748   b: 5.24914   c: 5.32595   d: 5.24590   e: 5.22498

ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Standard
  Inference Time Cost (ms) < Lower Is Better
  a: 7.86110   b: 7.95942   c: 7.93481   d: 7.86194   e: 7.93399

ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Parallel
  Inference Time Cost (ms) < Lower Is Better
  a: 14.07   b: 14.12   c: 13.97   d: 14.17   e: 13.96

ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Standard
  Inference Time Cost (ms) < Lower Is Better
  a: 72.55   b: 72.03   c: 71.87   d: 71.69   e: 72.96

ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Parallel
  Inference Time Cost (ms) < Lower Is Better
  a: 129.64   b: 130.12   c: 124.41   d: 115.89   e: 127.80

ONNX Runtime 1.19 - Model: GPT-2 - Device: CPU - Executor: Standard
  Inference Time Cost (ms) < Lower Is Better
  a: 6.19550   b: 6.37911   c: 6.34248   d: 6.33978   e: 6.34139

ONNX Runtime 1.19 - Model: GPT-2 - Device: CPU - Executor: Parallel
  Inference Time Cost (ms) < Lower Is Better
  a: 6.62484   b: 6.60878   c: 6.68270   d: 6.65046   e: 6.61099
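The five runs agree closely for most tests, but a few results (e.g. yolov4 with the Parallel executor) show noticeable run-to-run variance. A minimal sketch of how one might quantify that spread from the figures above; the `rel_spread` helper is hypothetical, not part of the Phoronix Test Suite:

```python
def rel_spread(samples):
    """Relative spread of a set of runs: (max - min) / mean."""
    lo, hi = min(samples), max(samples)
    mean = sum(samples) / len(samples)
    return (hi - lo) / mean

# yolov4 - Device: CPU - Executor: Parallel, inferences per second, runs a-e
yolov4_parallel = [7.71368, 7.68899, 8.03773, 8.62837, 7.82444]

# GPT-2 - Device: CPU - Executor: Parallel, inferences per second, runs a-e
gpt2_parallel = [150.87, 151.25, 149.57, 150.29, 151.20]

print(f"yolov4 Parallel spread: {rel_spread(yolov4_parallel):.1%}")  # -> 11.8%
print(f"GPT-2 Parallel spread:  {rel_spread(gpt2_parallel):.1%}")    # -> 1.1%
```

A spread above roughly 10% (as with yolov4 Parallel) suggests the difference between the best and worst run reflects measurement noise rather than any configuration change, since all five runs used identical hardware and software.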