
Help is available by hovering the cursor over any symbol or by checking the MAQAO website.

Global Metrics

Metric                                                         r0       r1       r2       r3
Total Time (s)                                                 12.94    13.01    11.17    12.85
Max (Thread Active Time) (s)                                   12.26    12.30    10.55    12.10
Average Active Time (s)                                        11.91    11.95    10.27    11.79
Activity Ratio (%)                                             96.2     96.1     96.9     96.1
Average number of active threads                               88.320   88.170   88.277   88.031
Affinity Stability (%)                                         99.5     98.0     99.3     97.9
Time in analyzed loops (%)                                     69.3     68.9     73.2     70.0
Time in analyzed innermost loops (%)                           68.5     68.3     72.4     66.8
Time in user code (%)                                          70.1     69.4     74.0     70.5
Compilation Options Score (%)                                  99.2     75.0     99.3     87.5
Array Access Efficiency (%)                                    66.6     64.0     92.4     63.2
Potential Speedups
Perfect Flow Complexity                                        1.00     1.00     1.00     1.00
Perfect OpenMP/MPI/Pthread/TBB                                 1.22     1.31     1.23     1.29
Perfect OpenMP/MPI/Pthread/TBB + Perfect Load Distribution     1.47     1.48     1.38     1.45
No Scalar Integer: Potential Speedup                           1.00     1.00     1.14     1.00
No Scalar Integer: Nb Loops to get 80%                         3        4        1        3
FP Vectorised: Potential Speedup                               1.00     1.00     1.06     1.00
FP Vectorised: Nb Loops to get 80%                             2        2        1        2
Fully Vectorised: Potential Speedup                            1.01     1.00     1.28     1.00
Fully Vectorised: Nb Loops to get 80%                          5        7        1        7
Only FP Arithmetic: Potential Speedup                          1.01     1.01     1.58     1.01
Only FP Arithmetic: Nb Loops to get 80%                        7        9        1        7
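
Two reading aids for the table above, offered as plausible consistency checks rather than MAQAO's documented formulas: the average number of active threads is roughly the 96 observed threads scaled by the ratio of average active time to total time, and the vectorization projections are bounded by the coverage of the analyzed loops in an Amdahl-style way.

  r0: 96 threads x (11.91 s / 12.94 s) ≈ 88.4, close to the reported average of 88.320 active threads
  r2: with c = 0.732 (time in analyzed loops) and a per-loop gain s, an Amdahl-style bound gives
      overall speedup ≤ 1 / ((1 - c) + c/s); s ≈ 1.43 reproduces the reported 1.28 "Fully Vectorised"
      projection, and even s → ∞ cannot exceed about 1/0.268 ≈ 3.7x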

Charts: Cumulated Speedup If No Scalar Integer, Cumulated Speedup If FP Vectorized, Cumulated Speedup If Fully Vectorized, Cumulated Speedup If Only FP Arithmetic.

Loop Based Profiles

Charts: Innermost / Single Loops, In-Between Loops, Outermost Loops, Cumulated Coverage With All Loops.

Innermost Loop Based Profiles

Charts: Coverage, Count.

Application Categorization

Charts: Time, Coverage.

Compilation Options

Source Object / Issue

libllama.so
  llama-vocab.cpp: -mcpu=native is missing.
  llama-sampling.cpp: -mcpu=native is missing.
  llama-impl.cpp: -mcpu=native is missing.
  hashtable_policy.h: -mcpu=native is missing.
  hashtable.h: -mcpu=native is missing.
libggml-cpu.so
  binary-ops.cpp
  repack.cpp
  vec.cpp
  ggml-cpu.cpp
  traits.cpp
  quants.c
  ggml-cpu.c
  ops.cpp
exec
  console.cpp: -mcpu=native is missing.
  sampling.cpp: -mcpu=native is missing.
  vector.tcc: -mcpu=native is missing.
  basic_string.h: -mcpu=native is missing.
  regex_executor.tcc: -mcpu=native is missing.
[vdso]
  -g is missing for some functions (possibly ones added by the compiler); it is needed for more accurate reports. Other recommended flags are: -O2/-O3, -march=(target).
  -O2, -O3 or -Ofast is missing.
  -mcpu=native is missing.
libggml-base.so
  ggml-alloc.c: -mcpu=native is missing.
  ggml.c: -mcpu=native is missing.
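
A hedged sketch of how the recurring "-mcpu=native is missing." issue (and the -O3/-g recommendation in the [vdso] message) could be addressed when configuring llama.cpp: the flags can be passed through the standard CMake cache variables. The directory names below and the use of CMAKE_C_FLAGS/CMAKE_CXX_FLAGS are assumptions for illustration, not taken from this report; project-specific options may already add some of these flags, and [vdso] itself is the kernel-provided virtual DSO, so it cannot be rebuilt with different flags.

  # Hypothetical reconfiguration; source and build directories are placeholders.
  cmake -S llama.cpp -B build \
        -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_C_FLAGS="-O3 -g -mcpu=native" \
        -DCMAKE_CXX_FLAGS="-O3 -g -mcpu=native"
  cmake --build build -j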

Path Count Profiles

Charts: Coverage, Count.

Low Iteration Count Profiles

Charts: Coverage, Count.

Average Number of Active Threads

Charts: Run 1 - orig_default, Run 2 - gcc_default, Run 3 - armclang_3, Run 4 - gcc_5.

Experiment Summaries

Runs: r0 = orig_default (Run 1), r1 = gcc_default (Run 2), r2 = armclang_3 (Run 3), r3 = gcc_5 (Run 4).

Experiment Name:
Application:
  r0: /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/run/oneview_runs/defaults/orig/exec
  r1: /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/run/base_runs/defaults/gcc/exec
  r2: /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/run/binaries/armclang_3/exec
  r3: /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/run/binaries/gcc_5/exec
Timestamp:
  r0: 2025-09-16 13:35:29
  r1: 2025-09-16 13:37:24
  r2: 2025-09-16 13:49:12
  r3: 2025-09-16 13:49:52
Experiment Type: MPI; OpenMP (r1-r3: same as r0)
Machine: ip-172-31-47-249.ec2.internal (r1-r3: same as r0)
Architecture: aarch64 (r1-r3: same as r0)
Micro Architecture: ARM_NEOVERSE_V2 (r1-r3: same as r0)
Model Name:
Cache Size:
Number of Cores:
Maximal Frequency: 0 GHz (r1-r3: same as r0)
OS Version: Linux 6.1.109-118.189.amzn2023.aarch64 #1 SMP Tue Sep 10 08:58:40 UTC 2024 (r1-r3: same as r0)
Architecture used during static analysis: aarch64 (r1-r3: same as r0)
Micro Architecture used during static analysis: ARM_NEOVERSE_V2 (r1-r3: same as r0)
Compilation Options:
r0 (orig_default):
+ [vdso]: N/A
exec: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 --driver-mode=g++ -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_SHARED -D LLAMA_USE_CURL -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/common/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/common/../vendor -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/../include -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -O3 -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT common/CMakeFiles/common.dir/sampling.cpp.o -MF common/CMakeFiles/common.dir/sampling.cpp.o.d -o common/CMakeFiles/common.dir/sampling.cpp.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/common/sampling.cpp
libggml-base.so: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 -D GGML_BUILD -D GGML_COMMIT=\"unknown\" -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_VERSION=\"0.0.0\" -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_base_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -O3 -D NDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -std=gnu11 -MD -MT ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o.d -o ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/ggml.c
libggml-cpu.so: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 -D GGML_BACKEND_BUILD -D GGML_BACKEND_SHARED -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_USE_CPU_REPACK -D GGML_USE_LLAMAFILE -D GGML_USE_OPENMP -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_cpu_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/.. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -O3 -D NDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -mcpu=native+dotprod+i8mm+sve+nosme -fopenmp=libomp -std=gnu11 -MD -MT ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o -MF ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o.d -o ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu/arch/arm/quants.c
libllama.so: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 --driver-mode=g++ -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_BUILD -D LLAMA_SHARED -D llama_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/../include -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -O3 -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT src/CMakeFiles/llama.dir/llama-vocab.cpp.o -MF src/CMakeFiles/llama.dir/llama-vocab.cpp.o.d -o src/CMakeFiles/llama.dir/llama-vocab.cpp.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/llama-vocab.cpp Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 --driver-mode=g++ -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_BUILD -D LLAMA_SHARED -D llama_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/../include -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -O3 -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT src/CMakeFiles/llama.dir/llama.cpp.o -MF src/CMakeFiles/llama.dir/llama.cpp.o.d -o src/CMakeFiles/llama.dir/llama.cpp.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/llama.cpp

r1 (gcc_default):
exec: GNU C++17 14.2.0 -mlittle-endian -mabi=lp64 -g -O3 -O3 -fno-omit-frame-pointer -fcf-protection=none -fPIC
libggml-base.so: GNU C11 14.2.0 -mlittle-endian -mabi=lp64 -g -O3 -O3 -std=gnu11 -fno-omit-frame-pointer -fcf-protection=none -fPIC
libggml-cpu.so: GNU C11 14.2.0 -mcpu=neoverse-v2+crc+sve2-aes+sve2-sha3+nossbs+dotprod+i8mm+sve+nosme -mlittle-endian -mabi=lp64 -g -O3 -O3 -std=gnu11 -fno-omit-frame-pointer -fcf-protection=none -fPIC -fopenmp
libllama.so: GNU C++17 14.2.0 -mlittle-endian -mabi=lp64 -g -O3 -O3 -fno-omit-frame-pointer -fcf-protection=none -fPIC

r2 (armclang_3):
+ [vdso]: N/A
exec: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 --driver-mode=g++ -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_SHARED -D LLAMA_USE_CURL -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/common/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/common/../vendor -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/../include -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -mcpu=neoverse-v2+nosve+nosve2 -armpl -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT common/CMakeFiles/common.dir/regex-partial.cpp.o -MF common/CMakeFiles/common.dir/regex-partial.cpp.o.d -o common/CMakeFiles/common.dir/regex-partial.cpp.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/common/regex-partial.cpp GNU C17 14.2.0 -mlittle-endian -mabi=lp64 -g -g -g -O2 -O2 -O2 -fbuilding-libgcc -fno-stack-protector -fPIC
libggml-base.so: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 -D GGML_BUILD -D GGML_COMMIT=\"unknown\" -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_VERSION=\"0.0.0\" -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_base_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -mcpu=neoverse-v2+nosve+nosve2 -armpl -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -std=gnu11 -MD -MT ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o.d -o ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/ggml.c
libggml-cpu.so: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 -D GGML_BACKEND_BUILD -D GGML_BACKEND_SHARED -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_USE_CPU_REPACK -D GGML_USE_LLAMAFILE -D GGML_USE_OPENMP -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_cpu_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/.. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -mcpu=neoverse-v2+nosve+nosve2 -armpl -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -fopenmp=libomp -std=gnu11 -MD -MT ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o -MF ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o.d -o ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu/arch/arm/quants.c
libllama.so: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 --driver-mode=g++ -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_BUILD -D LLAMA_SHARED -D llama_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/../include -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -mcpu=neoverse-v2+nosve+nosve2 -armpl -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT src/CMakeFiles/llama.dir/llama-vocab.cpp.o -MF src/CMakeFiles/llama.dir/llama-vocab.cpp.o.d -o src/CMakeFiles/llama.dir/llama-vocab.cpp.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/llama-vocab.cpp

r3 (gcc_5):
+ [vdso]: N/A
exec: GNU C++17 14.2.0 -mcpu=neoverse-v2 -mlittle-endian -mabi=lp64 -g -O2 -funroll-loops -ffast-math -fno-omit-frame-pointer -fcf-protection=none -fno-finite-math-only -fPIC
libggml-base.so: GNU C11 14.2.0 -mcpu=neoverse-v2 -mlittle-endian -mabi=lp64 -g -O2 -std=gnu11 -funroll-loops -ffast-math -fno-omit-frame-pointer -fcf-protection=none -fno-finite-math-only -fPIC
libggml-cpu.so: GNU C11 14.2.0 -mcpu=neoverse-v2 -mlittle-endian -mabi=lp64 -g -O2 -std=gnu11 -funroll-loops -ffast-math -fno-omit-frame-pointer -fcf-protection=none -fno-finite-math-only -fPIC -fopenmp
libllama.so: GNU C++17 14.2.0 -mcpu=neoverse-v2 -mlittle-endian -mabi=lp64 -g -O2 -funroll-loops -ffast-math -fno-omit-frame-pointer -fcf-protection=none -fno-finite-math-only -fPIC

Number of processes observed: 1 (r1-r3: same as r0)
Number of threads observed: 96 (r1-r3: same as r0)
Frequency Driver: NA (r1-r3: same as r0)
Frequency Governor: NA (r1-r3: same as r0)
Huge Pages: madvise (r1-r3: same as r0)
Hyperthreading: off (r1-r3: same as r0)
Number of sockets: 1 (r1-r3: same as r0)
Number of cores per socket: 96 (r1-r3: same as r0)
MAQAO version: 2025.1.2 (r1-r3: same as r0)
MAQAO build: Build information not available (r1-r3: same as r0)
Comments: (r1-r3: same as r0)