
exec - 2025-09-16 13:49:12 - MAQAO 2025.1.2


Global Metrics

Total Time (s): 11.17
Max (Thread Active Time) (s): 10.55
Average Active Time (s): 10.27
Activity Ratio (%): 96.9
Average number of active threads: 88.277
Affinity Stability (%): 99.3
Time in analyzed loops (%): 73.2
Time in analyzed innermost loops (%): 72.4
Time in user code (%): 74.0
Compilation Options Score (%): 99.3
Array Access Efficiency (%): 92.4
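As a cross-check, the average number of active threads above is consistent with the wall-clock and per-thread active times: roughly the thread count times Average Active Time divided by Total Time. A minimal sketch (the exact formula MAQAO uses is an assumption here, but it reproduces the reported figure closely):

```python
# Hedged sketch: estimate the average active thread count from the
# global metrics above (MAQAO's exact formula is not documented here).
def avg_active_threads(n_threads: int, avg_active_time_s: float, total_time_s: float) -> float:
    # Total active CPU-seconds spread over the wall-clock duration.
    return n_threads * avg_active_time_s / total_time_s

# With the values reported above (96 threads, 10.27 s active, 11.17 s total)
# this yields about 88.3, close to the reported 88.277.
print(round(avg_active_threads(96, 10.27, 11.17), 1))
```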
Potential Speedups
Perfect Flow Complexity: 1.00
Perfect OpenMP/MPI/Pthread/TBB: 1.23
Perfect OpenMP/MPI/Pthread/TBB + Perfect Load Distribution: 1.38
No Scalar Integer: Potential Speedup 1.14 (Nb Loops to get 80%: 1)
FP Vectorised: Potential Speedup 1.06 (Nb Loops to get 80%: 1)
Fully Vectorised: Potential Speedup 1.28 (Nb Loops to get 80%: 1)
FP Arithmetic Only: Potential Speedup 1.58 (Nb Loops to get 80%: 1)
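Each projection above pairs a whole-application speedup with the number of loops that must be transformed to reach 80% of it. The projection follows the usual Amdahl-style reasoning: if a fraction p of the runtime is accelerated by a factor s, the application speeds up by 1 / ((1 - p) + p / s). A sketch of that standard formula (not necessarily MAQAO's exact model):

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Whole-application speedup when a fraction p of runtime is sped up by s."""
    return 1.0 / ((1.0 - p) + p / s)

# Example: a loop accounting for half the runtime, made 2x faster,
# speeds the whole application up by 4/3.
print(amdahl_speedup(0.5, 2.0))
```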

CQA Potential Speedups Summary

Average Active Threads Count

Loop Based Profile

Innermost Loop Based Profile

Application Categorization

Compilation Options

Source Object | Issue
libllama.so
hashtable.h
llama-sampling.cpp
llama-arch.cpp
stl_pair.h
llama-vocab.cpp
unique_ptr.h
llama-batch.cpp
hashtable_policy.h
libggml-cpu.so
binary-ops.cpp
traits.cpp
repack.cpp
ggml-cpu.cpp
ops.cpp
vec.cpp
ggml-cpu.c
quants.c
exec
common.cpp
sampling.cpp
vector.tcc
regex_executor.tcc
stl_uninitialized.h
[vdso]
-g is missing for some functions (possibly ones added by the compiler); it is needed for more accurate reports. Other recommended flags are: -O2/-O3, -march=(target)
-O2, -O3 or -Ofast is missing.
-mcpu=native is missing.
libggml-base.so
stl_construct.h
ggml.c
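The flag issues above can typically be addressed at configure time. A hedged sketch for a CMake-based llama.cpp build (the flags follow the report's recommendations; -mcpu=native assumes the binary is compiled on the machine it will run on):

```shell
# Sketch: pass the recommended flags through CMake.
# -g keeps debug info so MAQAO can attribute time to source lines;
# -O3 and -mcpu=native cover the optimization/target flags reported
# as missing for some objects.
cmake -B build \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_FLAGS="-g -O3 -mcpu=native" \
  -DCMAKE_CXX_FLAGS="-g -O3 -mcpu=native"
cmake --build build -j
```

Objects built outside this tree (such as [vdso] or toolchain runtime code) will still show the issue, which is expected.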

Loop Path Count Profile

Cumulated Speedup If No Scalar Integer

Cumulated Speedup If FP Vectorized

Cumulated Speedup If Fully Vectorized

Cumulated Speedup If FP Arithmetic Only

Experiment Summary

Application: /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/run/binaries/armclang_3/exec
Timestamp: 2025-09-16 13:49:12
Universal Timestamp: 1758030552
Number of processes observed: 1
Number of threads observed: 96
Experiment Type: MPI; OpenMP
Machine: ip-172-31-47-249.ec2.internal
Architecture: aarch64
Micro Architecture: ARM_NEOVERSE_V2
OS Version: Linux 6.1.109-118.189.amzn2023.aarch64 #1 SMP Tue Sep 10 08:58:40 UTC 2024
Architecture used during static analysis: aarch64
Micro Architecture used during static analysis: ARM_NEOVERSE_V2
Frequency Driver: NA
Frequency Governor: NA
Huge Pages: madvise
Hyperthreading: off
Number of sockets: 1
Number of cores per socket: 96
Compilation Options:
[vdso]: N/A
exec: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 --driver-mode=g++ -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_SHARED -D LLAMA_USE_CURL -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/common/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/common/../vendor -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/../include -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -mcpu=neoverse-v2+nosve+nosve2 -armpl -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT common/CMakeFiles/common.dir/regex-partial.cpp.o -MF common/CMakeFiles/common.dir/regex-partial.cpp.o.d -o common/CMakeFiles/common.dir/regex-partial.cpp.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/common/regex-partial.cpp
GNU C17 14.2.0 -mlittle-endian -mabi=lp64 -g -g -g -O2 -O2 -O2 -fbuilding-libgcc -fno-stack-protector -fPIC
libggml-base.so: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 -D GGML_BUILD -D GGML_COMMIT=\"unknown\" -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_VERSION=\"0.0.0\" -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_base_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -mcpu=neoverse-v2+nosve+nosve2 -armpl -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -std=gnu11 -MD -MT ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o.d -o ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/ggml.c
libggml-cpu.so: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 -D GGML_BACKEND_BUILD -D GGML_BACKEND_SHARED -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_USE_CPU_REPACK -D GGML_USE_LLAMAFILE -D GGML_USE_OPENMP -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_cpu_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/.. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -mcpu=neoverse-v2+nosve+nosve2 -armpl -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -fopenmp=libomp -std=gnu11 -MD -MT ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o -MF ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o.d -o ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu/arch/arm/quants.c
libllama.so: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0) /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 --driver-mode=g++ -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_BUILD -D LLAMA_SHARED -D llama_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/../include -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -mcpu=neoverse-v2+nosve+nosve2 -armpl -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT src/CMakeFiles/llama.dir/llama-vocab.cpp.o -MF src/CMakeFiles/llama.dir/llama-vocab.cpp.o.d -o src/CMakeFiles/llama.dir/llama-vocab.cpp.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/llama-vocab.cpp

Configuration Summary

Dataset
Run Command: <executable> -m meta-llama-3.1-8b-instruct-Q8_0.gguf -no-cnv -t 96 -n 512 -p "what is a LLM?" --seed 0
MPI Command: mpirun -n <number_processes> --bind-to none --report-bindings
Number Processes: 1
Number Nodes: 1
Filter: Not Used
Profile Start: Not Used