| Metric | Value |
|---|---|
| Total Time (s) | 31.54 |
| Max (Thread Active Time) (s) | 14.68 |
| Average Active Time (s) | 14.51 |
| Activity Ratio (%) | 94.9 |
| Average number of active threads | 88.317 |
| Affinity Stability (%) | 98.2 |
| GFLOPS | 135.163 |
| Time in analyzed loops (%) | 3.36 |
| Time in analyzed innermost loops (%) | 3.22 |
| Time in user code (%) | 17.1 |
| Compilation Options Score (%) | 99.9 |
| Array Access Efficiency (%) | 90.2 |
| Potential Speedups | Potential Speedup | Nb Loops to get 80% |
|---|---|---|
| Perfect Flow Complexity | 1.01 | |
| Perfect OpenMP/MPI/Pthread/TBB | 3.53 | |
| Perfect OpenMP/MPI/Pthread/TBB + Perfect Load Distribution | 5.45 | |
| No Scalar Integer | 1.00 | 3 |
| FP Vectorised | 1.00 | 4 |
| Fully Vectorised | 1.02 | 5 |
| FP Arithmetic Only | 1.02 | 2 |
| Source Object | Issue |
|---|---|
| ▼ libllama.so | |
| ○ hashtable.h | -march=(target) is missing. |
| ○ llama-vocab.cpp | -march=(target) is missing. |
| ▼ libggml-cpu.so | |
| ○ binary-ops.cpp | |
| ○ vec.cpp | |
| ○ sgemm.cpp | |
| ○ mmq.cpp | |
| ○ ops.cpp | |
| ○ common.h | |
| ○ ggml-cpu.c | |
| ○ quants.c | |
| ▼ libggml-base.so | |
| ▼ | |
| ○ | -g is missing for some functions (possibly ones added by the compiler); it is needed for more accurate reports. Other recommended flags are: -O2/-O3, -march=(target). |
| ○ | -O2, -O3 or -Ofast is missing. |
| ○ | -march=(target) is missing. |
| ▼ libggml-blas.so | |
| ▼ | |
| ○ | -g is missing for some functions (possibly ones added by the compiler); it is needed for more accurate reports. Other recommended flags are: -O2/-O3, -march=(target). |
| ○ | -O2, -O3 or -Ofast is missing. |
| ○ | -march=(target) is missing. |
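One possible way to address the missing flags is to pass them globally at configure time. The build is CMake-based (see the compile commands in the Experiment Summary below), so a minimal sketch could look like the following; `-march=native` is assumed to stand in for `-march=(target)` (it is already used for libggml-cpu.so), and applying the flags to all targets may be broader than strictly needed:

```sh
# Sketch: reconfigure the llama.cpp CMake build so every target gets the
# optimization (-O3), target-ISA (-march=native) and debug-info (-g) flags
# that the report lists as missing for some objects.
cmake -B build -S llama.cpp \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_FLAGS="-O3 -g -march=native" \
  -DCMAKE_CXX_FLAGS="-O3 -g -march=native"
cmake --build build -j
```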
| Experiment Name | |
|---|---|
| Application | /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/run/base_runs/defaults/aocc/exec |
| Timestamp | 2025-10-16 11:52:13 |
| Universal Timestamp | 1760608333 |
| Number of processes observed | 1 |
| Number of threads observed | 192 |
| Experiment Type | MPI; OpenMP |
| Machine | isix06.benchmarkcenter.megware.com |
| Model Name | Intel(R) Xeon(R) 6972P |
| Architecture | x86_64 |
| Micro Architecture | GRANITE_RAPIDS |
| Cache Size | 491520 KB |
| Number of Cores | 96 |
| OS Version | Linux 5.14.0-570.39.1.el9_6.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Sep 4 05:08:52 EDT 2025 |
| Architecture used during static analysis | x86_64 |
| Micro Architecture used during static analysis | GRANITE_RAPIDS |
| Frequency Driver | intel_pstate |
| Frequency Governor | performance |
| Huge Pages | always |
| Hyperthreading | on |
| Number of sockets | 2 |
| Number of cores per socket | 96 |
| Comments | |

Compilation Options:

- libggml-base.so: N/A
- libggml-blas.so: N/A
- libggml-cpu.so: `AMD clang version 17.0.6 (CLANG: AOCC_5.0.0-Build#1377 2024_09_24) /home/eoseret/aocc-compiler-5.0.0/bin/clang-17 --driver-mode=g++ -D GGML_BACKEND_BUILD -D GGML_BACKEND_SHARED -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_USE_CPU_REPACK -D GGML_USE_LLAMAFILE -D GGML_USE_OPENMP -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_cpu_EXPORTS -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/ggml/src/.. -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/ggml/src/. -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -g -fno-omit-frame-pointer -fcf-protection=none -nopie -grecord-command-line -O3 -D NDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -march=native -fopenmp=libomp -MD -MT ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o -MF ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o.d -o ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o -c /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu/amx/mmq.cpp`
- libllama.so: `AMD clang version 17.0.6 (CLANG: AOCC_5.0.0-Build#1377 2024_09_24) /home/eoseret/aocc-compiler-5.0.0/bin/clang-17 --driver-mode=g++ -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_BUILD -D LLAMA_SHARED -D llama_EXPORTS -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/src/. -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/src/../include -I /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -g -fno-omit-frame-pointer -fcf-protection=none -nopie -grecord-command-line -O3 -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT src/CMakeFiles/llama.dir/unicode.cpp.o -MF src/CMakeFiles/llama.dir/unicode.cpp.o.d -o src/CMakeFiles/llama.dir/unicode.cpp.o -c /beegfs/hackathon/users/eoseret/qaas_runs_test/176-060-7658/intel/llama.cpp/build/llama.cpp/src/unicode.cpp`
| Dataset | |
|---|---|
| Run Command | `<executable> -m meta-llama-3.1-8b-instruct-Q8_0.gguf -t 192 -n 0 -p 512 -r 3` |
| MPI Command | `mpirun -n <number_processes>` |
| Number Processes | 1 |
| Number Nodes | 1 |
| Number Processes per Node | 1 |
| Filter | Not Used |
| Profile Start | Not Used |
| Profile Stop | Not Used |
| Maximal Path Number | 4 |
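For reference, the MPI command and run command above combine into a single launch line. A minimal sketch, leaving the report's `<number_processes>` and `<executable>` placeholders unresolved:

```sh
# Sketch: launch line implied by the Dataset table above
# (placeholders kept as in the report; with Number Processes = 1,
# <number_processes> would be 1).
mpirun -n <number_processes> <executable> \
  -m meta-llama-3.1-8b-instruct-Q8_0.gguf -t 192 -n 0 -p 512 -r 3
```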