
Experiment Quality

qmckl_large_c_o1 | qmckl_large_c_o1_calloc | qmckl_large_c_o1_malloc-only | qmckl_large_fortran_o1

[ 2 / 3 ] Security settings from the host restrict profiling. Some metrics will be missing or incomplete.

Current value for kernel.perf_event_paranoid is 2. If possible, set it to 1, or check with your system administrator which flag can be used to achieve this.

[ 2 / 3 ] Security settings from the host restrict profiling. Some metrics will be missing or incomplete.

[ 2 / 3 ] Security settings from the host restrict profiling. Some metrics will be missing or incomplete.

[ 2 / 3 ] Security settings from the host restrict profiling. Some metrics will be missing or incomplete.
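The perf_event_paranoid setting above is a standard Linux sysctl. A minimal sketch of how it is commonly lowered (requires root; whether this is appropriate depends on your site's security policy, so check with your administrator first):

```shell
# Check the current value (the report shows 2 here)
cat /proc/sys/kernel/perf_event_paranoid

# Lower it for the current boot only (not persistent across reboots)
sudo sysctl -w kernel.perf_event_paranoid=1

# To make it persistent, add the setting to a sysctl configuration file
echo 'kernel.perf_event_paranoid = 1' | sudo tee /etc/sysctl.d/99-perf.conf
```

This is a system-configuration sketch, not something the report itself performs; the file name 99-perf.conf is an arbitrary choice.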

[ 2.91 / 3 ] Architecture specific option -march=native is used

[ 2.93 / 3 ] Architecture specific option -march=native is used

[ 2.93 / 3 ] Architecture specific option -march=native is used

[ 0.02 / 3 ] Compilation of some functions is not optimized for the target processor

Architecture specific options are needed to produce efficient code for a specific processor ( -x(target), -ax(target) or -march=(target) ).

[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions with source/debug info

The -g option gives access to debugging information, such as source locations.

[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions with source/debug info

[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions with source/debug info

[ 3 / 3 ] Most of the time spent in analyzed modules comes from functions with source/debug info

[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions with compilation-option information, but -fno-omit-frame-pointer is missing

-fno-omit-frame-pointer improves the accuracy of call chains found during application profiling.

[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions with compilation-option information, but -fno-omit-frame-pointer is missing

[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions with compilation-option information, but -fno-omit-frame-pointer is missing

[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions with compilation-option information, but -fno-omit-frame-pointer is missing

[ 2.95 / 3 ] Optimization level option is correctly used

[ 2.96 / 3 ] Optimization level option is correctly used

[ 2.96 / 3 ] Optimization level option is correctly used

[ 0.02 / 3 ] Some functions are compiled with a low optimization level (O0 or O1)

To get better performance, it is advised to help the compiler by using a proper optimization level (-O2 or higher). Warning: depending on the compiler, higher optimization levels can decrease numeric accuracy.
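The compiler-flag checks above (-march=native, -g, -fno-omit-frame-pointer, and the optimization level) can all be satisfied in a single invocation. A sketch for GCC, assuming a hypothetical source file kernel.c (the actual build line for these binaries is not shown in the report):

```shell
# -O2:                     proper optimization level (at least -O2 is advised)
# -march=native:           generate code tuned for the host processor
# -g:                      keep debug info so the profiler can map time to source lines
# -fno-omit-frame-pointer: keep frame pointers for accurate call chains
gcc -O2 -march=native -g -fno-omit-frame-pointer -c kernel.c -o kernel.o
```

With the Intel compilers, -x(target) or -ax(target) plays the role of -march=(target).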

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.10 % of the execution time)

To obtain representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.10 % of the execution time)

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.11 % of the execution time)

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.22 % of the execution time)

[ 0 / 4 ] Application profile is too short (5.05 s)

If the overall application profiling time is less than 10 seconds, many of the measurements at function or loop level will very likely be under the measurement quality threshold (0.1 seconds). Rerun with a longer runtime: for example, use a larger dataset or add a repetition loop.

[ 0 / 4 ] Application profile is too short (5.07 s)

[ 0 / 4 ] Application profile is too short (4.76 s)

[ 0 / 4 ] Application profile is too short (4.56 s)

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

Code Quality

qmckl_large_c_o1 | qmckl_large_c_o1_calloc | qmckl_large_c_o1_malloc-only | qmckl_large_fortran_o1

[ 4 / 4 ] CPU activity is good

CPU cores are active 99.50% of time

[ 4 / 4 ] CPU activity is good

CPU cores are active 99.49% of time

[ 4 / 4 ] CPU activity is good

CPU cores are active 99.41% of time

[ 4 / 4 ] CPU activity is good

CPU cores are active 98.80% of time

[ 4 / 4 ] Affinity is good (99.64%)

Threads are not migrating across CPU cores: they were probably pinned successfully

[ 4 / 4 ] Affinity is good (99.52%)

[ 4 / 4 ] Affinity is good (99.65%)

[ 4 / 4 ] Affinity is good (98.70%)

[ 3 / 3 ] Functions mostly use all threads

Functions running on a reduced number of threads (typically sequential code) cover less than 10% of the application walltime (0.00%)

[ 3 / 3 ] Functions mostly use all threads

[ 3 / 3 ] Functions mostly use all threads

[ 3 / 3 ] Functions mostly use all threads

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (23.19%) lower than cumulative innermost loop coverage (67.69%)

Cumulative outermost/in-between loop coverage greater than cumulative innermost loop coverage would make loop optimization more complex

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (23.57%) lower than cumulative innermost loop coverage (69.23%)

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (25.87%) lower than cumulative innermost loop coverage (72.45%)

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (9.21%) lower than cumulative innermost loop coverage (89.14%)

[ 4 / 4 ] Threads activity is good

On average, more than 99.50% of observed threads are actually active

[ 4 / 4 ] Threads activity is good

On average, more than 99.49% of observed threads are actually active

[ 4 / 4 ] Threads activity is good

On average, more than 99.41% of observed threads are actually active

[ 4 / 4 ] Threads activity is good

On average, more than 98.80% of observed threads are actually active

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (67.69%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (69.23%)

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (72.45%)

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (89.14%)

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 4 / 4 ] Loop profile is not flat

At least one loop coverage is greater than 4% (25.77%), representing a hotspot for the application

[ 4 / 4 ] Loop profile is not flat

At least one loop coverage is greater than 4% (27.51%), representing a hotspot for the application

[ 4 / 4 ] Loop profile is not flat

At least one loop coverage is greater than 4% (29.02%), representing a hotspot for the application

[ 4 / 4 ] Loop profile is not flat

At least one loop coverage is greater than 4% (36.40%), representing a hotspot for the application

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (90.88%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (92.80%)

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (98.32%)

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (98.36%)

Loops Overview

Analysis | Issue | r0 | r1 | r2 | r3
Loop Computation Issues | Presence of expensive FP instructions | 1 | 0 | 1 | 0
Loop Computation Issues | Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA | 1 | 3 | 2 | 1
Loop Computation Issues | Presence of a large number of scalar integer instructions | 3 | 2 | 2 | 2
Loop Computation Issues | Low iteration count | 0 | 0 | 0 | 1
Control Flow Issues | Presence of more than 4 paths | 3 | 2 | 2 | 2
Control Flow Issues | Non-innermost loop | 2 | 1 | 1 | 1
Control Flow Issues | Low iteration count | 0 | 0 | 0 | 1
Data Access Issues | Presence of indirect access | 3 | 4 | 3 | 5
Data Access Issues | More than 10% of the vector load instructions are unaligned | 0 | 2 | 1 | 0
Data Access Issues | Presence of expensive instructions: scatter/gather | 4 | 5 | 4 | 4
Data Access Issues | Presence of special instructions executing on a single port | 2 | 1 | 1 | 1
Data Access Issues | More than 20% of the loads are accessing the stack | 6 | 2 | 2 | 1
Vectorization Roadblocks | Presence of more than 4 paths | 3 | 2 | 2 | 2
Vectorization Roadblocks | Non-innermost loop | 2 | 1 | 1 | 1
Vectorization Roadblocks | Presence of indirect access | 3 | 4 | 3 | 5
Inefficient Vectorization | Presence of expensive instructions: scatter/gather | 4 | 5 | 4 | 4
Inefficient Vectorization | Presence of special instructions executing on a single port | 2 | 1 | 1 | 1
Inefficient Vectorization | Use of masked instructions | 1 | 1 | 1 | 0