
Stylizer

orig_default | icx_default | gcc_default | icx_5 | aocc_10 | gcc_6

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 0 / 4 ] Application profile is too short (4.81 s)

If the overall application profiling time is less than 10 seconds, many of the measurements at function or loop level will very likely be under the measurement quality threshold (0.1 seconds). Rerun with an increased runtime: for example, use a larger dataset or include a repetition loop.

[ 0 / 4 ] Application profile is too short (4.91 s)

If the overall application profiling time is less than 10 seconds, many of the measurements at function or loop level will very likely be under the measurement quality threshold (0.1 seconds). Rerun with an increased runtime: for example, use a larger dataset or include a repetition loop.

[ 0 / 4 ] Application profile is too short (4.83 s)

If the overall application profiling time is less than 10 seconds, many of the measurements at function or loop level will very likely be under the measurement quality threshold (0.1 seconds). Rerun with an increased runtime: for example, use a larger dataset or include a repetition loop.

[ 0 / 4 ] Application profile is too short (4.89 s)

If the overall application profiling time is less than 10 seconds, many of the measurements at function or loop level will very likely be under the measurement quality threshold (0.1 seconds). Rerun with an increased runtime: for example, use a larger dataset or include a repetition loop.

[ 0 / 4 ] Application profile is too short (4.79 s)

If the overall application profiling time is less than 10 seconds, many of the measurements at function or loop level will very likely be under the measurement quality threshold (0.1 seconds). Rerun with an increased runtime: for example, use a larger dataset or include a repetition loop.

[ 0 / 4 ] Application profile is too short (4.90 s)

If the overall application profiling time is less than 10 seconds, many of the measurements at function or loop level will very likely be under the measurement quality threshold (0.1 seconds). Rerun with an increased runtime: for example, use a larger dataset or include a repetition loop.
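
As an illustration of the repetition-loop hint above, here is a minimal C sketch (the kernel, problem size, and repetition count are hypothetical, not taken from the profiled application): the measured region is simply repeated until the total runtime comfortably exceeds the 10-second threshold.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical kernel standing in for the real workload. */
    static double kernel(int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += (double)i * 0.5;
        return s;
    }

    int main(void)
    {
        const int n = 1000000;   /* illustrative problem size */
        double sink = 0.0;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* Repetition loop: rerun the same workload so that the profiled
           time comfortably exceeds the 10 s threshold instead of ~5 s. */
        for (int rep = 0; rep < 100; rep++)
            sink += kernel(n);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        printf("elapsed: %.2f s (sink = %g)\n", elapsed, sink);
        return 0;
    }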

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.01 % of the execution time)

For the profiling to be representative, the "Others" category should represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.01 % of the execution time)

For the profiling to be representative, the "Others" category should represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00 % of the execution time)

For the profiling to be representative, the "Others" category should represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.01 % of the execution time)

For the profiling to be representative, the "Others" category should represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.01 % of the execution time)

For the profiling to be representative, the "Others" category should represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.01 % of the execution time)

For the profiling to be representative, the "Others" category should represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed

[ 0 / 9 ] Compilation options are not available

Compilation options are an important optimization lever, but ONE-View is not able to analyze them.

[ 0 / 9 ] Compilation options are not available

Compilation options are an important optimization lever, but ONE-View is not able to analyze them.

[ 0 / 9 ] Compilation options are not available

Compilation options are an important optimization lever, but ONE-View is not able to analyze them.

[ 0 / 9 ] Compilation options are not available

Compilation options are an important optimization lever, but ONE-View is not able to analyze them.

[ 0 / 9 ] Compilation options are not available

Compilation options are an important optimization lever, but ONE-View is not able to analyze them.

[ 0 / 9 ] Compilation options are not available

Compilation options are an important optimization lever, but ONE-View is not able to analyze them.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

Strategizer

orig_default | icx_default | gcc_default | icx_5 | aocc_10 | gcc_6

[ 4 / 4 ] CPU activity is good

CPU cores are active 91.41% of the time

[ 4 / 4 ] CPU activity is good

CPU cores are active 92.49% of the time

[ 3 / 4 ] CPU activity is below 90% (85.24%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 4 / 4 ] CPU activity is good

CPU cores are active 92.29% of the time

[ 4 / 4 ] CPU activity is good

CPU cores are active 91.41% of the time

[ 3 / 4 ] CPU activity is below 90% (85.12%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
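
One of the hints above, improving parallel load balancing, can be sketched with OpenMP (illustrative code, not extracted from the application; compile with -fopenmp): when iteration costs are uneven, a dynamic schedule hands out work on demand and keeps cores from sitting idle.

    #include <stdio.h>
    #include <omp.h>

    /* Work whose cost grows with i, so a static schedule would leave some
       threads idle while others are still computing. */
    static double work(int i)
    {
        double s = 0.0;
        for (int k = 0; k < i; k++)
            s += 1.0 / (k + 1.0);
        return s;
    }

    int main(void)
    {
        const int n = 20000;
        double total = 0.0;

        /* schedule(dynamic) hands out chunks on demand, which tends to
           reduce per-core idle time when iteration costs are irregular. */
        #pragma omp parallel for schedule(dynamic, 64) reduction(+:total)
        for (int i = 0; i < n; i++)
            total += work(i);

        printf("total = %f (max threads = %d)\n", total, omp_get_max_threads());
        return 0;
    }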

[ 4 / 4 ] Affinity is good (97.24%)

Threads are not migrating between CPU cores: they are probably pinned successfully

[ 4 / 4 ] Affinity is good (97.37%)

Threads are not migrating between CPU cores: they are probably pinned successfully

[ 4 / 4 ] Affinity is good (95.10%)

Threads are not migrating between CPU cores: they are probably pinned successfully

[ 4 / 4 ] Affinity is good (95.19%)

Threads are not migrating between CPU cores: they are probably pinned successfully

[ 4 / 4 ] Affinity is good (95.08%)

Threads are not migrating between CPU cores: they are probably pinned successfully

[ 4 / 4 ] Affinity is good (94.77%)

Threads are not migrating between CPU cores: they are probably pinned successfully
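
The pinning these checks report can be requested without code changes (for example OMP_PROC_BIND=close with OMP_PLACES=cores for OpenMP runtimes) or done explicitly, as in this minimal Linux sketch using pthread_setaffinity_np; the thread count and the one-thread-per-core mapping are illustrative assumptions (compile with -pthread).

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    #define NTHREADS 4               /* illustrative thread count */

    static void *worker(void *arg)
    {
        long id = (long)arg;
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET((int)id, &set);      /* assume a simple 1:1 thread-to-core mapping */
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        printf("thread %ld pinned to core %ld\n", id, id);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NTHREADS];

        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }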

[ 0 / 3 ] Too many functions do not use all threads

Functions running on a reduced number of threads (typically sequential code) cover at least 10% of application walltime (10.76%). Check both "Max Inclusive Time Over Threads" and "Nb Threads" in the Functions or Loops tabs, and consider parallelizing sequential regions or improving the parallelization of regions running on a reduced number of threads.

[ 3 / 3 ] Functions mostly use all threads

Functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (9.18%)

[ 3 / 3 ] Functions mostly use all threads

Functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (8.90%)

[ 3 / 3 ] Functions mostly use all threads

Functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (8.40%)

[ 0 / 3 ] Too many functions do not use all threads

Functions running on a reduced number of threads (typically sequential code) cover at least 10% of application walltime (10.75%). Check both "Max Inclusive Time Over Threads" and "Nb Threads" in the Functions or Loops tabs, and consider parallelizing sequential regions or improving the parallelization of regions running on a reduced number of threads.

[ 3 / 3 ] Functions mostly use all threads

Functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (9.08%)
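
To illustrate the advice on parallelizing sequential regions, here is a generic OpenMP sketch (not code from the application; compile with -fopenmp): a loop that previously ran on a single thread is distributed across all threads, which shrinks the walltime fraction covered by reduced-thread-count functions.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const int n = 1 << 20;
        double *a = malloc((size_t)n * sizeof *a);
        double sum = 0.0;

        /* Previously sequential initialization and reduction, now spread
           over all available threads instead of a single one. */
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            a[i] = 1.0 / (i + 1.0);

        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += a[i];

        printf("sum = %f\n", sum);
        free(a);
        return 0;
    }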

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (0.87%) lower than cumulative innermost loop coverage (4.13%)

Having a cumulative Outermost/In between loops coverage greater than the cumulative innermost loop coverage would make loop optimization more complex

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (0.52%) lower than cumulative innermost loop coverage (38.33%)

Having a cumulative Outermost/In between loops coverage greater than the cumulative innermost loop coverage would make loop optimization more complex

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (0.54%) lower than cumulative innermost loop coverage (4.70%)

Having a cumulative Outermost/In between loops coverage greater than the cumulative innermost loop coverage would make loop optimization more complex

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (0.70%) lower than cumulative innermost loop coverage (4.14%)

Having a cumulative Outermost/In between loops coverage greater than the cumulative innermost loop coverage would make loop optimization more complex

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (0.80%) lower than cumulative innermost loop coverage (13.10%)

Having a cumulative Outermost/In between loops coverage greater than the cumulative innermost loop coverage would make loop optimization more complex

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (1.31%) lower than cumulative innermost loop coverage (12.67%)

Having a cumulative Outermost/In between loops coverage greater than the cumulative innermost loop coverage would make loop optimization more complex
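
For reference, the loop levels this check distinguishes can be shown on a generic triple nest (illustrative code only): the innermost loop is the usual target for vectorization and other standard loop optimizations, while the outer levels mainly control the traversal order.

    /* Illustrative triple loop nest showing the loop levels the report
       refers to. */
    void nest(int n, double a[n][n][n])
    {
        for (int i = 0; i < n; i++)              /* outermost loop  */
            for (int j = 0; j < n; j++)          /* in-between loop */
                for (int k = 0; k < n; k++)      /* innermost loop  */
                    a[i][j][k] = 2.0 * a[i][j][k] + 1.0;
    }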

[ 0 / 4 ] A significant number of threads are idle (77.63%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 0 / 4 ] A significant number of threads are idle (77.53%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 0 / 4 ] A significant number of threads are idle (79.22%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 0 / 4 ] A significant number of threads are idle (77.70%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 0 / 4 ] A significant number of threads are idle (77.88%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 0 / 4 ] A significant number of threads are idle (79.15%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.
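
To make the inlining remark concrete, here is a sketch of a matrix-vector product written out by hand in place of a BLAS2 call such as dgemv (generic code with a row-major layout assumed; whether this pays off depends on the matrix sizes and the surrounding code).

    #include <stddef.h>

    /* y = A * x written out in place of a BLAS2 dgemv call, so the compiler
       can inline it into the surrounding code; a row-major layout of A is
       assumed here. */
    void matvec(int m, int n, const double *A, const double *x, double *y)
    {
        for (int i = 0; i < m; i++) {
            double acc = 0.0;
            for (int j = 0; j < n; j++)
                acc += A[(size_t)i * n + j] * x[j];
            y[i] = acc;
        }
    }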

[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (4.13%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (38.33%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (4.70%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (4.14%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (13.10%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (12.67%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
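
Since the potential gain flagged here comes from innermost-loop optimizations such as vectorization, the following generic sketch shows what a vectorization-friendly innermost loop looks like (restrict-qualified pointers, unit stride, no loop-carried dependence; the simd pragma needs -fopenmp or -fopenmp-simd and is only a hint).

    /* Unit-stride, dependence-free innermost loop: `restrict` tells the
       compiler the arrays do not alias, and the simd pragma makes the
       vectorization intent explicit. */
    void scale_add(int n, double * restrict y,
                   const double * restrict x, double alpha)
    {
        #pragma omp simd
        for (int i = 0; i < n; i++)
            y[i] = alpha * x[i] + y[i];
    }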

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand
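
In the same spirit as the BLAS2 remark, the BLAS1 hint can be sketched by writing a dot product inline instead of calling a routine such as ddot (generic code; the benefit is mainly avoiding call overhead on short vectors and letting the compiler fuse it with the surrounding loops).

    /* Dot product written inline in place of a BLAS1 ddot call; on short
       vectors this avoids the call overhead and lets the compiler fuse it
       with the surrounding loops. */
    static inline double dot(int n, const double *x, const double *y)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += x[i] * y[i];
        return s;
    }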

[ 2 / 2 ] Less than 10% (1.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.86%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.68%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.74%) is spent in Libm/SVML (special functions)

[ 0 / 4 ] Loop profile is flat

No hotspot was found in the application (the greatest loop coverage is 0.63%), and the cumulative coverage of the twenty hottest loops is lower than 20% of the application profiled time (4.93%)

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (34.04%), representing a hotspot for the application

[ 0 / 4 ] Loop profile is flat

No hotspot was found in the application (the greatest loop coverage is 0.73%), and the cumulative coverage of the twenty hottest loops is lower than 20% of the application profiled time (5.08%)

[ 0 / 4 ] Loop profile is flat

No hotspot was found in the application (the greatest loop coverage is 0.64%), and the cumulative coverage of the twenty hottest loops is lower than 20% of the application profiled time (4.76%)

[ 0 / 4 ] Loop profile is flat

No hotspot was found in the application (the greatest loop coverage is 2.70%), and the cumulative coverage of the twenty hottest loops is lower than 20% of the application profiled time (13.57%)

[ 0 / 4 ] Loop profile is flat

No hotspot was found in the application (the greatest loop coverage is 2.48%), and the cumulative coverage of the twenty hottest loops is lower than 20% of the application profiled time (13.67%)

[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (5.00%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (38.85%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (5.24%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (4.84%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (13.91%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (13.98%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

Optimizer

Analysis (one value per run: r0 r1 r2 r3 r4 r5)

Loop Computation Issues
  Presence of expensive FP instructions: 2 2 3 2 2 2
  Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA: 3 3 2 3 1 2
  Presence of a large number of scalar integer instructions: 2 1 2 1 3 3

Control Flow Issues
  Presence of calls: 2 1 2 1 2 3
  Presence of 2 to 4 paths: 2 1 1 2 2 1
  Presence of more than 4 paths: 1 1 2 0 2 0
  Non-innermost loop: 2 1 1 2 1 2

Data Access Issues
  Presence of constant non-unit stride data access: 2 1 3 1 4 7
  Presence of indirect access: 0 0 0 0 2 4
  More than 10% of the vector loads instructions are unaligned: 0 0 2 0 0 0
  Presence of special instructions executing on a single port: 3 1 3 1 3 2
  More than 20% of the loads are accessing the stack: 3 2 1 2 3 2

Vectorization Roadblocks
  Presence of calls: 2 1 2 1 2 3
  Presence of 2 to 4 paths: 2 1 1 2 2 1
  Presence of more than 4 paths: 2 3 3 2 4 3
  Non-innermost loop: 2 1 1 2 1 2
  Presence of constant non-unit stride data access: 2 1 3 1 4 7
  Presence of indirect access: 0 0 0 0 2 4

Inefficient Vectorization
  Presence of special instructions executing on a single port: 3 1 3 1 3 2
  Use of masked instructions: 1 1 1 1 2 1
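
Two of the recurring issues in this table, low FMA usage and constant non-unit stride data accesses, can be illustrated on generic loops (assumed code, not extracted from the application): the multiply-add pattern below is what compilers contract into FMA instructions when the target supports them, and walking a row-major matrix column by column is a typical source of the constant non-unit stride accesses flagged above.

    #include <stddef.h>

    /* The a[i]*b[i] + y[i] pattern is what compilers contract into FMA
       instructions when the target supports them (e.g. with -march=native). */
    void fma_friendly(int n, double * restrict y,
                      const double * restrict a, const double * restrict b)
    {
        for (int i = 0; i < n; i++)
            y[i] = a[i] * b[i] + y[i];      /* unit-stride loads and stores */
    }

    /* Walking a row-major matrix column by column gives a constant
       non-unit stride of lda elements per iteration, one of the data
       access issues flagged above. */
    double column_sum(int n, int lda, const double *A, int j)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += A[(size_t)i * lda + j];    /* stride = lda, not 1 */
        return s;
    }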