
Executable Output


* Info: Selecting the 'perf-low-ppn' engine for node inti6206

* Info: "ref-cycles" not supported on inti6206: fallback to "cpu-clock"
* Info: Process launched (host inti6206, process 2899797)
                      :-) GROMACS - gmx mdrun, 2023.1 (-:

Executable:   /ccc/work/cont001/ocre/oserete/gromacs-2023.1-install-icc-2/bin/gmx
Data prefix:  /ccc/work/cont001/ocre/oserete/gromacs-2023.1-install-icc-2
Working dir:  /ccc/work/cont001/ocre/oserete/GROMACS_DATA
Command line:
  gmx mdrun -s ion_channel.tpr -ntmpi 1 -nsteps 1000 -pin on -deffnm icc-2


Back Off! I just backed up icc-2.log to ./#icc-2.log.22#
Reading file ion_channel.tpr, VERSION 2020.3 (single precision)
Note: file tpx version 119, software tpx version 129
Overriding nsteps with value passed on the command line: 1000 steps, 2.5 ps
Changing nstlist from 10 to 50, rlist from 1 to 1.095


Update groups can not be used for this system because there are three or more consecutively coupled constraints

Using 1 MPI thread
Using 1 OpenMP thread 


Back Off! I just backed up icc-2.edr to ./#icc-2.edr.22#
starting mdrun 'Protein'
1000 steps,      2.5 ps.

Writing final coordinates.

Back Off! I just backed up icc-2.gro to ./#icc-2.gro.14#

               Core t (s)   Wall t (s)        (%)
       Time:      198.688      198.689      100.0
                 (ns/day)    (hour/ns)
Performance:        1.088       22.055

GROMACS reminds you: "I cannot think of a single one, not even intelligence." (Enrico Fermi, when asked what characteristics physics Nobel laureates had in common.)


* Info: Process finished (host inti6206, process 2899797)
* Info: Dumping samples (host inti6206, process 2899797)
* Info: Dumping source info for callchain nodes (host inti6206, process 2899797)
* Info: Building/writing metadata (host inti6206)
* Info: Finished collect step (host inti6206, process 2899797)

Your experiment path is /ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV3_GROMACS_AMD_seq_1000it/tools/lprof_npsu_run_0

To display your profiling results:
#########################################################################################################################################################
#    LEVEL    |     REPORT     |                                                        COMMAND                                                         #
#########################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV3_GROMACS_AMD_seq_1000it/tools/lprof_npsu_run_0      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV3_GROMACS_AMD_seq_1000it/tools/lprof_npsu_run_0  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV3_GROMACS_AMD_seq_1000it/tools/lprof_npsu_run_0  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV3_GROMACS_AMD_seq_1000it/tools/lprof_npsu_run_0  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV3_GROMACS_AMD_seq_1000it/tools/lprof_npsu_run_0      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV3_GROMACS_AMD_seq_1000it/tools/lprof_npsu_run_0  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV3_GROMACS_AMD_seq_1000it/tools/lprof_npsu_run_0  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV3_GROMACS_AMD_seq_1000it/tools/lprof_npsu_run_0  #
#########################################################################################################################################################
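The table above is a matrix: the report level (`-df` for functions, `-dl` for loops) combines with an optional display scope (`-dn` per node, `-dp` per process, `-dt` per thread; none for cluster-wide), all pointing at the same `xp=` experiment path. As a minimal sketch, the composition can be expressed like this (the `level_flag`/`scope_flag` helper names are illustrative, not part of MAQAO; the flags and path are taken from the table):

```shell
# Experiment path reported by the collect step above
XP=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV3_GROMACS_AMD_seq_1000it/tools/lprof_npsu_run_0

# Map a report level to its lprof flag (hypothetical helper for illustration)
level_flag() {
  case "$1" in
    functions) echo "-df" ;;   # function-level report
    loops)     echo "-dl" ;;   # loop-level report
  esac
}

# Map a display scope to its lprof flag; cluster-wide needs no extra flag
scope_flag() {
  case "$1" in
    node)    echo "-dn" ;;
    process) echo "-dp" ;;
    thread)  echo "-dt" ;;
    cluster) echo "" ;;
  esac
}

# Example: build the per-thread loop report command from the table
echo "maqao lprof $(level_flag loops) $(scope_flag thread) xp=$XP"
```

Running the printed command (here, `maqao lprof -dl -dt xp=…`) would then display the per-thread loop profile collected in this run.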
