Known issues

== Scheduler issues == <!--T:6-->
* Slurm can report "Exceeded step memory limit at some point", which may be surprising and can cause problems with dependent jobs.
** File I/O uses memory and Slurm is correctly reporting this usage.  This usage (primarily delayed writes) was not as visible in previous systems.  The kernel usually resolves such memory shortages by flushing writes to the filesystem.
** Memory shortage can cause the kernel to kill processes ("OOM kill"), which results in the same message but affects exit status differently.
** A DerivedExitStatus of 0:125 indicates that the step hit the memory limit but was not OOM-killed.
** In the absence of any other action, a step with 0:125 will *not* enable a job which has an afterok dependency.  This is a Slurm bug that will be [https://bugs.schedmd.com/show_bug.cgi?id=3820 fixed in 17.11.3], so that Slurm can distinguish the warning condition from an actual kernel OOM-kill event.  Slurm will continue to limit memory usage via cgroups, so I/O memory will still be counted and reported when it exceeds the job's requested memory.  The example below shows how to check a step's exit codes and how such a dependency is expressed.
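
A minimal sketch of how one might check a finished job's exit codes with <tt>sacct</tt> and submit a dependent job; the job ID <tt>123456</tt> and the script name <tt>next_step.sh</tt> are placeholders for illustration only:

<pre>
# Show the state and exit codes of job 123456 and its steps;
# look for the 0:125 signature described above in the exit-code columns.
sacct -j 123456 --format=JobID,State,ExitCode,DerivedExitCode

# A job submitted with an afterok dependency will not start if the
# earlier step ended with 0:125 (until the fix noted above).
sbatch --dependency=afterok:123456 next_step.sh
</pre>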
 
* The CC Slurm configuration encourages whole-node jobs. When appropriate, users should request whole-node rather than per-core resources. See [[Job_scheduling_policies#Whole_nodes_versus_cores;|Job Scheduling - Whole Node Scheduling]] and the sketch after this list.
* By default, the job receives environment settings from the submitting shell. This can lead to irreproducible results if it's not what you expect. To force the job to run with a fresh-like login environment, you can submit with <tt>--export=none</tt> or add <tt>#SBATCH --export=NONE</tt> to your job script, as in the sketch below.
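
To illustrate the last two points, here is a minimal job-script sketch that requests a whole node and discards the submitting shell's environment.  The core count, module name, and program name are assumptions that vary by cluster; consult the cluster's documentation for the actual values.

<pre>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32   # all cores on one node (core count is cluster-specific)
#SBATCH --mem=0                # request all of the memory on the node
#SBATCH --export=NONE          # do not inherit the submitting shell's environment
#SBATCH --time=01:00:00

# Because of --export=NONE, anything normally set up in the login shell
# (such as loaded modules) must be set up here inside the script.
module load gcc            # hypothetical module, for illustration
srun ./my_program          # placeholder executable
</pre>

Submitting this script with <tt>sbatch</tt> gives a reproducible environment regardless of what was loaded in the shell at submission time.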