Frequently Asked Questions/fr

Note that in some cases this message appears even when Slurm has accepted the job.
== Why are my jobs taking so long to start? ==
You can see the reason why your jobs are in the <tt>PD</tt> (pending) state in the <tt>(REASON)</tt> column, which will typically show either <tt>Resources</tt> or <tt>Priority</tt>. In the former case, the cluster is simply very busy and you will have to be patient, or perhaps consider submitting a job that asks for fewer resources (e.g. nodes, memory, time). In the latter case, your job is waiting to start because of its lower priority. This happens when you and other members of your research group have recently consumed more than your fair share of the cluster's resources, something you can track using the <tt>sshare</tt> command as explained in [[Job scheduling policies]].

The <tt>LevelFS</tt> column gives you information about your over- or under-consumption of cluster resources: when <tt>LevelFS</tt> is greater than one you are consuming less than your fair share, while a value below one means you are consuming more. The closer <tt>LevelFS</tt> gets to zero, the more you are over-consuming resources and the more your jobs' priority is reduced. There is a memory effect to this calculation, so the scheduler gradually forgets any over- or under-consumption of resources from months past. Finally, note that this scheduler priority is specific to each cluster: your <tt>LevelFS</tt> on one cluster is independent of its value on another.
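The "memory effect" mentioned above can be sketched as an exponential half-life decay: each day of past usage is discounted by how long ago it occurred, so recent over-consumption weighs heavily while usage from months past is nearly forgotten. The 7-day half-life below is an assumed illustration only; the actual decay rate is a cluster-side Slurm configuration parameter.

```python
# Sketch of fair-share usage decay, assuming an exponential half-life.
# The 7-day half-life is a hypothetical value for illustration; the real
# value is set by the cluster's scheduler configuration.

def decayed_usage(usage_by_age_days, half_life_days=7.0):
    """Sum past usage, weighting each record by 0.5 ** (age / half_life).

    usage_by_age_days: iterable of (age_in_days, usage) pairs,
    e.g. CPU-hours charged to your group on that day.
    """
    return sum(usage * 0.5 ** (age / half_life_days)
               for age, usage in usage_by_age_days)

# 100 CPU-hours used today still counts at (almost) full weight:
recent = decayed_usage([(0, 100.0)])
# The same 100 CPU-hours used 70 days ago (10 half-lives) has decayed
# to about 0.1 CPU-hours, so it barely affects priority any more:
old = decayed_usage([(70, 100.0)])
```

This is why waiting (or simply running fewer jobs for a while) gradually restores your group's <tt>LevelFS</tt> toward one.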