The answer to the question "can I weight a node differently per partition" appears to be no, based on the thread below, but I would like to request this feature: https://groups.google.com/forum/#!searchin/slurm-devel/node$20weight/slurm-devel/DcbuJ3Cb1MM/xtREhtakJ90J

As an extreme example: if every node in the cluster is in both partitions A and B, I want the nodes to be sorted differently depending on which partition the job is submitted to.
I can confirm this is not possible with the current code.

On August 3, 2015 6:01:29 PM PDT, bugs@schedmd.com wrote:
> http://bugs.schedmd.com/show_bug.cgi?id=1843
>
> Site: TotalCAE
> Bug ID: 1843
> Summary: Would like to have node weight take into account per partition
> Product: Slurm
> Version: 14.11.8
> Hardware: Linux
> OS: Linux
> Status: UNCONFIRMED
> Severity: 5 - Enhancement
> Component: Scheduling
> Assignee: brian@schedmd.com
> Reporter: rod@totalcae.com
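For context, a minimal slurm.conf sketch (hypothetical node and partition names) showing why this is currently impossible: Weight is an attribute of the NodeName line, not of the partition, so every partition containing a node sees the same ordering.

```
# slurm.conf fragment (hypothetical names). Lower Weight = allocated first.
# Weight is a per-node attribute, so the ordering below applies identically
# to every partition that contains these nodes.
NodeName=node[01-04] Weight=10 CPUs=16
NodeName=gpu[01-04]  Weight=20 CPUs=16 Gres=gpu:2

PartitionName=A Nodes=node[01-04],gpu[01-04] Default=YES
PartitionName=B Nodes=node[01-04],gpu[01-04]
# Desired but unsupported: a per-partition weight, so that partition B
# could order gpu[01-04] ahead of node[01-04].
```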
*** Ticket 1844 has been marked as a duplicate of this ticket. ***
We would also be interested in such a feature. In our case, all our nodes are in a single cluster with two partitions: one for the nodes that have GPUs, and another containing all nodes, for running CPU jobs. The CPU partition limits the number of cores that can be used per node, in order to leave some cores free for GPU jobs on the same node.

Our preferred node ordering in the two partitions is exactly opposite: we would like to prioritize the nodes with the best GPUs for GPU jobs in the GPU partition, and conversely prioritize the nodes with no GPUs or older GPUs for CPU jobs in the CPU partition.

For now, we give the lowest weight (highest priority) to the nodes with no GPU, so that they get picked first; but once they are full, the nodes with the best GPUs get picked first even for CPU-only jobs, because that is the ordering we need in the GPU partition. Is there another way to achieve the same goal?
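The setup described above might look like the following slurm.conf sketch (hypothetical node names and counts). Non-GPU nodes get the lowest weight so they fill first, but that single global ordering also governs the GPU partition, where the opposite order would be wanted.

```
# slurm.conf sketch (hypothetical names) of the workaround described above.
# Lower Weight = allocated first; the ordering is global, not per partition.
NodeName=cpu[01-10]    Weight=1              CPUs=16              # no GPU: filled first everywhere
NodeName=oldgpu[01-04] Weight=10 Gres=gpu:2  CPUs=16              # older GPUs
NodeName=newgpu[01-04] Weight=20 Gres=gpu:4  CPUs=16              # best GPUs: filled last everywhere

# CPU partition spans all nodes but caps cores per node, leaving
# some cores free for GPU jobs on the shared nodes.
PartitionName=cpu Nodes=cpu[01-10],oldgpu[01-04],newgpu[01-04] MaxCPUsPerNode=12
PartitionName=gpu Nodes=oldgpu[01-04],newgpu[01-04]
# Desired but unsupported: in the gpu partition, newgpu[01-04] should have
# the lowest effective weight so the best GPUs are picked first.
```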