Ticket 8173 - Specify partitions in node ranges
Summary: Specify partitions in node ranges
Status: RESOLVED FIXED
Alias: None
Product: Slurm
Classification: Unclassified
Component: Configuration
Version: 19.05.3
Hardware: Linux
OS: Linux
Severity: 6 - No support contract
Assignee: Jacob Jenson
QA Contact:
URL:
Depends on:
Blocks:
 
Reported: 2019-12-03 09:32 MST by Gordon Dexter
Modified: 2020-02-25 14:27 MST
CC List: 0 users

See Also:
Site: -Other-
Slinky Site: ---
Alineos Sites: ---
Atos/Eviden Sites: ---
Confidential Site: ---
Coreweave sites: ---
Cray Sites: ---
DS9 clusters: ---
Google sites: ---
HPCnow Sites: ---
HPE Sites: ---
IBM Sites: ---
NOAA Site: ---
NoveTech Sites: ---
Nvidia HWinf-CS Sites: ---
OCF Sites: ---
Recursion Pharma Sites: ---
SFW Sites: ---
SNIC sites: ---
Tzag Elita Sites: ---
Linux Distro: ---
Machine Name:
CLE Version:
Version Fixed: 20.02.0
Target Release: ---
DevPrio: ---
Emory-Cloud Sites: ---


Attachments

Description Gordon Dexter 2019-12-03 09:32:00 MST
SGE had the ability to include a queue in another queue.  It would greatly simplify configuration and usage if Slurm had a similar ability to use partition names in node ranges.

For example, if the 'tesla' partition had nodes tesla[01-10] and the 'gtx' partition had nodes gtx[01-30], then you could define a third partition called allgpus with the nodelist '@tesla,@gtx', or something along those lines.

Similarly, I could update nodes like so:

scontrol update node @gtx state=drain reason="GPU upgrade"

It would also be nice to be able to exclude nodes from a nodelist, e.g. Nodes=ALL,!specialnode[01-05] to specify all nodes except a few special ones, or Nodes=ALL,!@login to specify all nodes not in the login partition.
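To make the request concrete, here is a hypothetical slurm.conf fragment using the proposed syntax. None of this is valid in 19.05; the @partition references and the ! exclusion are exactly the additions being requested, and the node names are just illustrative:

    NodeName=tesla[01-10] State=UNKNOWN
    NodeName=gtx[01-30] State=UNKNOWN
    NodeName=specialnode[01-05] State=UNKNOWN
    PartitionName=tesla Nodes=tesla[01-10]
    PartitionName=gtx Nodes=gtx[01-30]
    # Proposed: reference the nodes of existing partitions by name
    PartitionName=allgpus Nodes=@tesla,@gtx
    # Proposed: exclude specific nodes from a node list
    PartitionName=batch Nodes=ALL,!specialnode[01-05]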
Comment 1 Gordon Dexter 2020-02-25 14:27:53 MST
I see the 20.02 release added the NodeSet config option, which covers some of what this ticket asks for.  It's not everything, but the idea of using ! to exclude nodes is also mentioned in the https://slurm.schedmd.com/SLUG19/Slurm_20.02_and_Beyond.pdf presentation, so I assume that's in the works too.  Closing.
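For reference, a minimal sketch of how the 20.02 NodeSet option can handle the "combined partition" part of this request. The node and set names are just illustrative; see the slurm.conf man page for the authoritative syntax:

    NodeName=tesla[01-10] State=UNKNOWN
    NodeName=gtx[01-30] State=UNKNOWN
    # Define a node set spanning both groups of GPU nodes
    NodeSet=gpunodes Nodes=tesla[01-10],gtx[01-30]
    # Partitions can then reference the node set by name
    PartitionName=tesla Nodes=tesla[01-10]
    PartitionName=gtx Nodes=gtx[01-30]
    PartitionName=allgpus Nodes=gpunodes

As I understand it, a NodeSet can also be built from a node feature (Feature=...) instead of an explicit node list.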