| Summary: | Make swap a selectable resource | | |
|---|---|---|---|
| Product: | Slurm | Reporter: | John Hanks <john.hanks> |
| Component: | Limits | Assignee: | Moe Jette <jette> |
| Status: | RESOLVED INFOGIVEN | QA Contact: | |
| Severity: | 5 - Enhancement | | |
| Priority: | --- | | |
| Version: | 15.08.2 | | |
| Hardware: | Linux | | |
| OS: | Linux | | |
| Site: | KAUST | Slinky Site: | --- |
| Alineos Sites: | --- | Atos/Eviden Sites: | --- |
| Confidential Site: | --- | Coreweave sites: | --- |
| Cray Sites: | --- | DS9 clusters: | --- |
| Google sites: | --- | HPCnow Sites: | --- |
| HPE Sites: | --- | IBM Sites: | --- |
| NOAA Site: | --- | NoveTech Sites: | --- |
| Nvidia HWinf-CS Sites: | --- | OCF Sites: | --- |
| Recursion Pharma Sites: | --- | SFW Sites: | --- |
| SNIC sites: | --- | Tzag Elita Sites: | --- |
| Linux Distro: | --- | Machine Name: | |
| CLE Version: | | Version Fixed: | |
| Target Release: | --- | DevPrio: | --- |
| Emory-Cloud Sites: | --- | | |
Description

John Hanks 2015-11-14 16:08:19 MST

Hi, as you know Slurm currently does not have this feature. Development will evaluate this and get back to you.

David

Perhaps GRES (Generic RESources) could be used for this purpose. You can define a GRES count per node, and jobs requesting it would consume those resources. Node configurations would look something like this:

    NodeName=nid[00000-01000] Gres=swap:1g ...

gres.conf would include:

    Name=swap Count=1g

Job requests would look something like this:

    sbatch --gres=swap:100m ...

A job submit plugin could set default swap values if desired. More information about GRES is available here: http://slurm.schedmd.com/gres.html

Let me know if this addresses your needs.

What we wound up doing was similar to your suggestion, except we applied it to zram. On the nodes where we allow this, we added a "zram" feature, then wrote a submit plugin which checks for the feature and, if found, sets --mem=0 and --exclusive. Prolog and epilog scripts then enable zram for the job and disable it once the job is complete. Now we can simply lower the available disk-based swap to some general-purpose amount, and people running large jobs can activate zram as needed.

My original idea was that allowing selectable swap amounts would let jobs with different swap limits run on the same node, but upon further pondering I realized that almost all jobs either want all the swap they can get or no swap at all. We still let all jobs that set a memory amount go over it by 10% into swap, and that seems to be a fairly good boundary.

I think you can close this request: yes, it would be neat, but it is really unnecessary. If it turns out we do want to allow selectable swap in the future, I will probably follow the same approach and have a prolog/epilog add and remove a swap zvol for the duration of the job.

Thank you,
jbh

Resolved using GRES.
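The zram prolog/epilog approach described in the thread could be sketched roughly as below. This is a minimal illustration, not the site's actual scripts: the device name (/dev/zram0), the default size, the function names, and the DRY_RUN guard are all assumptions added for the example.

```shell
#!/bin/sh
# Hypothetical sketch of a Slurm Prolog/Epilog pair that enables a zram
# swap device for the duration of a job. All names and sizes here are
# illustrative assumptions. run() honors DRY_RUN=1 so the control flow
# can be exercised (printed, not executed) without root privileges.

ZRAM_DEV=/dev/zram0          # assumed single zram device
ZRAM_SIZE=${ZRAM_SIZE:-16G}  # assumed size; tune per node

run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "+ $*"          # dry run: print the command instead
    else
        "$@"
    fi
}

# Prolog side: load zram, size the device, and enable it as swap.
zram_prolog() {
    run modprobe zram
    run sh -c "echo $ZRAM_SIZE > /sys/block/zram0/disksize"
    run mkswap "$ZRAM_DEV"
    run swapon "$ZRAM_DEV"
}

# Epilog side: disable swap and tear the device down after the job.
zram_epilog() {
    run swapoff "$ZRAM_DEV"
    run sh -c "echo 1 > /sys/block/zram0/reset"
    run rmmod zram
}
```

In real use, the prolog would presumably first check that the job actually requested the site's "zram" feature before touching the device.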
Hi, as you know Slurm currently does not have this feature. Development will evaluate this and get back to you. David Perhaps GRES (Generic RESources) could be used for this purpose. You can define a GRES count per node and jobs requesting it would consume those resources. Node configurations would look something like this: NodeName=nid[00000-01000] Gres=swap:1g .... gres.conf would include: Name=swap Count=1g Job requests would look something like this: sbatch --gres=swap:100m ... A job submit plugin could set default swap values if desired. More information about GRES is available here: http://slurm.schedmd.com/gres.html Let me know if this addresses your needs. What we wound up doing was similar to your suggestion, except we applied it to zram. On the nodes where we allow this we added a "zram" feature, then wrote a submit plugin which checks for the feature and if found sets --mem=0 and --exclusive. Prolog and epilog scripts then enable zram for the job and disable it once the job is complete. Now we can simply lower the available disk based swap to some amount that is general purpose and people running large jobs can activate zram as-needed. My original idea was that allowing selectable swap amounts would allow jobs to run on the same node with different swap limits but upon further pondering I realized that almost all jobs either want all the swap they can get or no swap at all. Will still let all jobs that set a memory amount go over it by 10% into swap and that seems to be a fairly good boundary. I think you can close this request as yeah it would be neat but it's really unnecessary. If it turns out we do want to allow selectable swap in the future I will probably follow the same approach and have a prolog/epilog add and remove a swap zvol for the duration of the job. Thank you, jbh Resolved using GRES. |