| Summary: | Deny job submission if requested memory exceeds MemSpecLimit | ||
|---|---|---|---|
| Product: | Slurm | Reporter: | Akmal Madzlan <akmalm> |
| Component: | slurmctld | Assignee: | David Bigagli <david> |
| Status: | RESOLVED INFOGIVEN | QA Contact: | |
| Severity: | 5 - Enhancement | ||
| Priority: | --- | CC: | brian, da |
| Version: | 14.11.9 | ||
| Hardware: | Linux | ||
| OS: | Linux | ||
| Site: | DownUnder GeoSolutions | Alineos Sites: | --- |
| Atos/Eviden Sites: | --- | Confidential Site: | --- |
| Coreweave sites: | --- | Cray Sites: | --- |
| DS9 clusters: | --- | HPCnow Sites: | --- |
| HPE Sites: | --- | IBM Sites: | --- |
| NOAA Site: | --- | OCF Sites: | --- |
| Recursion Pharma Sites: | --- | SFW Sites: | --- |
| SNIC sites: | --- | Linux Distro: | --- |
| Machine Name: | CLE Version: | ||
| Version Fixed: | Target Release: | --- | |
| DevPrio: | --- | Emory-Cloud Sites: | --- |
|
Description
Akmal Madzlan
2015-10-04 21:30:44 MDT

David Bigagli:
Hi, do you really mean MemSpecLimit? That is a limit for the compute node daemons, slurmd and slurmstepd.

Akmal Madzlan:
Yeah, I'm talking about MemSpecLimit. When MemSpecLimit=8000, a job that uses more than 8000 MB will be killed, right? So I don't see the point of allowing the submission of a job that requests more memory than that.
David Bigagli:
This is the documentation for the parameter. It is not a job limit:

    MemSpecLimit
        Limit on combined real memory allocation for compute node
        daemons (slurmd, slurmstepd), in megabytes.

More information can be found here: http://slurm.schedmd.com/core_spec.html
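To illustrate the distinction David is making, MemSpecLimit is a per-node setting in slurm.conf that reserves memory for the daemons themselves; it does not cap what a job may request. A minimal sketch (node names and values are illustrative, not from this site's configuration):

```shell
# slurm.conf fragment (illustrative values).
# Reserve 8000 MB of each node's RAM for slurmd/slurmstepd; this memory
# is withheld from job allocations, but it is not a per-job limit and
# does not cause submissions requesting more memory to be rejected.
NodeName=node[01-04] RealMemory=128000 MemSpecLimit=8000
```

Note that enforcing the reservation typically depends on cgroup-based memory constraint being enabled; see the core_spec.html page linked above.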
Akmal Madzlan:
Right. So can I make this a feature request?

Moe:
(In reply to Akmal Madzlan from comment #4)
> Right.
> So can I make this a feature request?

It is already in Slurm v15.08. You will need to use the MaxTRESPerJob parameter for memory. More information here:
http://slurm.schedmd.com/SLUG15/TRES.pdf
http://slurm.schedmd.com/resource_limits.html
http://slurm.schedmd.com/sacctmgr.html

Akmal Madzlan:
Alright. Thanks Moe
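The per-job memory cap described above can be sketched with sacctmgr as a QOS limit; the QOS name "normal" and the 8000 MB value are illustrative assumptions, not taken from this site's setup:

```shell
# Cap the total memory a single job may request under the "normal" QOS
# (QOS name and value are illustrative; mem is in megabytes by default).
sacctmgr modify qos normal set MaxTRESPerJob=mem=8000

# Inspect the resulting limit:
sacctmgr show qos normal format=Name,MaxTRES
```

With this in place, a submission requesting more than 8000 MB under that QOS is rejected at submit time, which is the behavior the reporter asked for.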