| Summary: | salloc -n behavior | | |
|---|---|---|---|
| Product: | Slurm | Reporter: | Don Lipari <lipari1> |
| Component: | Bluegene select plugin | Assignee: | Danny Auble <da> |
| Status: | RESOLVED FIXED | QA Contact: | |
| Severity: | 3 - Medium Impact | | |
| Priority: | --- | | |
| Version: | 2.4.x | | |
| Hardware: | IBM BlueGene | | |
| OS: | Linux | | |
| Site: | LLNL | | |
|
Description
Don Lipari
2012-12-18 10:13:02 MST
You shouldn't get 64 nodes there. I'll see what I can find. I am guessing this was always the case with salloc and is not directly related to anything we did for the two bugs you mention here.

This is fixed in 2.5; the code it was referencing only applied to an L or P system. sbatch was affected in the same way. If you want to backport it to 2.4, the patch is here:
https://github.com/SchedMD/slurm/commit/3e89da1164312ab8a0d049cb70931347942340fa