| Summary: | SLURM_TASKS_PER_NODE incorrect with --exclusive --cpus-per-task underfit | | |
|---|---|---|---|
| Product: | Slurm | Reporter: | Dylan Simon <dsimon> |
| Component: | Scheduling | Assignee: | Carlos Tripiana Montes <tripiana> |
| Status: | RESOLVED FIXED | QA Contact: | |
| Severity: | 4 - Minor Issue | | |
| Priority: | --- | CC: | cinek, jdamicis, lgarrison, tripiana |
| Version: | 21.08.8 | | |
| Hardware: | Linux | | |
| OS: | Linux | | |
| See Also: | https://bugs.schedmd.com/show_bug.cgi?id=10620 | | |
| Site: | Simons Foundation & Flatiron Institute | Slinky Site: | --- |
| Alineos Sites: | --- | Atos/Eviden Sites: | --- |
| Confidential Site: | --- | Coreweave sites: | --- |
| Cray Sites: | --- | DS9 clusters: | --- |
| Google sites: | --- | HPCnow Sites: | --- |
| HPE Sites: | --- | IBM Sites: | --- |
| NOAA Site: | --- | NoveTech Sites: | --- |
| Nvidia HWinf-CS Sites: | --- | OCF Sites: | --- |
| Recursion Pharma Sites: | --- | SFW Sites: | --- |
| SNIC sites: | --- | Tzag Elita Sites: | --- |
| Linux Distro: | --- | Machine Name: | |
| CLE Version: | Version Fixed: | 23.02.0pre1 | |
| Target Release: | --- | DevPrio: | --- |
| Emory-Cloud Sites: | --- | | |
| Attachments: | slurm.conf | | |
Description
Dylan Simon
2022-09-30 07:22:14 MDT

Comment (Carlos Tripiana Montes):
Hi,

I'll try to reproduce your issue. Please provide your slurm.conf if possible.

Thanks, Carlos.

Comment (Dylan Simon):
Created attachment 27103 [details]
slurm.conf

Comment (Carlos Tripiana Montes):
Hi,

I have been able to reproduce the issue in master, and we are investigating why this extra task is set *only* in the environment variable. The steps aren't affected by this and properly run the right number of tasks per node. I'll let you know once this is fixed.

Thanks for reporting, Carlos.

Comment (Carlos Tripiana Montes):
Hi Dylan,

This has been fixed in the 22.05 and master branches, commits:

848142a418 Fix salloc SLURM_NTASKS_PER_NODE output env variable when -n not given
7c86732028 Fix sbatch SLURM_NTASKS_PER_NODE output env variable when -n not given
355a3df278 Add NEWS for the previous two commits

I'm going to close the bug as fixed. Feel free to reopen it if you find any related issue.

Regards, Carlos.
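For context, the "underfit" case in the summary arises when --cpus-per-task does not evenly divide a node's CPU count under --exclusive: only as many whole tasks as fit on the node's CPUs can run, so the expected per-node task count is an integer division. A minimal sketch of that calculation (the 128-CPU node is an assumed example, not a value from this report):

```python
def tasks_per_node(cpus_on_node: int, cpus_per_task: int) -> int:
    """Expected per-node task count with --exclusive and no -n given:
    the number of whole tasks that fit on the node's CPUs."""
    return cpus_on_node // cpus_per_task

# 128 CPUs with --cpus-per-task=3 leaves 2 CPUs unused (underfit):
# 42 whole tasks fit. The bug reported here was that the output
# environment variable showed one extra task in this case, while the
# steps themselves still launched the correct number of tasks.
print(tasks_per_node(128, 3))  # 42
```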