| Summary: | Preemption leading to drained nodes | ||
|---|---|---|---|
| Product: | Slurm | Reporter: | Adam <adam.munro> |
| Component: | Scheduling | Assignee: | Director of Support <support> |
| Status: | RESOLVED INVALID | QA Contact: | |
| Severity: | 3 - Medium Impact | ||
| Priority: | --- | ||
| Version: | 20.02.6 | ||
| Hardware: | Linux | ||
| OS: | Linux | ||
| Site: | Yale | Alineos Sites: | --- |
| Atos/Eviden Sites: | --- | Confidential Site: | --- |
| Coreweave sites: | --- | Cray Sites: | --- |
| DS9 clusters: | --- | HPCnow Sites: | --- |
| HPE Sites: | --- | IBM Sites: | --- |
| NOAA Site: | --- | OCF Sites: | --- |
| Recursion Pharma Sites: | --- | SFW Sites: | --- |
| SNIC sites: | --- | Linux Distro: | --- |
| Machine Name: | CLE Version: | ||
| Version Fixed: | Target Release: | --- | |
| DevPrio: | --- | Emory-Cloud Sites: | --- |
|
Description
Adam
2021-06-03 10:34:05 MDT
Hi, this bug can be closed. We are fairly certain this situation is occurring because a storage system isn't responding quickly enough; there is nothing Slurm can do about that except increase the unkillable timeout.

Thank you,
Adam

Resolving out.
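
The timeout mentioned above is set in slurm.conf. A minimal sketch, assuming a standard slurmctld/slurmd setup; the 180-second value is illustrative, not a recommendation from this report:

```
# slurm.conf excerpt
# Give job steps on slow storage more time to exit cleanly before
# slurmd marks them unkillable and the node is drained.
UnkillableStepTimeout=180
```

After changing the value, the daemons need to pick up the new configuration (e.g. via `scontrol reconfigure` or a restart).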