| Summary: | scontrol show job <jobid> can now accept a comma-separated list of job IDs | ||
|---|---|---|---|
| Product: | Slurm | Reporter: | Tazio Ceri <tazio.ceri> |
| Component: | User Commands | Assignee: | Tim Wickberg <tim> |
| Status: | OPEN --- | QA Contact: | |
| Severity: | C - Contributions | ||
| Priority: | --- | CC: | taras.shapovalov |
| Version: | 20.02.3 | ||
| Hardware: | Linux | ||
| OS: | Linux | ||
| Site: | -Other- | Alineos Sites: | --- |
| Atos/Eviden Sites: | --- | Confidential Site: | --- |
| Coreweave sites: | --- | Cray Sites: | --- |
| DS9 clusters: | --- | HPCnow Sites: | --- |
| HPE Sites: | --- | IBM Sites: | --- |
| NOAA Site: | --- | OCF Sites: | --- |
| Recursion Pharma Sites: | --- | SFW Sites: | --- |
| SNIC sites: | --- | Linux Distro: | --- |
| Machine Name: | CLE Version: | ||
| Version Fixed: | Target Release: | --- | |
| DevPrio: | --- | Emory-Cloud Sites: | --- |
| Attachments: | Patch against slurm20 but it applies also to slurm19; Patch against slurm20 | ||
Hi Tazio - While I'm not necessarily against the idea of supporting a comma-separated list of job IDs here, the implementation you've chosen still issues successive RPCs to load each and every job record from the slurmctld. If we were going to do this, I'd want to see further changes to use slurm_load_jobs() instead, and handle the filtering client-side. That'd be considerably faster than the approach proposed here. I'm happy to review that if you'd like to submit such a patch; otherwise I'm marking this as resolved/wontfix at this time.

I'll also take the time to note that the output from 'scontrol show job' is not necessarily intended for downstream consumption, as it does change from release to release, and we'd recommend using 'squeue' with specific formatting options to obtain such data instead.

- Tim

Hi Tim! We want to build that patch, and we are starting to work on it. Just to be sure that we are on the same page: would you accept a patch that uses slurm_load_jobs() inside scontrol_load_job() and filters the IDs there, or would you prefer an entirely new function, to keep the cyclomatic complexity down?

(In reply to Tazio Ceri from comment #2)
> Hi Tim!
>
> We want to build that patch, and we are starting to work on it.
>
> Just to be sure that we are on the same page:
> would you accept a patch that uses slurm_load_jobs() inside scontrol_load_job()
> and filters the IDs there, or would you prefer an entirely new function, to
> keep the cyclomatic complexity down?

I think it can happen within the existing scontrol_print_job() function without too much difficulty.

Created attachment 14664 [details]
Patch against slurm20
Hi Tim, when do you plan to try the new patch? Best regards, Taras

What is the status of the patch?
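As an aside, the squeue route Tim recommends for downstream consumption already accepts a comma-separated job-ID list natively. A minimal sketch (the job IDs and format fields chosen here are illustrative):

```shell
# -j/--jobs takes a comma-separated list of job IDs, and -o/--format
# gives stable, scriptable output: %i = job id, %j = name, %T = state.
squeue -j 12,42 -o "%i %j %T"
```

This avoids parsing 'scontrol show job' output, whose format may change between releases.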
Created attachment 14256 [details]
Patch against slurm20 but it applies also to slurm19

This patch improves the performance of our software when retrieving information about jobs when there is a huge number of them, because otherwise we would have to call scontrol once per job.