| Summary: | Expose node-count as a % filename pattern | | |
|---|---|---|---|
| Product: | Slurm | Reporter: | Carl Ponder <CPonder> |
| Component: | Accounting | Assignee: | Jacob Jenson <jacob> |
| Status: | RESOLVED INVALID | QA Contact: | |
| Severity: | 6 - No support contract | ||
| Priority: | --- | ||
| Version: | 20.02.x | ||
| Hardware: | Linux | ||
| OS: | Linux | ||
| Site: | -Other- | | |
I'm running a scaling study with a sequence of jobs that vary by node count, and I'd like to include the node count in the names of the log files I generate. For example, `#SBATCH -o %j.log.medium.%N_nodes` would generate log files with names like `8635.log.4_nodes`, `8359.log.8_nodes`, etc. You could argue that studies like this could vary any parameter, and that there's no way to expose them all, but the node count is set on the `sbatch` command line (or in the batch-script header) and should be available at job startup just like the `%j` job ID.
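A submit-side workaround is possible today, sketched below: since the node count is chosen at submission time, the submitting shell can expand it into the `--output` pattern before Slurm ever sees it, while Slurm still expands `%j` itself. The script name `job.sh` and the counts 4/8/16 are placeholders, not anything from this report.

```shell
# Loop over the node counts in the scaling study and bake each count
# into the --output filename pattern; ${n} is expanded by the shell
# at submit time, %j is expanded by Slurm at job start.
# The leading "echo" makes this a dry run that just prints the
# commands -- remove it to actually submit.
for n in 4 8 16; do
    echo sbatch --nodes="$n" --output="%j.log.${n}_nodes" job.sh
done
```

This sidesteps the feature request entirely for the batch-submission case, at the cost of moving the `-o` setting out of the `#SBATCH` header and into the wrapper loop.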