Ticket 6626

Summary: Can sacct display ncpus per node in multi-node job?
Product: Slurm    Reporter: James Desjardins <jdesjard>
Component: Accounting    Assignee: Director of Support <support>
Status: RESOLVED DUPLICATE    QA Contact:
Severity: 5 - Enhancement
Priority: ---
Version: 17.11.7
Hardware: Linux
OS: Linux
Site: SharcNet

Description James Desjardins 2019-03-04 09:10:07 MST
This is largely (if not entirely) a duplicate of ticket 2047 (unassigned).

For accurate accounting of usage on our heterogeneous system, I need to know how many cores were allocated to each node in multi-node jobs. Currently, sacct reports only the aggregate NCPUS and the NodeList; there is no way (that I can find) to tell how many CPUs were assigned to each of the allocated nodes.

Is there a way to do this using sacct?

If not, is there a plan to add this information to the sacct output?

For example, how many cpus ran on gra1047 in the following example:

$ sacct -a -j 11318385 -o ncpus,nnodes,state,nodelist%48
     NCPUS   NNodes      State                                         NodeList 
---------- -------- ---------- ------------------------------------------------ 
        16        8  COMPLETED         gra[99,101,108,1047,1056,1058,1096,1123] 
         2        1  COMPLETED                                            gra99 
        16        8  COMPLETED         gra[99,101,108,1047,1056,1058,1096,1123] 
        16        8  COMPLETED         gra[99,101,108,1047,1056,1058,1096,1123] 
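As a partial workaround for jobs that are still running, `scontrol show job -d` prints a `Nodes=... CPU_IDs=...` line per node, from which a per-node CPU count can be derived. A minimal Python sketch of that parsing is below; the SAMPLE text is illustrative (not output from the job above), and it assumes each `Nodes=` entry names a single node rather than a bracketed range:

```python
import re

# Illustrative sample of `scontrol show job -d` detail lines (hypothetical
# job; field layout follows Slurm's detailed job display).
SAMPLE = """
JobId=11318385 JobName=example
   NumNodes=2 NumCPUs=4
     Nodes=gra99 CPU_IDs=0-1 Mem=4096
     Nodes=gra101 CPU_IDs=0,2 Mem=4096
"""

def cpus_per_node(text):
    """Map node name -> CPU count from `Nodes=... CPU_IDs=...` lines."""
    counts = {}
    for node, ids in re.findall(r"Nodes=(\S+) CPU_IDs=(\S+)", text):
        n = 0
        for part in ids.split(","):          # CPU_IDs is a comma list
            if "-" in part:                  # of single IDs and ranges
                lo, hi = map(int, part.split("-"))
                n += hi - lo + 1
            else:
                n += 1
        counts[node] = n
    return counts

print(cpus_per_node(SAMPLE))  # → {'gra99': 2, 'gra101': 2}
```

This does not help for completed jobs, which is why the request here is for sacct itself to expose per-node allocation.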

Thank you for your attention!

James
Comment 2 Jason Booth 2019-03-04 14:40:16 MST
Hi James, tickets 2047 and 6406 are indeed duplicates of this issue. We are tracking this request through ticket 2047, but there are no plans to implement it currently. If you would like to discuss sponsoring development of this feature, I can have Jess speak with you about it.

*** This ticket has been marked as a duplicate of ticket 2047 ***