Recently I've seen that if I just ask for a JobID the output is blank, but if I ask for a time range it gives me an answer:

[root@holy-slurm02 log]# sacct -j 6216840
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------

[root@holy-slurm02 log]# sacct -j 6216840 --start=2019-04-01 --end=2019-04-10
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
6216840_8    modelA1_t+ gpu_reque+ hoffman_l+          1  COMPLETED      0:0
6216840_8.b+      batch            hoffman_l+          1  COMPLETED      0:0
6216840_8.e+     extern            hoffman_l+          1  COMPLETED      0:0
6216840_1    modelA1_t+ gpu_reque+ hoffman_l+          1  COMPLETED      0:0
6216840_1.b+      batch            hoffman_l+          1  COMPLETED      0:0
6216840_1.e+     extern            hoffman_l+          1  COMPLETED      0:0
6216840_2    modelA1_t+ gpu_reque+ hoffman_l+          1  COMPLETED      0:0
6216840_2.b+      batch            hoffman_l+          1  COMPLETED      0:0
6216840_2.e+     extern            hoffman_l+          1  COMPLETED      0:0
6216840_3    modelA1_t+ gpu_reque+ hoffman_l+          1  COMPLETED      0:0
6216840_3.b+      batch            hoffman_l+          1  COMPLETED      0:0
6216840_3.e+     extern            hoffman_l+          1  COMPLETED      0:0
6216840_4    modelA1_t+ gpu_reque+ hoffman_l+          1  COMPLETED      0:0
6216840_4.b+      batch            hoffman_l+          1  COMPLETED      0:0
6216840_4.e+     extern            hoffman_l+          1  COMPLETED      0:0
6216840_5    modelA1_t+ gpu_reque+ hoffman_l+          1  COMPLETED      0:0
6216840_5.b+      batch            hoffman_l+          1  COMPLETED      0:0
6216840_5.e+     extern            hoffman_l+          1  COMPLETED      0:0
6216840_6    modelA1_t+ gpu_reque+ hoffman_l+          1  COMPLETED      0:0
6216840_6.b+      batch            hoffman_l+          1  COMPLETED      0:0
6216840_6.e+     extern            hoffman_l+          1  COMPLETED      0:0
6216840_7    modelA1_t+ gpu_reque+ hoffman_l+          1  COMPLETED      0:0
6216840_7.b+      batch            hoffman_l+          1  COMPLETED      0:0
6216840_7.e+     extern            hoffman_l+          1  COMPLETED      0:0

It used to be the case that if you put in a JobID it would show you the job regardless of age. Was there a change in the sacct settings? Can it be changed such that if you ask for a specific job it gives you info without giving the time range?

Thanks.

-Paul Edmon-
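For anyone hitting the same symptom before a fix is in place, a workaround sketch (assuming the issue is only that the job falls outside sacct's default time window, which normally covers just the current day) is to pass an explicit start time far enough in the past; the date below is only an example:

# Workaround sketch: widen the accounting window so an older job ID is found.
# -S/--starttime is a standard sacct option; pick any date before the job ran.
sacct -j 6216840 --starttime=2019-01-01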
Hi Paul,

This bug is a duplicate of bug 6755, a regression introduced in 18.08.6. It has already been fixed in the slurm-18.08 branch, and we are accelerating the release of 18.08.7 specifically for it.

Sorry for the inconvenience,

Albert

*** This ticket has been marked as a duplicate of ticket 6755 ***
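(Once the 18.08.7 packages are rolled out, the installed accounting tool version can be confirmed with the standard version flag, e.g.:

sacct --version
)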