Ticket 5933

Summary: High Incidence of "Due to Failed" Jobs Getting Requeued
Product: Slurm
Reporter: Kevin Manalo <kmanalo>
Component: Configuration
Assignee: Jason Booth <jbooth>
Status: RESOLVED INFOGIVEN
Severity: 3 - Medium Impact
Version: 17.11.7
Hardware: Linux
OS: Linux
Site: Johns Hopkins University
Attachments: slurm conf
sdiag
compute 278 slurmd
slurmctld 5am hour

Description Kevin Manalo 2018-10-26 07:57:00 MDT
Created attachment 8113 [details]
slurm conf

Hi,

Recently we've been dealing with sporadic node (communication) failures.

Examples of the 'due to failure' requeues from the slurmctld log:

[2018-10-26T05:10:18.936] requeue job 30515750 due to failure of node compute0278
[2018-10-26T05:10:18.939] requeue job 30550481 due to failure of node compute0357
[2018-10-26T05:10:18.940] requeue job 30515766 due to failure of node compute0361
[2018-10-26T06:10:19.817] requeue job 30491064 due to failure of node compute0403
[2018-10-26T06:30:20.862] requeue job 30542666 due to failure of node compute0087
[2018-10-26T06:30:20.862] requeue job 30542697 due to failure of node compute0087
[2018-10-26T06:30:20.862] requeue job 30542741 due to failure of node compute0087
[2018-10-26T06:30:20.863] requeue job 30542871 due to failure of node compute0087
[2018-10-26T06:30:20.863] requeue job 30552111 due to failure of node compute0087
[2018-10-26T06:30:20.863] requeue job 30552112 due to failure of node compute0087
[2018-10-26T06:30:20.863] requeue job 30552113 due to failure of node compute0087
[2018-10-26T06:30:20.864] requeue job 30554423 due to failure of node compute0087
[2018-10-26T06:30:20.864] requeue job 30554424 due to failure of node compute0087
[2018-10-26T06:30:20.864] requeue job 30554425 due to failure of node compute0087
[2018-10-26T06:30:20.864] requeue job 30554426 due to failure of node compute0087


We are seeing quite a few of these hourly (1 to 20). Our average job volume is about 20k jobs per day over the past month (150k at peak). slurm.conf is attached.
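For what it's worth, here is roughly how I'm tallying the hourly requeue counts. This is a sketch run against an inline sample; in practice LOG would point at the real slurmctld log (the path depends on your SlurmctldLogFile setting):

```shell
# Tally "due to failure" requeues per hour from a slurmctld log.
# Demonstrated on a small inline sample; point LOG at the real log file.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[2018-10-26T05:10:18.936] requeue job 30515750 due to failure of node compute0278
[2018-10-26T05:10:18.939] requeue job 30550481 due to failure of node compute0357
[2018-10-26T06:30:20.862] requeue job 30542666 due to failure of node compute0087
EOF
# Strip each line down to its YYYY-MM-DDTHH prefix and count per hour.
grep 'due to failure of node' "$LOG" \
  | sed -E 's/^\[([0-9-]+T[0-9]{2}).*/\1/' \
  | sort | uniq -c
```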

We recently upgraded from CentOS 6 to CentOS 7, with no change in Slurm version (still 17.11.7). This was much less of a problem on the old setup, so I'm wondering what tuning or configuration we should be looking at.

Thanks,
Kevin
Comment 1 Jason Booth 2018-10-26 09:00:06 MDT
Hi Kevin,

 Can I have you also attach the slurmctld.log and the slurmd.log (from a failed node)? Please also attach the output of sdiag. 

 Do the jobs consume a lot of network or CPU on these failed compute nodes?

-Jason
Comment 2 Kevin Manalo 2018-10-26 09:15:33 MDT
Created attachment 8117 [details]
sdiag
Comment 3 Kevin Manalo 2018-10-26 09:16:39 MDT
Created attachment 8118 [details]
compute 278 slurmd
Comment 4 Kevin Manalo 2018-10-26 09:21:52 MDT
Also, just after filing the ticket I raised the debug levels, and there have been more requeues since (about 550 in one hour). The updated timeout values, including SlurmdTimeout, are below.

The slurmctld log is very chatty, so I'm extracting the matching window (between 5 and 6 AM) to submit here; the full file is about 2.9 GB.

Here are the updated timeout values:

scontrol show conf | grep -i timeout
BatchStartTimeout       = 10 sec
EioTimeout              = 60
GetEnvTimeout           = 2 sec
MessageTimeout          = 90 sec
PrologEpilogTimeout     = 65534
ResumeTimeout           = 60 sec
SlurmctldTimeout        = 900 sec
SlurmdTimeout           = 360 sec
SuspendTimeout          = 30 sec
TCPTimeout              = 2 sec
UnkillableStepTimeout   = 60 sec


Do these timeout values look reasonable?

As for network or CPU activity on the 'failed' nodes, I don't have much information gathered for you at this point.
Comment 5 Kevin Manalo 2018-10-26 09:34:19 MDT
Created attachment 8119 [details]
slurmctld 5am hour
Comment 6 Kevin Manalo 2018-10-26 09:41:57 MDT
Also, to give you a better feel for it: the requeues appear to be spread cluster-wide, not just concentrated on the handful of nodes shown earlier.

[2018-10-26T10:10:20.812] requeue job 30551878 due to failure of node compute0292
[2018-10-26T10:10:20.813] requeue job 30531319 due to failure of node compute0296
[2018-10-26T10:10:20.814] requeue job 30551862 due to failure of node compute0297
[2018-10-26T10:10:20.815] requeue job 30551852 due to failure of node compute0299
[2018-10-26T10:10:20.815] requeue job 30497718 due to failure of node compute0306
[2018-10-26T10:10:20.816] requeue job 30556439 due to failure of node compute0312
[2018-10-26T10:10:20.817] requeue job 30515749 due to failure of node compute0313
[2018-10-26T10:10:20.818] requeue job 30540050 due to failure of node compute0319
[2018-10-26T10:10:20.820] requeue job 30553609 due to failure of node compute0437
[2018-10-26T10:10:20.821] requeue job 30551954 due to failure of node compute0440
[2018-10-26T10:10:20.822] requeue job 30551958 due to failure of node compute0450
[2018-10-26T10:10:20.823] requeue job 30551959 due to failure of node compute0451
[2018-10-26T10:10:20.823] requeue job 30552014 due to failure of node compute0459
[2018-10-26T10:10:20.824] requeue job 30552016 due to failure of node compute0460
[2018-10-26T10:10:20.824] requeue job 30539984 due to failure of node compute0556
[2018-10-26T10:10:20.826] requeue job 30548102 due to failure of node compute0557
[2018-10-26T10:10:20.826] requeue job 30556427 due to failure of node compute0558
[2018-10-26T10:10:20.827] requeue job 30414158 due to failure of node compute0560
[2018-10-26T10:10:20.830] requeue job 30551865 due to failure of node compute0626
[2018-10-26T10:10:20.830] requeue job 30551867 due to failure of node compute0628
[2018-10-26T10:10:20.831] requeue job 30491065 due to failure of node compute0632
[2018-10-26T10:10:20.832] requeue job 30539549 due to failure of node compute0635
[2018-10-26T10:10:20.834] requeue job 30550247 due to failure of node compute0647
[2018-10-26T10:30:20.465] requeue job 30497618 due to failure of node compute0098
[2018-10-26T10:30:20.467] requeue job 30497783 due to failure of node compute0102
[2018-10-26T10:30:20.468] requeue job 30497903 due to failure of node compute0109
[2018-10-26T10:30:20.469] requeue job 30550478 due to failure of node compute0268
[2018-10-26T10:30:20.471] requeue job 30551980 due to failure of node compute0271
[2018-10-26T10:30:20.473] requeue job 30551886 due to failure of node compute0335
[2018-10-26T10:30:20.473] requeue job 30500926 due to failure of node compute0342
[2018-10-26T10:30:20.476] requeue job 30536129 due to failure of node compute0354
[2018-10-26T10:30:20.478] requeue job 30523517 due to failure of node compute0363
[2018-10-26T10:30:20.480] requeue job 30551949 due to failure of node compute0390
[2018-10-26T10:30:20.481] requeue job 30551888 due to failure of node compute0404
[2018-10-26T10:30:20.481] requeue job 30533176 due to failure of node compute0406
[2018-10-26T10:30:20.482] requeue job 30523518 due to failure of node compute0414
[2018-10-26T10:30:20.484] requeue job 30515737 due to failure of node compute0455
[2018-10-26T10:30:20.486] requeue job 30550479 due to failure of node compute0473
[2018-10-26T10:30:20.487] requeue job 30515690 due to failure of node compute0566
[2018-10-26T10:30:20.488] requeue job 30551890 due to failure of node compute0575
[2018-10-26T10:30:20.488] requeue job 30551866 due to failure of node compute0627
[2018-10-26T10:30:20.489] requeue job 30551868 due to failure of node compute0629
[2018-10-26T10:30:20.490] requeue job 30551869 due to failure of node compute0630
[2018-10-26T10:30:20.490] requeue job 30515694 due to failure of node compute0636
[2018-10-26T10:30:20.491] requeue job 30551894 due to failure of node compute0640
Comment 7 Jason Booth 2018-10-26 11:15:58 MDT
Hi Kevin,

The timeouts look fine to me. We do suggest reviewing the high-throughput tuning recommendations found here:
https://slurm.schedmd.com/high_throughput.html

There also seems to be an issue with name resolution, and I believe this is why your jobs are being requeued. Detailed logs are below.
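A few illustrative settings from that guide, purely as examples of the kind of knobs it covers (the values here are placeholders, not recommendations for your site):

```
# slurm.conf excerpts discussed in the high-throughput guide (example values):
SchedulerParameters=defer,max_rpc_cnt=150,batch_sched_delay=3
SlurmctldPort=6820-6824    # a port range spreads incoming RPC load
```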

[2018-10-25T13:07:47.120] Message aggregation disabled
[2018-10-25T13:07:47.125] error: _find_node_record(751): lookup failure for gpu073
[2018-10-25T13:07:47.125] error: _find_node_record(763): lookup failure for gpu073 alias gpu073
[2018-10-25T13:07:47.125] error: _find_node_record(751): lookup failure for gpudev001
[2018-10-25T13:07:47.125] error: _find_node_record(763): lookup failure for gpudev001 alias gpudev001
[2018-10-25T13:07:47.125] error: _find_node_record(751): lookup failure for gpudev002
[2018-10-25T13:07:47.125] error: _find_node_record(763): lookup failure for gpudev002 alias gpudev002
[2018-10-25T13:07:47.125] error: WARNING: Invalid hostnames in switch configuration: gpu073,gpudev[001-002]
[2018-10-25T13:07:47.126] TOPOLOGY: warning -- no switch can reach all nodes through its descendants.Do not use route/topology
[2018-10-26T05:10:18.947] [30515750.batch] error: *** JOB 30515750 ON compute0278 CANCELLED AT 2018-10-26T05:10:18 DUE TO NODE FAILURE, SEE SLURMCTLD LOG FOR DETAILS ***
[2018-10-26T05:10:19.974] [30515750.batch] sending REQUEST_COMPLETE_BATCH_SCRIPT, error:0 status 15
[2018-10-26T05:10:20.680] [30515750.batch] done with job
[2018-10-26T05:11:25.121] launch task 30493852.1 request from 2641.1009@172.16.10.238 (port 29374)

....

[2018-10-25T10:58:03.868] WARNING: MessageTimeout is too high for effective fault-tolerance
[2018-10-25T10:58:03.871] Message aggregation disabled
[2018-10-25T10:58:04.794] error: _find_node_record(751): lookup failure for gpu073
[2018-10-25T10:58:04.794] error: _find_node_record(763): lookup failure for gpu073 alias gpu073
[2018-10-25T10:58:04.794] error: _find_node_record(751): lookup failure for gpudev001
[2018-10-25T10:58:04.794] error: _find_node_record(763): lookup failure for gpudev001 alias gpudev001
[2018-10-25T10:58:04.794] error: _find_node_record(751): lookup failure for gpudev002
[2018-10-25T10:58:04.794] error: _find_node_record(763): lookup failure for gpudev002 alias gpudev002
[2018-10-25T10:58:04.794] error: WARNING: Invalid hostnames in switch configuration: gpu073,gpudev[001-002]
[2018-10-25T10:58:04.794] TOPOLOGY: warning -- no switch can reach all nodes through its descendants.Do not use route/topology
[2018-10-25T10:58:14.669] Message aggregation disabled
[2018-10-25T10:58:14.674] error: _find_node_record(751): lookup failure for gpu073
[2018-10-25T10:58:14.674] error: _find_node_record(763): lookup failure for gpu073 alias gpu073
[2018-10-25T10:58:14.674] error: _find_node_record(751): lookup failure for gpudev001
[2018-10-25T10:58:14.674] error: _find_node_record(763): lookup failure for gpudev001 alias gpudev001
[2018-10-25T10:58:14.674] error: _find_node_record(751): lookup failure for gpudev002
[2018-10-25T10:58:14.674] error: _find_node_record(763): lookup failure for gpudev002 alias gpudev002
[2018-10-25T10:58:14.674] error: WARNING: Invalid hostnames in switch configuration: gpu073,gpudev[001-002]
[2018-10-25T10:58:14.674] TOPOLOGY: warning -- no switch can reach all nodes through its descendants.Do not use route/topology
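One quick way to confirm which names are broken is to check every node listed in topology.conf against name resolution. Below is a rough sketch against an inline sample file; a real topology.conf with hostlist ranges such as gpudev[001-002] would need the ranges expanded first (e.g. via `scontrol show hostnames`):

```shell
# Check that each node named in a topology file resolves via getent.
# An inline sample stands in for the real topology.conf here.
TOPO=$(mktemp)
cat > "$TOPO" <<'EOF'
SwitchName=s1 Nodes=localhost
SwitchName=s2 Nodes=gpu073,gpudev001
EOF
# Pull out every Nodes= list, split on commas, and test each name.
grep -oE 'Nodes=[^ ]+' "$TOPO" | cut -d= -f2 | tr ',' '\n' | sort -u \
  | while read -r host; do
      getent hosts "$host" >/dev/null || echo "unresolvable: $host"
    done
```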

-Jason
Comment 8 Kevin Manalo 2018-10-26 12:00:15 MDT
That's a good point regarding the name resolution. I've decided to turn the topology plugin off for now (so that we can fix the hostnames and then revisit it), and I also increased TCPTimeout to 20 seconds. I'll report back on failure frequencies once I collect more data.
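For the record, the two changes amount to the following slurm.conf edits, applied with `scontrol reconfigure` on the controller:

```
# slurm.conf changes:
TopologyPlugin=topology/none   # disable topology/tree until switch hostnames are fixed
TCPTimeout=20                  # raised from 2 sec
```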

-Kevin
Comment 9 Kevin Manalo 2018-10-29 07:07:58 MDT
error: WARNING: Invalid hostnames in switch configuration

So indeed, turning the topology plugin off for the above error has reduced node failures to zero, apart from legitimate node failures. Setting the topology plugin back to 'none' worked out great. (I have not re-adjusted TCPTimeout, but I'm at a point where I want to wait and observe. :) )

I'll continue to collect results, but it looks good today.  

-Kevin
Comment 10 Jason Booth 2018-11-02 10:17:59 MDT
Hi Kevin,

 Do you need anything else from my end for this issue?
Comment 11 Kevin Manalo 2018-11-02 10:43:31 MDT
No, I think all is good on this specific issue. Please mark as resolved.