We are using our on-prem Slurm RPMs in Azure, where the Azure nodes have neither Lustre nor InfiniBand (the on-prem nodes do, or can). When an Azure node spins up via resume, interactive sessions regularly display errors such as the following:

    [tnightingale6@slurm-cloud packer]$ salloc -N1 -Aphx-pace-staff -Crhel9 -qpace -pazureDS1 --mem=1000
    salloc: Granted job allocation 48
    salloc: Waiting for resource configuration
    salloc: Nodes azDS1-1 are ready for job
    slurmstepd: error: couldn't chdir to `/storage/home/hcodaman1/tnightingale6/packer': No such file or directory: going to /tmp instead
    slurmstepd: error: couldn't chdir to `/storage/home/hcodaman1/tnightingale6/packer': No such file or directory: going to /tmp instead
    slurmstepd: error: _read_lustre_counters: can't find Lustre stats
    slurmstepd: error: acct_gather_filesystem_p_get_data: cannot read lustre counters
    ibwarn: [6035] get_abi_version: can't read ABI version from /sys/class/infiniband_mad/abi_version (No such file or directory): is ib_umad module loaded?
    ibwarn: [6035] mad_rpc_open_port: can't open UMAD port ((null):1)
    bash-5.1$ ibwarn: [6035] mad_rpc_open_port: can't open UMAD port ((null):1)
    ibwarn: [6035] mad_rpc_open_port: can't open UMAD port ((null):1)
    ibwarn: [6035] mad_rpc_open_port: can't open UMAD port ((null):1)
    ibwarn: [6035] mad_rpc_open_port: can't open UMAD port ((null):1)
    ibwarn: [6035] mad_rpc_open_port: can't open UMAD port ((null):1)
    ibwarn: [6035] mad_rpc_open_port: can't open UMAD port ((null):1)
    ibwarn: [6035] mad_rpc_open_port: can't open UMAD port ((null):1)

How do we get rid of these errors? Removing the relevant lines in slurm.conf does not mitigate them.
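For context, the Lustre and ibwarn messages look like output from Slurm's accounting-gather plugins (the `_read_lustre_counters` error comes from the `acct_gather_filesystem/lustre` plugin, and the UMAD errors are typical of the OFED interconnect plugin). The lines we tried removing are of the following shape; this is a sketch assuming a recent Slurm release, and the exact parameter names should be checked against the slurm.conf man page for the installed version (older releases name the interconnect setting `AcctGatherInfinibandType`):

```ini
# slurm.conf fragment (sketch) -- plugins presumed to emit the errors above.
# Setting these to "none" (or omitting them entirely) should disable the
# Lustre and InfiniBand counter collection on nodes without that hardware.
AcctGatherFilesystemType=acct_gather_filesystem/none      # was .../lustre
AcctGatherInterconnectType=acct_gather_interconnect/none  # was .../ofed
```

Note that these values are read by slurmd at startup, so a change only takes effect on the cloud nodes if the slurm.conf they receive (or fetch, in configless mode) is updated before they boot and the daemons are restarted.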