Hello,

When shutting down a first run of slurmctld (started via "slurmctld -Dvvvv"), I am seeing this in the log output:

slurmctld: Terminate signal (SIGINT or SIGTERM) received
slurmctld: debug: sched: slurmctld terminating
slurmctld: debug3: _slurmctld_rpc_mgr shutting down
slurmctld: Saving all slurm state
slurmctld: debug: create_mmap_buf: Failed to open file `/var/lib/slurm/slurmctld/job_state`, No such file or directory
slurmctld: error: Could not open job state file /var/lib/slurm/slurmctld/job_state: No such file or directory
slurmctld: error: NOTE: Trying backup state save file. Jobs may be lost!
slurmctld: debug: create_mmap_buf: Failed to open file `/var/lib/slurm/slurmctld/job_state.old`, No such file or directory
slurmctld: No job state file (/var/lib/slurm/slurmctld/job_state.old) found
slurmctld: debug3: Writing job id 0 to header record of job_state file
slurmctld: debug3: _slurmctld_background shutting down

Should I be concerned about this? If so, what should I do to fix it?
Will - if this is the first time the scheduler has started and shut down, then what you are seeing is normal. Slurm writes state information out to the StateSaveLocation on shutdown. This includes information about jobs, partitions, nodes, associations, the cluster name, federation, database messages, config state, TRES, QOS, priority, reservations, and triggers. Since this is the first time the cluster has run, the StateSaveLocation does not yet contain any state files for slurmctld to find, which is why you see those messages. On this first shutdown, slurmctld writes the state out to the StateSaveLocation, and subsequent restarts will read it from there.
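For anyone hitting this later, here is a minimal sketch of how you might confirm that the first clean shutdown actually populated the StateSaveLocation. The file names checked are the per-component state files corresponding to the items listed above (jobs, nodes, partitions, reservations, triggers); the default path is an assumption taken from the log output in the original report, so confirm yours with `scontrol show config | grep StateSaveLocation` first.

```shell
# Hypothetical helper: report which per-component state files exist in a
# given StateSaveLocation directory. File names assumed from standard
# Slurm state saves; adjust to match your installation.
check_state_files() {
    dir="$1"
    for f in job_state node_state part_state resv_state trigger_state; do
        if [ -e "$dir/$f" ]; then
            echo "present: $f"
        else
            echo "missing: $f"
        fi
    done
}

# Path assumed from the log output above; verify with `scontrol show config`.
check_state_files /var/lib/slurm/slurmctld
```

On a cluster that has never been shut down cleanly, everything will show as missing; after the first shutdown the files should all be present.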
It is the first time, but since this message was new to me I wanted to check it out. You may go ahead and close, thanks!