On an AMD Rome (Zen 2 EPYC) system in NPS=4 mode (4 NUMA nodes per socket), "slurmd -C" reports 8 Boards and 2 SocketsPerBoard:

$ slurmd -C
NodeName=XXXXX CPUs=256 Boards=8 SocketsPerBoard=2 CoresPerSocket=64 ThreadsPerCore=2 RealMemory=2064082

This is not correct: the system has 1 board and 2 sockets, and thus a total of 8 NUMA nodes. The number of cores per socket is correct. Looking at the output of "lstopo", the topology seems to be detected correctly, with a hierarchy of Package -> Group -> NUMANode -> Core. There are as many Groups as there are NUMANodes.
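As a quick sanity check (illustrative Python, not Slurm code), the figures above are self-consistent only with 1 board: 2 sockets x 64 cores x 2 threads matches the 256 CPUs reported, and 2 sockets x 4 NUMA nodes per socket (the NPS=4 BIOS setting) matches the 8 NUMA nodes seen by lstopo:

```python
# Illustrative sketch: check the reported topology fields for consistency.
# Variable names are ad hoc, not taken from Slurm source.
boards = 1            # correct value; slurmd -C wrongly reports 8
sockets_per_board = 2
cores_per_socket = 64
threads_per_core = 2
numa_per_socket = 4   # NPS=4 BIOS setting on AMD Rome

cpus = boards * sockets_per_board * cores_per_socket * threads_per_core
numa_nodes = boards * sockets_per_board * numa_per_socket

print(cpus)        # 256, matches CPUs=256 from slurmd -C
print(numa_nodes)  # 8, matches the NUMA node count from lstopo
```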
Felix,

I'm looking into this. I think you're correct - Groups inside the Package, in hwloc terms, should not be interpreted as separate boards, as is done today. I'll have to check the details of the hwloc spec to figure out how best to address this case. Unfortunately, hwloc doesn't have a separate object type for boards, so we have to use Groups here.

Just to be sure - you can always override the discovered node topology in slurm.conf by adding SlurmdParameters=config_overrides.

cheers,
Marcin
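For reference, an override along those lines could look like this in slurm.conf (the node name and memory value are taken from the report above; treat this as an illustrative fragment, not a tested configuration):

```
# slurm.conf (illustrative fragment)
# Tell slurmd to use the configured topology instead of the discovered one.
SlurmdParameters=config_overrides
NodeName=XXXXX CPUs=256 Boards=1 SocketsPerBoard=2 CoresPerSocket=64 ThreadsPerCore=2 RealMemory=2064082
```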
> Just to be sure - you can always override discovered node topology in slurm.conf adding SlurmdParameters=config_overrides.

Yes, it's more of a low-priority FYI. It doesn't impact us right now.
Created attachment 15378 [details]
v2

Felix,

Could you please apply the attached patch and verify that the issue is fixed in the cases you were testing?

cheers,
Marcin
Yes, your patch seems to work fine on this system, thanks!

NodeName=XXXXX CPUs=256 Boards=1 SocketsPerBoard=2 CoresPerSocket=64 ThreadsPerCore=2 RealMemory=2064085
Felix,

The fix has been merged to our public repository[1] and will be released in 20.02.5. Should you have any questions, please reopen the case.

cheers,
Marcin

[1] https://github.com/SchedMD/slurm/commit/6566c1b1c1735768fb4beff9566c9dd894ec44d0