Ticket 21163 - Inconsistent GPU ordering when a mix of full GPU and MIG instances are used
Summary: Inconsistent GPU ordering when a mix of full GPU and MIG instances are used
Status: OPEN
Alias: None
Product: Slurm
Classification: Unclassified
Component: GPU
Version: 23.11.8
Hardware: Linux
Severity: 6 - No support contract
Assignee: Jacob Jenson
QA Contact:
URL:
Depends on:
Blocks:
 
Reported: 2024-10-11 04:45 MDT by Mahendra Paipuri
Modified: 2024-10-11 04:52 MDT

See Also:
Site: -Other-
Linux Distro: Ubuntu
Machine Name:
CLE Version:
Version Fixed:
Target Release: ---
DevPrio: ---


Attachments

Description Mahendra Paipuri 2024-10-11 04:45:37 MDT
Hello,

We noticed that the GPU ordering by SLURM is inconsistent when a compute node is configured with a mix of full GPUs and MIG instances. The following tests were performed on a node that runs all SLURM components (slurmctld, slurmd) and has 2 A100 GPUs, with MIG enabled only on GPU 0.

$ nvidia-smi
Fri Oct 11 12:04:56 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-PCIE-40GB          On  | 00000000:21:00.0 Off |                   On |
| N/A   28C    P0              31W / 250W |     50MiB / 40960MiB |     N/A      Default |
|                                         |                      |              Enabled |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-PCIE-40GB          On  | 00000000:81:00.0 Off |                    0 |
| N/A   27C    P0              34W / 250W |      4MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| MIG devices:                                                                          |
+------------------+--------------------------------+-----------+-----------------------+
| GPU  GI  CI  MIG |                   Memory-Usage |        Vol|      Shared           |
|      ID  ID  Dev |                     BAR1-Usage | SM     Unc| CE ENC DEC OFA JPG    |
|                  |                                |        ECC|                       |
|==================+================================+===========+=======================|
|  0    3   0   0  |              12MiB /  9856MiB  | 14      0 |  1   0    1    0    0 |
|                  |               0MiB / 16383MiB  |           |                       |
+------------------+--------------------------------+-----------+-----------------------+
|  0    4   0   1  |              12MiB /  9856MiB  | 14      0 |  1   0    1    0    0 |
|                  |               0MiB / 16383MiB  |           |                       |
+------------------+--------------------------------+-----------+-----------------------+
|  0    5   0   2  |              12MiB /  9856MiB  | 14      0 |  1   0    1    0    0 |
|                  |               0MiB / 16383MiB  |           |                       |
+------------------+--------------------------------+-----------+-----------------------+
|  0    6   0   3  |              12MiB /  9856MiB  | 14      0 |  1   0    1    0    0 |
|                  |               0MiB / 16383MiB  |           |                       |
+------------------+--------------------------------+-----------+-----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

**Case 1: When AutoDetect=nvml is used in gres.conf:**
In this case we let SLURM detect the GPUs automatically using NVML.

$ cat /etc/slurm/gres.conf
AutoDetect=nvml

The partition is defined as follows:
$ cat /etc/slurm/partitions.conf
PartitionName=gpu Nodes=grouille-2 Default=YES MaxTime=INFINITE State=UP
NodeName=grouille-2 CPUs=128 Boards=1 SocketsPerBoard=2 CoresPerSocket=32 ThreadsPerCore=2 RealMemory=128414 Gres=gpu:a100:1,gpu:nvidia_a100_1g.10gb:4

Here are the relevant slurmd logs:
[2024-10-11T12:09:42.303] debug:  gres/gpu: init: loaded
[2024-10-11T12:09:42.303] debug:  gpu/nvml: init: init: GPU NVML plugin loaded
[2024-10-11T12:09:42.309] debug2: gpu/nvml: _nvml_init: Successfully initialized NVML
[2024-10-11T12:09:42.309] debug:  gpu/nvml: _get_system_gpu_list_nvml: Systems Graphics Driver Version: 535.129.03
[2024-10-11T12:09:42.309] debug:  gpu/nvml: _get_system_gpu_list_nvml: NVML Library Version: 12.535.129.03
[2024-10-11T12:09:42.309] debug2: gpu/nvml: _get_system_gpu_list_nvml: NVML API Version: 11
[2024-10-11T12:09:42.309] debug2: gpu/nvml: _get_system_gpu_list_nvml: Total CPU count: 128
[2024-10-11T12:09:42.309] debug2: gpu/nvml: _get_system_gpu_list_nvml: Device count: 2
[2024-10-11T12:09:42.375] debug2: gpu/nvml: _get_system_gpu_list_nvml: GPU index 0:
[2024-10-11T12:09:42.375] debug2: gpu/nvml: _get_system_gpu_list_nvml:     Name: nvidia_a100-pcie-40gb
[2024-10-11T12:09:42.375] debug2: gpu/nvml: _get_system_gpu_list_nvml:     UUID: GPU-93d0f4bd-5592-4be2-ee0f-c4736678cf9e
[2024-10-11T12:09:42.375] debug2: gpu/nvml: _get_system_gpu_list_nvml:     PCI Domain/Bus/Device: 0:33:0
[2024-10-11T12:09:42.375] debug2: gpu/nvml: _get_system_gpu_list_nvml:     PCI Bus ID: 00000000:21:00.0
[2024-10-11T12:09:42.375] debug2: gpu/nvml: _get_system_gpu_list_nvml:     NVLinks: -1,0
[2024-10-11T12:09:42.375] debug2: gpu/nvml: _get_system_gpu_list_nvml:     Device File (minor number): /dev/nvidia0
[2024-10-11T12:09:42.375] debug2: gpu/nvml: _get_system_gpu_list_nvml:     CPU Affinity Range - Machine: 0-31,64-95
[2024-10-11T12:09:42.375] debug2: gpu/nvml: _get_system_gpu_list_nvml:     Core Affinity Range - Abstract: 0-31
[2024-10-11T12:09:42.375] debug2: gpu/nvml: _get_system_gpu_list_nvml:     MIG mode: enabled
[2024-10-11T12:09:42.457] debug2: gpu/nvml: _get_system_gpu_list_nvml:     MIG count: 4
[2024-10-11T12:09:42.478] debug2: gpu/nvml: _handle_mig: GPU minor 0, MIG index 0:
[2024-10-11T12:09:42.478] debug2: gpu/nvml: _handle_mig:     MIG Profile: nvidia_a100_1g.10gb
[2024-10-11T12:09:42.478] debug2: gpu/nvml: _handle_mig:     MIG UUID: MIG-ce2e805f-ce8e-5cf7-8132-176167d87d24
[2024-10-11T12:09:42.478] debug2: gpu/nvml: _handle_mig:     UniqueID: MIG-ce2e805f-ce8e-5cf7-8132-176167d87d24
[2024-10-11T12:09:42.478] debug2: gpu/nvml: _handle_mig:     GPU Instance (GI) ID: 3
[2024-10-11T12:09:42.478] debug2: gpu/nvml: _handle_mig:     Compute Instance (CI) ID: 0
[2024-10-11T12:09:42.478] debug2: gpu/nvml: _handle_mig:     GI Minor Number: 30
[2024-10-11T12:09:42.478] debug2: gpu/nvml: _handle_mig:     CI Minor Number: 31
[2024-10-11T12:09:42.478] debug2: gpu/nvml: _handle_mig:     Device Files: /dev/nvidia0,/dev/nvidia-caps/nvidia-cap30,/dev/nvidia-caps/nvidia-cap31
[2024-10-11T12:09:42.498] debug2: gpu/nvml: _handle_mig: GPU minor 0, MIG index 1:
[2024-10-11T12:09:42.498] debug2: gpu/nvml: _handle_mig:     MIG Profile: nvidia_a100_1g.10gb
[2024-10-11T12:09:42.498] debug2: gpu/nvml: _handle_mig:     MIG UUID: MIG-2cc993d7-588c-5c28-b454-b3851897e3d7
[2024-10-11T12:09:42.498] debug2: gpu/nvml: _handle_mig:     UniqueID: MIG-2cc993d7-588c-5c28-b454-b3851897e3d7
[2024-10-11T12:09:42.498] debug2: gpu/nvml: _handle_mig:     GPU Instance (GI) ID: 4
[2024-10-11T12:09:42.498] debug2: gpu/nvml: _handle_mig:     Compute Instance (CI) ID: 0
[2024-10-11T12:09:42.498] debug2: gpu/nvml: _handle_mig:     GI Minor Number: 39
[2024-10-11T12:09:42.498] debug2: gpu/nvml: _handle_mig:     CI Minor Number: 40
[2024-10-11T12:09:42.498] debug2: gpu/nvml: _handle_mig:     Device Files: /dev/nvidia0,/dev/nvidia-caps/nvidia-cap39,/dev/nvidia-caps/nvidia-cap40
[2024-10-11T12:09:42.519] debug2: gpu/nvml: _handle_mig: GPU minor 0, MIG index 2:
[2024-10-11T12:09:42.519] debug2: gpu/nvml: _handle_mig:     MIG Profile: nvidia_a100_1g.10gb
[2024-10-11T12:09:42.519] debug2: gpu/nvml: _handle_mig:     MIG UUID: MIG-4bd078f2-f9bb-5bfb-8695-774674f75e96
[2024-10-11T12:09:42.519] debug2: gpu/nvml: _handle_mig:     UniqueID: MIG-4bd078f2-f9bb-5bfb-8695-774674f75e96
[2024-10-11T12:09:42.519] debug2: gpu/nvml: _handle_mig:     GPU Instance (GI) ID: 5
[2024-10-11T12:09:42.519] debug2: gpu/nvml: _handle_mig:     Compute Instance (CI) ID: 0
[2024-10-11T12:09:42.519] debug2: gpu/nvml: _handle_mig:     GI Minor Number: 48
[2024-10-11T12:09:42.519] debug2: gpu/nvml: _handle_mig:     CI Minor Number: 49
[2024-10-11T12:09:42.519] debug2: gpu/nvml: _handle_mig:     Device Files: /dev/nvidia0,/dev/nvidia-caps/nvidia-cap48,/dev/nvidia-caps/nvidia-cap49
[2024-10-11T12:09:42.540] debug2: gpu/nvml: _handle_mig: GPU minor 0, MIG index 3:
[2024-10-11T12:09:42.540] debug2: gpu/nvml: _handle_mig:     MIG Profile: nvidia_a100_1g.10gb
[2024-10-11T12:09:42.540] debug2: gpu/nvml: _handle_mig:     MIG UUID: MIG-c104d04a-3bbc-5e2b-847c-38414ce7db25
[2024-10-11T12:09:42.540] debug2: gpu/nvml: _handle_mig:     UniqueID: MIG-c104d04a-3bbc-5e2b-847c-38414ce7db25
[2024-10-11T12:09:42.540] debug2: gpu/nvml: _handle_mig:     GPU Instance (GI) ID: 6
[2024-10-11T12:09:42.540] debug2: gpu/nvml: _handle_mig:     Compute Instance (CI) ID: 0
[2024-10-11T12:09:42.540] debug2: gpu/nvml: _handle_mig:     GI Minor Number: 57
[2024-10-11T12:09:42.540] debug2: gpu/nvml: _handle_mig:     CI Minor Number: 58
[2024-10-11T12:09:42.540] debug2: gpu/nvml: _handle_mig:     Device Files: /dev/nvidia0,/dev/nvidia-caps/nvidia-cap57,/dev/nvidia-caps/nvidia-cap58
[2024-10-11T12:09:42.544] debug2: Possible GPU Memory Frequencies (1):
[2024-10-11T12:09:42.544] debug2: -------------------------------
[2024-10-11T12:09:42.544] debug2:     *1215 MHz [0]
[2024-10-11T12:09:42.544] debug2:         Possible GPU Graphics Frequencies (81):
[2024-10-11T12:09:42.544] debug2:         ---------------------------------
[2024-10-11T12:09:42.544] debug2:           *1410 MHz [0]
[2024-10-11T12:09:42.544] debug2:           *1395 MHz [1]
[2024-10-11T12:09:42.544] debug2:           ...
[2024-10-11T12:09:42.544] debug2:           *810 MHz [40]
[2024-10-11T12:09:42.544] debug2:           ...
[2024-10-11T12:09:42.544] debug2:           *225 MHz [79]
[2024-10-11T12:09:42.544] debug2:           *210 MHz [80]
[2024-10-11T12:09:42.564] debug2: gpu/nvml: _get_system_gpu_list_nvml: GPU index 1:
[2024-10-11T12:09:42.564] debug2: gpu/nvml: _get_system_gpu_list_nvml:     Name: nvidia_a100-pcie-40gb
[2024-10-11T12:09:42.564] debug2: gpu/nvml: _get_system_gpu_list_nvml:     UUID: GPU-5aeebfea-a89d-0f4c-1783-e95e8ebe4952
[2024-10-11T12:09:42.564] debug2: gpu/nvml: _get_system_gpu_list_nvml:     PCI Domain/Bus/Device: 0:129:0
[2024-10-11T12:09:42.564] debug2: gpu/nvml: _get_system_gpu_list_nvml:     PCI Bus ID: 00000000:81:00.0
[2024-10-11T12:09:42.564] debug2: gpu/nvml: _get_system_gpu_list_nvml:     NVLinks: 0,-1
[2024-10-11T12:09:42.564] debug2: gpu/nvml: _get_system_gpu_list_nvml:     Device File (minor number): /dev/nvidia1
[2024-10-11T12:09:42.564] debug2: gpu/nvml: _get_system_gpu_list_nvml:     CPU Affinity Range - Machine: 32-63,96-127
[2024-10-11T12:09:42.564] debug2: gpu/nvml: _get_system_gpu_list_nvml:     Core Affinity Range - Abstract: 32-63
[2024-10-11T12:09:42.564] debug2: gpu/nvml: _get_system_gpu_list_nvml:     MIG mode: disabled
[2024-10-11T12:09:42.567] debug2: Possible GPU Memory Frequencies (1):
[2024-10-11T12:09:42.567] debug2: -------------------------------
[2024-10-11T12:09:42.567] debug2:     *1215 MHz [0]
[2024-10-11T12:09:42.567] debug2:         Possible GPU Graphics Frequencies (81):
[2024-10-11T12:09:42.567] debug2:         ---------------------------------
[2024-10-11T12:09:42.567] debug2:           *1410 MHz [0]
[2024-10-11T12:09:42.567] debug2:           *1395 MHz [1]
[2024-10-11T12:09:42.567] debug2:           ...
[2024-10-11T12:09:42.567] debug2:           *810 MHz [40]
[2024-10-11T12:09:42.567] debug2:           ...
[2024-10-11T12:09:42.567] debug2:           *225 MHz [79]
[2024-10-11T12:09:42.567] debug2:           *210 MHz [80]
[2024-10-11T12:09:42.567] gpu/nvml: _get_system_gpu_list_nvml: 2 GPU system device(s) detected
[2024-10-11T12:09:42.567] debug:  Gres GPU plugin: Merging configured GRES with system GPUs
[2024-10-11T12:09:42.567] debug2: gres/gpu: _merge_system_gres_conf: gres_list_conf:
[2024-10-11T12:09:42.567] debug2:     GRES[gpu] Type:a100 Count:1 Cores(128):(null)  Links:(null) Flags:HAS_TYPE,ENV_NVML,ENV_RSMI,ENV_ONEAPI,ENV_OPENCL,ENV_DEFAULT File:(null) UniqueId:(null)
[2024-10-11T12:09:42.567] debug2:     GRES[gpu] Type:nvidia_a100_1g.10gb Count:4 Cores(128):(null)  Links:(null) Flags:HAS_TYPE,ENV_NVML,ENV_RSMI,ENV_ONEAPI,ENV_OPENCL,ENV_DEFAULT File:(null) UniqueId:(null)
[2024-10-11T12:09:42.567] debug:  gres/gpu: _merge_system_gres_conf: Including the following GPU matched between system and configuration:
[2024-10-11T12:09:42.567] debug:      GRES[gpu] Type:nvidia_a100_1g.10gb Count:1 Cores(128):0-31  Links:(null) Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia0,/dev/nvidia-caps/nvidia-cap30,/dev/nvidia-caps/nvidia-cap31 UniqueId:MIG-ce2e805f-ce8e-5cf7-8132-176167d87d24
[2024-10-11T12:09:42.567] debug:  gres/gpu: _merge_system_gres_conf: Including the following GPU matched between system and configuration:
[2024-10-11T12:09:42.567] debug:      GRES[gpu] Type:nvidia_a100_1g.10gb Count:1 Cores(128):0-31  Links:(null) Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia0,/dev/nvidia-caps/nvidia-cap39,/dev/nvidia-caps/nvidia-cap40 UniqueId:MIG-2cc993d7-588c-5c28-b454-b3851897e3d7
[2024-10-11T12:09:42.567] debug:  gres/gpu: _merge_system_gres_conf: Including the following GPU matched between system and configuration:
[2024-10-11T12:09:42.567] debug:      GRES[gpu] Type:nvidia_a100_1g.10gb Count:1 Cores(128):0-31  Links:(null) Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia0,/dev/nvidia-caps/nvidia-cap48,/dev/nvidia-caps/nvidia-cap49 UniqueId:MIG-4bd078f2-f9bb-5bfb-8695-774674f75e96
[2024-10-11T12:09:42.567] debug:  gres/gpu: _merge_system_gres_conf: Including the following GPU matched between system and configuration:
[2024-10-11T12:09:42.567] debug:      GRES[gpu] Type:nvidia_a100_1g.10gb Count:1 Cores(128):0-31  Links:(null) Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia0,/dev/nvidia-caps/nvidia-cap57,/dev/nvidia-caps/nvidia-cap58 UniqueId:MIG-c104d04a-3bbc-5e2b-847c-38414ce7db25
[2024-10-11T12:09:42.567] debug:  gres/gpu: _merge_system_gres_conf: Including the following GPU matched between system and configuration:
[2024-10-11T12:09:42.567] debug:      GRES[gpu] Type:a100 Count:1 Cores(128):32-63  Links:0,-1 Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia1 UniqueId:(null)
[2024-10-11T12:09:42.567] debug2: gres/gpu: _merge_system_gres_conf: gres_list_gpu
[2024-10-11T12:09:42.567] debug2:     GRES[gpu] Type:a100 Count:1 Cores(128):32-63  Links:0,-1 Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia1 UniqueId:(null)
[2024-10-11T12:09:42.568] debug2:     GRES[gpu] Type:nvidia_a100_1g.10gb Count:1 Cores(128):0-31  Links:(null) Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia0,/dev/nvidia-caps/nvidia-cap30,/dev/nvidia-caps/nvidia-cap31 UniqueId:MIG-ce2e805f-ce8e-5cf7-8132-176167d87d24
[2024-10-11T12:09:42.568] debug2:     GRES[gpu] Type:nvidia_a100_1g.10gb Count:1 Cores(128):0-31  Links:(null) Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia0,/dev/nvidia-caps/nvidia-cap39,/dev/nvidia-caps/nvidia-cap40 UniqueId:MIG-2cc993d7-588c-5c28-b454-b3851897e3d7
[2024-10-11T12:09:42.568] debug2:     GRES[gpu] Type:nvidia_a100_1g.10gb Count:1 Cores(128):0-31  Links:(null) Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia0,/dev/nvidia-caps/nvidia-cap48,/dev/nvidia-caps/nvidia-cap49 UniqueId:MIG-4bd078f2-f9bb-5bfb-8695-774674f75e96
[2024-10-11T12:09:42.568] debug2:     GRES[gpu] Type:nvidia_a100_1g.10gb Count:1 Cores(128):0-31  Links:(null) Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia0,/dev/nvidia-caps/nvidia-cap57,/dev/nvidia-caps/nvidia-cap58 UniqueId:MIG-c104d04a-3bbc-5e2b-847c-38414ce7db25
[2024-10-11T12:09:42.568] debug:  Gres GPU plugin: Final merged GRES list:
[2024-10-11T12:09:42.568] debug:      GRES[gpu] Type:a100 Count:1 Cores(128):32-63  Links:0,-1 Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia1 UniqueId:(null)
[2024-10-11T12:09:42.568] debug:      GRES[gpu] Type:nvidia_a100_1g.10gb Count:1 Cores(128):0-31  Links:(null) Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia0,/dev/nvidia-caps/nvidia-cap30,/dev/nvidia-caps/nvidia-cap31 UniqueId:MIG-ce2e805f-ce8e-5cf7-8132-176167d87d24
[2024-10-11T12:09:42.568] debug:      GRES[gpu] Type:nvidia_a100_1g.10gb Count:1 Cores(128):0-31  Links:(null) Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia0,/dev/nvidia-caps/nvidia-cap39,/dev/nvidia-caps/nvidia-cap40 UniqueId:MIG-2cc993d7-588c-5c28-b454-b3851897e3d7
[2024-10-11T12:09:42.568] debug:      GRES[gpu] Type:nvidia_a100_1g.10gb Count:1 Cores(128):0-31  Links:(null) Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia0,/dev/nvidia-caps/nvidia-cap48,/dev/nvidia-caps/nvidia-cap49 UniqueId:MIG-4bd078f2-f9bb-5bfb-8695-774674f75e96
[2024-10-11T12:09:42.568] debug:      GRES[gpu] Type:nvidia_a100_1g.10gb Count:1 Cores(128):0-31  Links:(null) Flags:HAS_FILE,HAS_TYPE,ENV_NVML File:/dev/nvidia0,/dev/nvidia-caps/nvidia-cap57,/dev/nvidia-caps/nvidia-cap58 UniqueId:MIG-c104d04a-3bbc-5e2b-847c-38414ce7db25
[2024-10-11T12:09:42.568] Gres Name=gpu Type=a100 Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML
[2024-10-11T12:09:42.568] Gres Name=gpu Type=nvidia_a100_1g.10gb Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML
[2024-10-11T12:09:42.568] Gres Name=gpu Type=nvidia_a100_1g.10gb Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML
[2024-10-11T12:09:42.568] Gres Name=gpu Type=nvidia_a100_1g.10gb Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML
[2024-10-11T12:09:42.568] Gres Name=gpu Type=nvidia_a100_1g.10gb Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML

Now a simple test to check the GPU ordering yields:

$ srun --gres=gpu:nvidia_a100_1g.10gb:4 env | grep GPU
SLURM_GPUS_ON_NODE=4
SLURM_STEP_GPUS=1,2,3,4

$ srun --gres=gpu:a100:1 env | grep GPU
SLURM_GPUS_ON_NODE=1
SLURM_STEP_GPUS=0

So Slurm numbers the MIG-disabled GPU as 0 and then orders all the MIG instances from 1, 2, ... In our setup, MIG is enabled on GPU 0 (PCI 00000000:21:00.0), while the other GPU is at PCI 00000000:81:00.0. So, by PCI ordering, Slurm should number the MIG instances first and place the full GPU last. To summarize:

Expected ordering: MIG-3:0, MIG-4:1, MIG-5:2, MIG-6:3, GPU-1:4
Actual ordering: GPU-1:0, MIG-3:1, MIG-4:2, MIG-5:3, MIG-6:4
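For illustration, the PCI-address-first ordering we expected can be sketched as follows. This is a hypothetical sketch, not Slurm's actual code; device tuples and the tie-break on GI ID are our own assumptions.

```python
# Hypothetical sketch (not Slurm's actual code) of the PCI-address-first
# ordering we expected. Each device is (pci_bus_id, gi), where gi is the
# GPU instance (GI) ID for a MIG instance, or None for a full GPU.
def order_devices(devices):
    # Sort by PCI bus ID first; within one GPU, sort MIG instances by GI ID.
    return sorted(devices, key=lambda d: (d[0], -1 if d[1] is None else d[1]))

devices = [
    ("00000000:81:00.0", None),  # full GPU 1
    ("00000000:21:00.0", 6),     # MIG instances on GPU 0, GIs 3-6
    ("00000000:21:00.0", 3),
    ("00000000:21:00.0", 4),
    ("00000000:21:00.0", 5),
]
# Under this ordering the four MIG instances (GIs 3-6) get ordinals 0-3
# and the full GPU gets ordinal 4, matching the expected ordering above.
```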

**Case 2: When devices are set manually in gres.conf:**

Here we use the MIG discovery tool (https://gitlab.com/nvidia/hpc/slurm-mig-discovery.git) to generate a gres.conf manually. Here are the contents:

$ cat /etc/slurm/gres.conf 
# GPU 0 MIG 0 /proc/driver/nvidia/capabilities/gpu0/mig/gi3/access
Name=gpu Type=nvidia_a100_1g.10gb File=/dev/nvidia-caps/nvidia-cap30

# GPU 0 MIG 1 /proc/driver/nvidia/capabilities/gpu0/mig/gi4/access
Name=gpu Type=nvidia_a100_1g.10gb File=/dev/nvidia-caps/nvidia-cap39

# GPU 0 MIG 2 /proc/driver/nvidia/capabilities/gpu0/mig/gi5/access
Name=gpu Type=nvidia_a100_1g.10gb File=/dev/nvidia-caps/nvidia-cap48

# GPU 0 MIG 3 /proc/driver/nvidia/capabilities/gpu0/mig/gi6/access
Name=gpu Type=nvidia_a100_1g.10gb File=/dev/nvidia-caps/nvidia-cap57

# GPU 1
Name=gpu Type=a100 File=/dev/nvidia1
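
The mapping from GI to /dev/nvidia-caps minor number comes from the capability files referenced in the comments above (e.g. /proc/driver/nvidia/capabilities/gpu0/mig/gi3/access). A minimal parser sketch of how such a file could be read; the "DeviceFileMinor" field name is an assumption based on recent NVIDIA drivers, so verify it on your node:

```python
# Minimal sketch of extracting a GI's device-file minor number from the
# text of an .../access capability file. The "DeviceFileMinor" field name
# is an assumption based on recent NVIDIA drivers; verify on your node.
def cap_minor(access_text):
    for line in access_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "DeviceFileMinor":
            return int(value)
    return None
```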


The partition definition is the same as in the previous case:

$ cat /etc/slurm/partitions.conf 
PartitionName=gpu Nodes=grouille-2 Default=YES MaxTime=INFINITE State=UP
NodeName=grouille-2 CPUs=128 Boards=1 SocketsPerBoard=2 CoresPerSocket=32 ThreadsPerCore=2 RealMemory=128414 Gres=gpu:a100:1,gpu:nvidia_a100_1g.10gb:4


Relevant slurmd logs:

[2024-10-11T12:27:07.331] debug:  gpu/generic: init: init: GPU Generic plugin loaded
[2024-10-11T12:27:07.331] Gres Name=gpu Type=a100 Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML,ENV_RSMI,ENV_ONEAPI,ENV_OPENCL,ENV_DEFAULT
[2024-10-11T12:27:07.331] Gres Name=gpu Type=nvidia_a100_1g.10gb Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML,ENV_RSMI,ENV_ONEAPI,ENV_OPENCL,ENV_DEFAULT
[2024-10-11T12:27:07.331] Gres Name=gpu Type=nvidia_a100_1g.10gb Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML,ENV_RSMI,ENV_ONEAPI,ENV_OPENCL,ENV_DEFAULT
[2024-10-11T12:27:07.331] Gres Name=gpu Type=nvidia_a100_1g.10gb Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML,ENV_RSMI,ENV_ONEAPI,ENV_OPENCL,ENV_DEFAULT
[2024-10-11T12:27:07.331] Gres Name=gpu Type=nvidia_a100_1g.10gb Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML,ENV_RSMI,ENV_ONEAPI,ENV_OPENCL,ENV_DEFAULT

Running the same tests as in the case above yields the same results:

$ srun --gres=gpu:nvidia_a100_1g.10gb:4 env | grep GPU
SLURM_GPUS_ON_NODE=4
SLURM_STEP_GPUS=1,2,3,4

$ srun --gres=gpu:a100:1 env | grep GPU
SLURM_GPUS_ON_NODE=1
SLURM_STEP_GPUS=0

Slurm starts the numbering from the full GPU even though its PCI address is higher than that of the MIG-enabled GPU.

However, we have noticed that changing the order of the Gres elements in partitions.conf controls the GPU ordering applied by Slurm. So we changed partitions.conf to the following:

$ cat /etc/slurm/partitions.conf
PartitionName=gpu Nodes=grouille-2 Default=YES MaxTime=INFINITE State=UP
NodeName=grouille-2 CPUs=128 Boards=1 SocketsPerBoard=2 CoresPerSocket=32 ThreadsPerCore=2 RealMemory=128414 Gres=gpu:nvidia_a100_1g.10gb:4,gpu:a100:1

The difference is that the `gpu:a100:1` resource is now placed at the end of the Gres definition, whereas in the config above it was placed at the beginning.

Relevant slurmd logs after changing the resource ordering in Gres:

[2024-10-11T12:32:00.753] debug:  gpu/generic: init: init: GPU Generic plugin loaded
[2024-10-11T12:32:00.753] Gres Name=gpu Type=nvidia_a100_1g.10gb Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML,ENV_RSMI,ENV_ONEAPI,ENV_OPENCL,ENV_DEFAULT
[2024-10-11T12:32:00.753] Gres Name=gpu Type=nvidia_a100_1g.10gb Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML,ENV_RSMI,ENV_ONEAPI,ENV_OPENCL,ENV_DEFAULT
[2024-10-11T12:32:00.753] Gres Name=gpu Type=nvidia_a100_1g.10gb Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML,ENV_RSMI,ENV_ONEAPI,ENV_OPENCL,ENV_DEFAULT
[2024-10-11T12:32:00.753] Gres Name=gpu Type=nvidia_a100_1g.10gb Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML,ENV_RSMI,ENV_ONEAPI,ENV_OPENCL,ENV_DEFAULT
[2024-10-11T12:32:00.753] Gres Name=gpu Type=a100 Count=1 Flags=HAS_FILE,HAS_TYPE,ENV_NVML,ENV_RSMI,ENV_ONEAPI,ENV_OPENCL,ENV_DEFAULT

Here we can see that Slurm now places the full GPU at the end of the list.

Repeating the tests, we get the "expected" ordering:

$ srun --gres=gpu:nvidia_a100_1g.10gb:4 env | grep GPU
SLURM_GPUS_ON_NODE=4
SLURM_STEP_GPUS=0,1,2,3

$ srun --gres=gpu:a100:1 env | grep GPU
SLURM_GPUS_ON_NODE=1
SLURM_STEP_GPUS=4

Now the ordering is as expected: the full GPU, which has a higher PCI address than the MIG-enabled one, is placed last.

**IMPORTANT:**

We noticed that changing the order of resources in partitions.conf has no effect on the ordering of GPUs when AutoDetect=nvml is used.

**Conclusion:**

Looking at the docs and some commits, it seems the objective is always to order GPUs by their PCI address. In that spirit, this behaviour is unwanted and unreliable. Our goal is to reliably match jobs to GPU ordinals, store this information, and eventually use it in our monitoring stack to present GPU metrics for each job.
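
For context on the monitoring use case, the job-side parsing we intend is just a sketch along these lines; it only reads the SLURM_STEP_GPUS environment variable shown in the tests above and is reliable only if Slurm's ordinal assignment is itself stable:

```python
import os

# Minimal sketch for our monitoring use case: turn SLURM_STEP_GPUS
# (e.g. "1,2,3,4", as set by Slurm in the job environment) into a list
# of integer GPU ordinals. An empty or unset variable yields [].
def step_gpu_ordinals(environ=os.environ):
    raw = environ.get("SLURM_STEP_GPUS", "")
    return [int(x) for x in raw.split(",") if x]
```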