The NVIDIA System Management Interface tool, nvidia-smi, is the easiest way to explore the GPU topology on your system; it is installed as part of the NVIDIA display driver. GPU topology describes how the GPUs in the system are connected to each other, to the CPU, and to other devices. The topology matters because it determines how data is copied between GPUs, or between a GPU and the CPU or another device.
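For example, whether two GPUs can exchange data directly, rather than staging the transfer through host memory, follows from how they are connected. The CUDA C++ sketch below is a minimal illustration, assuming a system with at least two GPUs (the device indices 0 and 1 and the 1 MiB buffer size are arbitrary, and error checking is omitted): it asks the runtime whether GPU 0 can access GPU 1's memory and then issues a peer-to-peer copy.

#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int canAccess = 0;
    // Can device 0 address device 1's memory directly (NVLink or PCIe P2P)?
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);

    const size_t bytes = 1 << 20;  // 1 MiB buffer, arbitrary size for illustration
    void *src = nullptr, *dst = nullptr;

    cudaSetDevice(0);
    cudaMalloc(&src, bytes);
    if (canAccess)
        cudaDeviceEnablePeerAccess(1, 0);  // let device 0 reach device 1 directly

    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);

    // With peer access enabled this copy travels over the direct interconnect;
    // otherwise the runtime stages it through host memory.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();

    printf("GPU0 -> GPU1 peer access: %s\n", canAccess ? "yes" : "no");

    cudaFree(dst);
    cudaSetDevice(0);
    cudaFree(src);
    return 0;
}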
$ nvidia-smi topo -h
$ nvidia-smi topo -m

The -h option lists the available topology queries; the -m option prints the topology matrix, which shows how your GPUs are interconnected along with each GPU's CPU affinity and NUMA affinity.
On a system with a single GPU:
$ nvidia-smi topo -m
        GPU0    CPU Affinity
GPU0     X      0-7
On a system with four GPUs and four CPU sockets, with NUMA enabled as four nodes:
$ nvidia-smi topo -m
        GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity
GPU0     X      NV4     NV4     NV4     48-63,112-127   3
GPU1    NV4      X      NV4     NV4     32-47,96-111    2
GPU2    NV4     NV4      X      NV4     16-31,80-95     1
GPU3    NV4     NV4     NV4      X      0-15,64-79      0
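The CPU Affinity and NUMA Affinity columns show which CPU cores and memory node sit closest to each GPU; a process driving a GPU generally performs best when pinned to those cores. As a minimal sketch of how to apply that affinity programmatically, NVML exposes it directly (this assumes Linux, linking with -lnvidia-ml, and uses GPU index 0 purely as an example):

#include <nvml.h>
#include <cstdio>

int main() {
    nvmlDevice_t dev;

    if (nvmlInit() != NVML_SUCCESS) return 1;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) return 1;

    // Pins the calling thread to the cores closest to GPU 0 -- the same set
    // nvidia-smi reports in the "CPU Affinity" column.
    if (nvmlDeviceSetCpuAffinity(dev) == NVML_SUCCESS)
        printf("Pinned to the CPUs nearest GPU 0\n");

    nvmlShutdown();
    return 0;
}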
The command also prints this legend for reference:
Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks
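The same connectivity information is available from inside a program through the CUDA runtime's peer-to-peer attribute queries. The sketch below is one possible way, not the tool's own implementation, to print a coarse version of the matrix above: for every device pair it reports whether a direct peer path exists and the runtime's relative performance rank for that path (lower is better), which reflects NVLink versus PCIe distances.

#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);

    // Header row: one column per GPU.
    printf("      ");
    for (int j = 0; j < n; ++j) printf("GPU%-6d", j);
    printf("\n");

    for (int i = 0; i < n; ++i) {
        printf("GPU%-3d", i);
        for (int j = 0; j < n; ++j) {
            if (i == j) { printf("   X     "); continue; }
            int access = 0, rank = -1;
            // Is there a direct peer path from GPU i to GPU j, and how does
            // the runtime rank its performance?
            cudaDeviceGetP2PAttribute(&access, cudaDevP2PAttrAccessSupported, i, j);
            cudaDeviceGetP2PAttribute(&rank, cudaDevP2PAttrPerformanceRank, i, j);
            printf(" %s/%-4d", access ? "P2P" : "---", rank);
        }
        printf("\n");
    }
    return 0;
}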