System Preparation
Hardware
Platform
AMD and Intel platforms with the x86-64 architecture are supported.
AMD platforms require BIOS setup for optimal performance.
Hyper-threading
It is recommended to enable hyper-threading (HT) in the BIOS.
With HT enabled, the following command shows 2:
lscpu | grep 'Thread(s) per core:'
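Typical output with HT enabled (exact spacing may differ):
Thread(s) per core:    2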
NUMA
Multiprocessor platforms are recommended to be used in NUMA mode with one processor per NUMA node. Platforms with one and two NUMA nodes are supported.
For optimal performance, it is recommended to spread NICs across different NUMA nodes so that each processor only works with ports on its own node.
Find out the NUMA node of a device by its PCI address:
cat /sys/bus/pci/devices/0000:04:00.0/numa_node
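To list the NUMA node of every PCI network interface at once, a small loop over /sys can be used (a sketch; a value of -1 means the platform does not report NUMA locality for that device):
# print the NUMA node of each PCI network interface
for dev in /sys/class/net/*/device; do
    printf '%s: NUMA node %s\n' "$(basename "$(dirname "$dev")")" "$(cat "$dev/numa_node")"
done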
Drivers (kernel modules)
For Intel network cards and some others, a driver (kernel module) must be loaded on the system to allow DPDK to work with them.
Mellanox (NVIDIA) cards do not require kernel modules to be loaded. In fact, all Mellanox ports are available to DPDK by default and will be used by the packet processor automatically.
Required:
- Select a driver for the installed network devices (DPDK documentation).
- Set up driver loading at system start.
Driver management
Binding devices to the desired driver is done through the dpdk-devbind script (DPDK documentation).
Download and install dpdk-devbind:
wget https://docs.mitigator.ru/master/dist/dpdk-devbind -O /usr/local/bin/dpdk-devbind
chmod +x /usr/local/bin/dpdk-devbind
View the status of network devices and available drivers:
dpdk-devbind --status-dev=net
The Network devices using DPDK-compatible driver list contains devices bound to a DPDK-compatible driver. The Network devices using kernel driver list contains devices with standard kernel drivers. The current driver is listed in the drv= field. Available drivers are listed in the unused= field. Only the drivers that are loaded on the system are displayed.
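For reference, the output has roughly the following form (abridged and illustrative; actual addresses, device names and drivers will differ):
Network devices using DPDK-compatible driver
============================================
0000:06:00.0 'Ethernet Controller' drv=vfio-pci unused=

Network devices using kernel driver
===================================
0000:04:00.0 'Ethernet Controller' if=enp4s0f0 drv=ixgbe unused=vfio-pci *Active*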
If the required driver is not loaded, load it via modprobe. For example, to load the vfio-pci driver:
modprobe vfio-pci
You can match the device name (for example, eth0 or enp3s0f0) and its PCI address using the if= field in the dpdk-devbind output.
It is recommended to bind all devices with the same dpdk-devbind command. To work with vfio-pci, all ports on one card must be bound to the same driver.
For example, to bind devices 06:00.0 and 06:00.1 to the vfio-pci driver:
dpdk-devbind -b vfio-pci 06:00.0 06:00.1
If the bind command completes without error, the driver can be used.
A device bound to a special driver disappears from the system (it no longer appears in the output of ip link, etc.). The binding lasts only until reboot; automatic binding can be configured when installing MITIGATOR.
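To confirm the binding, the status command can be run again; the bound devices should now appear in the Network devices using DPDK-compatible driver list:
dpdk-devbind --status-dev=net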
Devices marked **Active** in the list have an IP address. This is usually the port through which the machine is accessed via SSH, so the script does not allow changing the driver for such devices.
Automatic loading of the kernel module
If the required kernel module is not loaded by default at system startup, it can be enabled via /etc/modules-load.d. For example, to load vfio-pci:
echo vfio-pci >> /etc/modules-load.d/mitigator.conf
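To verify, inspect the file and, after a reboot, check the loaded modules (lsmod reports the name with an underscore):
cat /etc/modules-load.d/mitigator.conf
lsmod | grep vfio_pci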
vfio-pci module
The vfio-pci module is a part of the Linux kernel. It is required for DPDK to work with network cards and needs processor support for I/O virtualization (Intel VT-d or AMD-Vi).
Enable it in the BIOS with the relevant settings (for example, “VT-d”, “Intel Virtualization Technology”, “AMD-V”, “AMD Virtualization”, “SVM”, “IOMMU”).
Add options to the kernel parameters (for example, using GRUB):
intel_iommu=on iommu=pt
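One possible way to add them, assuming the system boots via GRUB and uses /etc/default/grub (adjust the line manually if it already contains other options):
# append the IOMMU options to the kernel command line and regenerate the GRUB config
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&intel_iommu=on iommu=pt /' /etc/default/grub
update-grub   # on RHEL-based systems: grub2-mkconfig -o /boot/grub2/grub.cfg
reboot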
Check for virtualization support:
grep -q 'vmx\|svm' /proc/cpuinfo && echo enabled || echo disabled
Check for IOMMU support:
compgen -G '/sys/kernel/iommu_groups/*/devices/*' > /dev/null && echo enabled || echo disabled
Module loading:
modprobe vfio-pci
uio_pci_generic module
The uio_pci_generic module is part of the Linux kernel. It is used in place of vfio-pci if vfio-pci is not supported by the system or does not work for some reason.
Module loading:
modprobe uio_pci_generic
igb_uio module
The igb_uio module can be used as an alternative to other modules if they don’t work.
Module installation (the package name depends on the distribution release):
apt install -y dpdk-igb-uio-dkms
apt install -y dpdk-kmods-dkms
Module loading:
modprobe igb_uio
Hugepages
MITIGATOR requires configured hugepages (large memory pages). Hugepages of various sizes (2 MB, 1 GB) may be supported by the platform. It is recommended to configure hugepages of the larger size (1 GB).
The required number of hugepages depends on the desired number of protection policies. It is recommended to allocate 50-75% of the total memory to hugepages.
1 GB hugepages
It is recommended to use 1 GB hugepages if supported by the platform. They can only be allocated at system boot.
Check support:
grep -q pdpe1gb /proc/cpuinfo && echo supported || echo not supported
They are configured by adding options to the kernel parameters. Example of allocating 64 pages of 1 GB size:
default_hugepagesz=1G hugepagesz=1G hugepages=64
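These options can be added the same way as the IOMMU options above, for example (a sketch assuming GRUB and /etc/default/grub):
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&default_hugepagesz=1G hugepagesz=1G hugepages=64 /' /etc/default/grub
update-grub
# after a reboot, verify the allocation:
grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo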
2 MB hugepages
2 MB hugepages must be used only if 1 GB hugepages are not supported by the platform, for example, in a virtualized environment. They can be configured without a system reboot.
Example of allocating 2048 pages of 2 MB size:
sysctl -w vm.nr_hugepages=2048
Example of setting up the allocation of hugepages at system boot:
echo 'vm.nr_hugepages = 2048' > /etc/sysctl.d/hugepages.conf
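The current allocation can be checked via /proc/meminfo:
grep -E 'HugePages_Total|HugePages_Free' /proc/meminfo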
Docker
Install Docker from the distribution repositories, using the command that matches the distribution:
apt install -y docker.io
apt-get install -y docker-engine
dnf install -y docker-ce
dnf install -y docker
Once installed, start and enable the Docker service:
systemctl enable --now docker
Install Docker Compose from the official repository and make the binary executable:
curl -L "https://github.com/docker/compose/releases/download/v2.28.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/bin/docker-compose \
&& chmod +x /usr/bin/docker-compose
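Check the installed versions:
docker --version
docker-compose --version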
If https://docker.mitigator.ru is accessed through a proxy, you need to configure Docker.
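One common way to do this, assuming systemd is used, is a drop-in file with proxy environment variables for the Docker service (the proxy address below is a placeholder):
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
EOF
systemctl daemon-reload && systemctl restart docker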
Official Docker and Docker Compose installation guides:
https://docs.docker.com/engine/install/
https://docs.docker.com/compose/install/