MITIGATOR runs on AMD and Intel platforms with the x86-64 architecture.
AMD platforms require BIOS setup for optimal performance.
It is recommended to enable hyper-threading (HT) in the BIOS.
With HT enabled, the following command shows 2:
lscpu | grep 'Thread(s) per core:'
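Example output with HT enabled (with HT disabled the value is 1):
Thread(s) per core:  2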
Multiprocessor platforms should be used in NUMA mode with one processor per NUMA node. Platforms with one or two NUMA nodes are supported.
For optimal performance, it is recommended to spread NICs across different NUMA nodes so that each processor only works with ports on its own node.
Find out the NUMA node of a device by its PCI address:
cat /sys/bus/pci/devices/0000:04:00.0/numa_node
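To see the NUMA node of every network interface at once, a short loop over sysfs can help (a sketch: virtual interfaces without a PCI device are skipped, and a value of -1 means the platform reports no NUMA affinity for that device):
for dev in /sys/class/net/*/device/numa_node; do
    [ -e "$dev" ] || continue   # skip if nothing matched the glob
    echo "$(echo "$dev" | cut -d/ -f5): $(cat "$dev")"
done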
For Intel network cards and some others, a driver (kernel module) must be loaded on the system to allow DPDK to work with them. Mellanox cards do not require kernel modules to be loaded.
Required:
1. Select a driver for the installed network devices (DPDK documentation):
   - vfio-pci: standard, recommended by default (see below);
   - uio_pci_generic: standard, used instead of vfio-pci if it does not work (see below);
   - igb_uio: non-standard, only needed if the other drivers do not work (see below);
   - virtio-pci: needed for QEMU/KVM virtual adapters.
2. Set up driver loading at system start.
Binding devices to the desired driver is done with the dpdk-devbind script (DPDK documentation).
Download and install dpdk-devbind:
wget https://docs.mitigator.ru/v22.08/dist/dpdk-devbind -O /usr/local/bin/dpdk-devbind
chmod +x /usr/local/bin/dpdk-devbind
View the status of network devices and available drivers:
dpdk-devbind --status-dev=net
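Abridged example output (device names, PCI addresses, and drivers will differ on your system):
Network devices using DPDK-compatible driver
============================================
0000:06:00.0 '82599ES 10-Gigabit SFI/SFP+' drv=vfio-pci unused=uio_pci_generic

Network devices using kernel driver
===================================
0000:04:00.0 'I350 Gigabit Network Connection' if=enp4s0f0 drv=igb unused=vfio-pci *Active*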
The Network devices using DPDK-compatible driver list contains devices bound to a DPDK-compatible driver. The Network devices using kernel driver list contains devices with standard kernel drivers. The current driver is shown in the drv= field; available drivers are listed in the unused= field. Only drivers that are loaded on the system are displayed.
If the required driver is not loaded, load it with modprobe. For example, to load the vfio-pci driver:
modprobe vfio-pci
You can match a device name (for example, eth0 or enp3s0f0) to its PCI address using the if= field in the dpdk-devbind output. It is recommended to bind all devices with a single dpdk-devbind command.
To work with vfio-pci, all ports on one card must be bound to the same driver. For example, to bind devices 06:00.0 and 06:00.1 to the vfio-pci driver:
dpdk-devbind --bind vfio-pci 06:00.0 06:00.1
If the bind command completes without error, the driver can be used.
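To verify, list the devices again; the bound ports should now appear in the Network devices using DPDK-compatible driver list:
dpdk-devbind --status-dev=net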
A device bound to a special driver disappears from the system (from the output of ip link and similar tools). The binding lasts only until reboot; automatic binding can be configured when installing MITIGATOR.
Note. Devices marked **Active** in the list have an IP address. This is usually the port through which the machine is accessed via SSH, so the script does not allow changing the driver for such devices.
If the required kernel module is not loaded by default at system startup, it can be enabled via /etc/modules-load.d. For example, to load vfio-pci:
echo vfio-pci >> /etc/modules-load.d/mitigator.conf
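To apply the new configuration immediately without rebooting (assuming a systemd-based system), restart the service that processes these files:
systemctl restart systemd-modules-load.service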
The vfio-pci module is part of the Linux kernel and is required for DPDK to work with network cards. It requires processor support for I/O virtualization (such as Intel VT-d or AMD-Vi), which is enabled in the BIOS with the appropriate settings.
The following options need to be added to the kernel parameters:
- for Intel: intel_iommu=on iommu=pt;
- for AMD: iommu=pt.
Check support:
grep 'vmx\|svm' /proc/cpuinfo >/dev/null && echo supported || echo not supported
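As an illustration of adding the kernel parameters on a system that boots via GRUB (an assumption: the file layout and the update-grub command are Debian/Ubuntu-specific; the same edit can be made manually in /etc/default/grub):
# Append the IOMMU options to the kernel command line (Intel example)
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 intel_iommu=on iommu=pt"/' /etc/default/grub
update-grub   # regenerate the GRUB config; reboot to apply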
Module loading:
modprobe vfio-pci
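After rebooting with the IOMMU options, you can check that the IOMMU is actually active; the directory below is non-empty when it is (empty output means the IOMMU is off):
ls /sys/kernel/iommu_groups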
The uio_pci_generic module is part of the Linux kernel. It is used in place of vfio-pci if the latter is not supported by the system or does not work for some reason.
Module loading:
modprobe uio_pci_generic
The igb_uio module can be used as an alternative to the other modules if they do not work.
Module installation (one of the following packages, depending on the distribution release):
apt install -y dpdk-igb-uio-dkms
apt install -y dpdk-kmods-dkms
Module loading:
modprobe igb_uio
MITIGATOR requires configured hugepages (large memory pages). The platform may support hugepages of different sizes (2 MB, 1 GB); it is recommended to configure the larger size.
The required number of hugepages depends on the desired number of protection policies. It is recommended to allocate 50-75% of the total memory to hugepages.
It is recommended to use 1 GB hugepages if supported by the platform. They can only be allocated at system boot.
Check support:
grep -m1 pdpe1gb /proc/cpuinfo
1 GB hugepages are configured via options in the kernel parameters. Example for allocating 64 x 1 GB pages:
default_hugepagesz=1G hugepagesz=1G hugepages=64
2 MB hugepages can be configured without a system reboot. Example to allocate 2048 x 2 MB pages:
sysctl -w vm.nr_hugepages=2048
Example for allocation on system boot:
echo 'vm.nr_hugepages = 2048' > /etc/sysctl.d/hugepages.conf
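To verify the allocation for either page size, check the hugepage counters in /proc/meminfo:
grep Huge /proc/meminfo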
Install Docker and Docker Compose following the official installation documentation for your OS.
You should install Docker Compose v1. MITIGATOR is not guaranteed to work with Docker Compose v2.
If https://docker.mitigator.ru is accessed through a proxy, you need to configure Docker.
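A minimal sketch of such a configuration via a systemd drop-in, following Docker's documented daemon proxy setup (proxy.example.com:3128 is a placeholder for your actual proxy address):
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
EOF
systemctl daemon-reload
systemctl restart docker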