i40e Linux* Base Driver for the Intel(R) Ethernet Controller 700 Series
===============================================================================

January 7, 2020
Copyright(c) 2014-2020 Intel Corporation.

Contents
========

- Overview
- Identifying Your Adapter
- Important Notes
- Building and Installation
- Command Line Parameters
- Additional Features & Configurations
- Performance Optimization
- Known Issues/Troubleshooting


Overview
========
This driver supports kernel versions 2.6.32 and newer.

The associated Virtual Function (VF) driver for this driver is iavf.

Driver information can be obtained using ethtool, lspci, and ifconfig.
Instructions on updating ethtool can be found in the section Additional
Features & Configurations later in this document.

This driver is only supported as a loadable module at this time. Intel is not
supplying patches against the kernel source to allow for static linking of the
drivers.

For questions related to hardware requirements, refer to the documentation
supplied with your Intel adapter. All hardware requirements listed apply to
use with Linux.

This driver supports XDP (eXpress Data Path) on kernel 4.14 and later. Note
that XDP is blocked for frame sizes larger than 3KB.

This driver supports AF_XDP zero-copy on kernel 4.18 and later.

NOTE: 1 Gb devices based on the Intel(R) Ethernet Network Connection X722 do
not support the following features:
  * Data Center Bridging (DCB)
  * QoS
  * VMQ
  * SR-IOV
  * Tunnel Encapsulation offload (VXLAN, NVGRE)
  * Energy Efficient Ethernet (EEE)
  * Auto-media detect


Identifying Your Adapter
========================
The driver in this release is compatible with devices based on the following:
  * Intel(R) Ethernet Controller X710
  * Intel(R) Ethernet Controller XL710
  * Intel(R) Ethernet Network Connection X722
  * Intel(R) Ethernet Controller XXV710

For the best performance, make sure the latest NVM/FW is installed on your
device and that you are using the newest drivers.
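As a quick sketch of checking the driver and NVM/FW versions mentioned above
(the interface name eth0 is a placeholder for your actual i40e interface):

```shell
# Show driver name, driver version, and firmware/NVM version for an interface
# (replace eth0 with the name of your i40e interface):
ethtool -i eth0

# List Intel Ethernet devices on the PCI bus to help identify your adapter:
lspci | grep -i ethernet
```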
For information on how to identify your adapter, and for the latest NVM/FW
images and Intel network drivers, refer to the Intel Support website:
http://www.intel.com/support

SFP+ and QSFP+ Devices:
-----------------------
For information about supported media, refer to this document:
http://www.intel.com/content/dam/www/public/us/en/documents/release-notes/xl710-ethernet-controller-feature-matrix.pdf

NOTE: Some adapters based on the Intel(R) Ethernet Controller 700 Series only
support Intel Ethernet Optics modules. On these adapters, other modules are
not supported and will not function.

NOTE: For connections based on the Intel(R) Ethernet Controller 700 Series,
support is dependent on your system board. Please see your vendor for details.

NOTE: In all cases Intel recommends using Intel Ethernet Optics; other modules
may function but are not validated by Intel. Contact Intel for supported media
types.

NOTE: In systems that do not have adequate airflow to cool the adapter and
optical modules, you must use high temperature optical modules.


Important Notes
===============

TC0 must be enabled when setting up DCB on a switch
---------------------------------------------------
The kernel assumes that TC0 is available, and will disable Priority Flow
Control (PFC) on the device if TC0 is not available. To fix this, ensure TC0
is enabled when setting up DCB on your switch.

Enabling a VF link if the port is disconnected
----------------------------------------------
If the physical function (PF) link is down, you can force link up (from the
host PF) on any virtual functions (VF) bound to the PF. Note that this
requires kernel support (Red Hat kernel 3.10.0-327 or newer, upstream kernel
3.11.0 or newer, and associated iproute2 user space support).

For example, to force link up on VF 0 bound to PF eth0:
# ip link set eth0 vf 0 state enable

Note: If the command does not work, it may not be supported by your system.
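The VF link-state change above can be verified with iproute2; a hedged sketch,
where eth0 is a placeholder for your PF interface:

```shell
# Force link up on VF 0 bound to PF eth0 (requires kernel and iproute2 support):
ip link set eth0 vf 0 state enable

# Verify the change: 'ip link show' lists each VF under the PF along with its
# MAC address and configured link state (auto, enable, or disable):
ip link show eth0
```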
Do not unload port driver if VF with active VM is bound to it
-------------------------------------------------------------
Do not unload a port's driver if a Virtual Function (VF) with an active
Virtual Machine (VM) is bound to it. Doing so will cause the port to appear
to hang. Once the VM shuts down, or otherwise releases the VF, the command
will complete.

Configuring SR-IOV for improved network security
------------------------------------------------
In a virtualized environment, on Intel(R) Ethernet Network Adapters that
support SR-IOV, the virtual function (VF) may be subject to malicious
behavior. Software-generated layer two frames, like IEEE 802.3x (link flow
control), IEEE 802.1Qbb (priority based flow control), and others of this
type, are not expected and can throttle traffic between the host and the
virtual switch, reducing performance. To resolve this issue, and to ensure
isolation from unintended traffic streams, configure all SR-IOV enabled ports
for VLAN tagging from the administrative interface on the PF. This
configuration allows unexpected, and potentially malicious, frames to be
dropped.

See "Configuring VLAN Tagging on SR-IOV Enabled Adapter Ports" in this README
for configuration instructions.

Firmware Recovery Mode
----------------------
A device will enter Firmware Recovery mode if it detects a problem that
requires the firmware to be reprogrammed. When a device is in Firmware
Recovery mode it will not pass traffic or allow any configuration; you can
only attempt to recover the device's firmware. Refer to the Intel(R) Ethernet
Adapters and Devices User Guide for details on Firmware Recovery Mode and how
to recover from it.


Building and Installation
=========================

To build a binary RPM package of this driver
--------------------------------------------
Note: RPM functionality has only been tested in Red Hat distributions.

1. Run the following command, where <x.x.x> is the version number for the
   driver tar file:
   # rpmbuild -tb i40e-<x.x.x>.tar.gz

   NOTE: For the build to work properly, the currently running kernel MUST
   match the version and configuration of the installed kernel sources. If
   you have just recompiled the kernel, reboot the system before building.

2. After building the RPM, the last few lines of the tool output contain the
   location of the RPM file that was built. Install the RPM with one of the
   following commands, where <RPM> is the location of the RPM file:

   # rpm -Uvh <RPM>
   or
   # dnf/yum localinstall <RPM>

NOTES:
- To compile the driver on some kernel/arch combinations, you may need to
  install a package with the development version of libelf (e.g. libelf-dev,
  libelf-devel, elfutils-libelf-devel).
- When compiling an out-of-tree driver, details will vary by distribution.
  However, you will usually need a kernel-devel RPM or some RPM that provides
  the kernel headers at a minimum. The kernel-devel RPM will usually fill in
  the link at /lib/modules/`uname -r`/build.

To manually build the driver
----------------------------
1. Move the base driver tar file to the directory of your choice. For
   example, use '/home/username/i40e' or '/usr/local/src/i40e'.

2. Untar/unzip the archive, where <x.x.x> is the version number for the
   driver tar file:
   # tar zxf i40e-<x.x.x>.tar.gz

3. Change to the driver src directory, where <x.x.x> is the version number
   for the driver tar:
   # cd i40e-<x.x.x>/src/

4. Compile the driver module:
   # make install

   The binary will be installed as:
   /lib/modules/<KERNEL_VERSION>/updates/drivers/net/ethernet/intel/i40e/i40e.ko

   The install location listed above is the default location. This may differ
   for various Linux distributions.

   NOTE: To gather and display additional statistics, use the I40E_ADD_PROBES
   pre-processor macro:
   # make CFLAGS_EXTRA=-DI40E_ADD_PROBES
   Please note that this additional statistics gathering can impact
   performance.

5. Load the module using the modprobe command.
   To check the version of the driver and then load it:
   # modinfo i40e
   # modprobe i40e [parameter=port1_value,port2_value]

   Alternately, make sure that any older i40e drivers are removed from the
   kernel before loading the new module:
   # rmmod i40e; modprobe i40e

6. Assign an IP address to the interface by entering the following, where
   <ethX> is the interface name that was shown in dmesg after modprobe:
   # ip address add <IP_address>/<netmask bits> dev <ethX>

7. Verify that the interface works. Enter the following, where <IP_address>
   is the IP address for another machine on the same subnet as the interface
   that is being tested:
   # ping <IP_address>


Command Line Parameters
=======================
In general, ethtool and other OS-specific commands are used to configure
user-changeable parameters after the driver is loaded. The i40e driver only
supports the max_vfs kernel parameter on older kernels that do not have the
standard sysfs interface. The only other module parameter supported is the
debug parameter that can control the default logging verbosity of the driver.

If the driver is built as a module, the following optional parameters are
used by entering them on the command line with the modprobe command using
this syntax:
# modprobe i40e [<option>=<VAL1>,<VAL2>]
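As a hedged sketch of inspecting the module parameters described above,
modinfo can list exactly which parameters the installed i40e module exports
(the debug value shown is an illustration only, not a recommended setting):

```shell
# List the parameters (with descriptions) exported by the installed i40e module:
modinfo -p i40e

# Example of loading the driver with a specific logging verbosity; the value
# passed to the debug parameter here is an arbitrary example:
# modprobe i40e debug=4
```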