mTCP is a highly scalable user-level TCP stack for multicore systems. mTCP source code is distributed under the Modified BSD License. For more detail, please refer to the LICENSE. The license terms of the io_engine driver and the ported applications may differ from mTCP's.
We require the following libraries to run mTCP.
Using CCP (https://ccp-project.github.io/) for congestion control (disabled by default) requires building and running a CCP algorithm. If you would like to enable CCP (i.e. not use the internal implementation of Reno), simply run the configure script with the --enable-ccp option.
Install Rust. Any installation method should be fine. We recommend using rustup:
curl https://sh.rustup.rs -sSf | sh -s -- -y -v --default-toolchain nightly
Build a CCP algorithm. The generic-cong-avoid (https://github.com/ccp-project/generic-cong-avoid) package implements standard TCP Reno and Cubic, so it is probably the best one to start with. The same steps can be followed to build any of the other algorithms hosted in the ccp-project (https://github.com/ccp-project) organization, such as bbr (https://github.com/ccp-project/bbr).
git clone https://github.com/ccp-project/generic-cong-avoid.git
cd generic-cong-avoid
cargo +nightly build
Later, after you've built mTCP and started an mTCP application (such as epserver or perf), you must start the CCP binary you just built. If you try to start the CCP process before running an mTCP application, it will report a "connection refused" error.
cd generic-cong-avoid
sudo ./target/debug/reno --ipc unix
mtcp                  - mtcp source code directory
mtcp/src              - source code
mtcp/src/include      - mTCP's internal header files
mtcp/lib              - library file
mtcp/include          - header files that applications will use

io_engine             - event-driven packet I/O engine (io_engine)
io_engine/driver      - driver source code
io_engine/lib         - io_engine library
io_engine/include     - io_engine header files
io_engine/samples     - sample io_engine applications (not mTCP's)

dpdk                  - Intel's Data Plane Development Kit
dpdk/...

apps                  - mTCP applications
apps/example          - example applications (see README)
apps/lighttpd-1.4.32  - mTCP-ported lighttpd (see INSTALL)
apps/apache_benchmark - mTCP-ported apache benchmark (ab) (see README-mtcp)

util                  - useful source code for applications

config                - sample mTCP configuration files (may not be necessary)
mTCP can be prepared in three ways.
Set up DPDK first.
# bash setup_mtcp_dpdk_env.sh [<path to $RTE_SDK>]
Press [15] to compile x86_64-native-linuxapp-gcc version
Press [18] to install igb_uio driver for Intel NICs
Press [22] to setup 2048 2MB hugepages
Press [24] to register the Ethernet ports
Press [35] to quit the tool
Only the devices listed on http://dpdk.org/doc/nics work with the DPDK drivers. Please make sure that your NIC is compatible before moving on to the next step.
We use dpdk/ as our DPDK driver. FYI, you can pass a different dpdk source directory as command line argument.
Bring the dpdk-compatible interfaces up, and then set the RTE_SDK and RTE_TARGET environment variables. If you are using Intel NICs, the interfaces will have a dpdk prefix.
# sudo ifconfig dpdk0 x.x.x.x netmask 255.255.255.0 up
# export RTE_SDK=`echo $PWD`/dpdk
# export RTE_TARGET=x86_64-native-linuxapp-gcc
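Before running configure, it can help to sanity-check that the two exported variables point at an existing DPDK build tree. A minimal sketch, assuming the RTE_SDK/RTE_TARGET values set above:

```shell
# Sanity-check the DPDK paths before running ./configure.
# Values mirror the exports above; adjust if your layout differs.
export RTE_SDK="$PWD/dpdk"
export RTE_TARGET=x86_64-native-linuxapp-gcc
DPDK_LIB_DIR="$RTE_SDK/$RTE_TARGET"
echo "configure will be pointed at: $DPDK_LIB_DIR"
# Warn (rather than fail) if the build tree is not there yet.
[ -d "$DPDK_LIB_DIR" ] || echo "note: $DPDK_LIB_DIR does not exist yet; build DPDK first"
```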
Setup mtcp library:
# ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET
# make
Run the applications!
You can revert all your changes by running the following script.
# bash setup_linux_env.sh [<path to $RTE_SDK>]
Press [29] to unbind the Ethernet ports
Press [30] to remove igb_uio.ko driver
Press [33] to remove hugepage mappings
Press [34] to quit the tool
NEW: Now you can run mTCP applications (server + client) locally.
A local setup is useful when only 1 machine is available for the experiment.
ONVM configurations are placed as .conf files in the apps/example directory.
ONVM basics are explained in https://github.com/sdnfv/openNetVM.
Before running the applications, make sure that onvm_mgr is running. Also, the applications and onvm_mgr must not share any cores.
Install openNetVM by following the instructions at https://github.com/sdnfv/openNetVM/blob/master/docs/Install.md
Set up the dpdk interfaces:
# bash setup_mtcp_onvm_env.sh
Next, bring the dpdk-registered interfaces up:
# sudo ifconfig dpdk0 x.x.x.x netmask 255.255.255.0 up
Setup mtcp library
# ./configure --with-dpdk-lib=$<path_to_dpdk> --with-onvm-lib=$<path_to_onvm_lib>
# e.g. ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --with-onvm-lib=`echo $ONVM_HOME`/onvm
# make
By default, mTCP assumes that there are 16 CPUs in your system. You can set the CPU limit, e.g. on a 32-core system, by using the following command:
# ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET CFLAGS="-DMAX_CPUS=32"
Please note that your NIC should support RSS queues equal to the MAX_CPUS value (since mTCP expects a one-to-one RSS queue to CPU binding).
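Since mTCP expects a one-to-one RSS-queue-to-CPU binding, it is worth checking that the MAX_CPUS value you configure does not exceed the machine's online CPU count. A minimal sketch (MAX_CPUS=32 is just the example value from the text; your NIC's own queue limit must still be checked separately):

```shell
# Compare the intended MAX_CPUS against the number of online CPUs.
# MAX_CPUS=32 is a hypothetical example value, not an mTCP default.
MAX_CPUS=32
NCPU=$(getconf _NPROCESSORS_ONLN)
if [ "$MAX_CPUS" -gt "$NCPU" ]; then
    echo "warning: MAX_CPUS ($MAX_CPUS) exceeds online CPUs ($NCPU)"
else
    echo "ok: MAX_CPUS=$MAX_CPUS fits within $NCPU online CPUs"
fi
```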
In case the `./configure' script prints an error, run the following command and then re-do step 4 (configure again):
# autoreconf -ivf
Checksum offloading in the NIC is now ENABLED by default! (This only works for dpdk at the moment.)
check libmtcp.a in mtcp/lib
check header files in mtcp/include
check example binary files in apps/example
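The three checks above can be scripted. This is a minimal sketch using the paths from the directory layout; in a real checkout these artifacts exist only after a successful make, so it only reports what is present:

```shell
# Report which expected build artifacts are present after `make`.
for f in mtcp/lib/libmtcp.a mtcp/include apps/example; do
    if [ -e "$f" ]; then
        echo "found:   $f"
    else
        echo "missing: $f"
    fi
done
```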
Run the applications!
You can revert all your changes by running the following script.
# bash setup_linux_env.sh
Press [29] to unbind the Ethernet ports
Press [30] to remove igb_uio.ko driver
Press [33] to remove hugepage mappings
Press [34] to quit the tool
To prevent this, run the ONVM manager with the base virtual address parameter (the core list argument 0xf8 isn't actually used by mTCP NFs, but is required), e.g.:
# cd openNetVM/onvm
# ./go.sh 1,2,3 1 0xf8 -s stdout -a 0x7f000000000
See README.netmap for details.
mTCP runs on Linux-based operating systems (2.6.x for PSIO) with generic x86_64 CPUs. To help evaluation, we list our tested environments below.
Intel Xeon E5-2690 octacore CPU @ 2.90 GHz
32 GB of RAM (4 memory channels)
10 GbE NIC with Intel 82599 chipset (specifically Intel X520-DA2)
Debian 6.0.7 (Linux 2.6.32-5-amd64)
Intel Core i7-3770 quadcore CPU @ 3.40 GHz
16 GB of RAM (2 memory channels)
10 GbE NIC with Intel 82599 chipset (specifically Intel X520-DA2)
Ubuntu 10.04 (Linux 2.6.32-47)
Event-driven PacketShader I/O engine (extended io_engine-0.2)
mTCP currently runs with fixed memory pools. That means the sizes of the TCP receive and send buffers are fixed at startup and do not grow dynamically. This can limit performance for large, long-lived connections. Be sure to configure the buffer sizes appropriately for your workload.
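Because the pools are fixed, it is worth estimating the memory footprint up front. A back-of-envelope sketch with hypothetical example values (8 KB per receive/send buffer and 100,000 concurrent connections are assumptions for illustration, not mTCP defaults):

```shell
# Estimate the fixed-pool memory footprint for the TCP buffers.
# All three values below are hypothetical examples, not mTCP defaults.
RCVBUF_BYTES=8192
SNDBUF_BYTES=8192
MAX_CONNS=100000
TOTAL_BYTES=$(( (RCVBUF_BYTES + SNDBUF_BYTES) * MAX_CONNS ))
echo "approx buffer pool size: $(( TOTAL_BYTES / 1024 / 1024 )) MB"
```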
The client side of mTCP supports mtcp_init_rss(), which creates an address pool that can be used to fetch an available address in O(1). To easily congest the server side, this function should be called at application startup.
The supported socket options are limited for now. Please refer to mtcp/src/api.c for more detail.
The peer communicating with mTCP should have the TCP timestamp option enabled.
mTCP has been tested with the following Ethernet adapters:
For some Linux distros (e.g. Ubuntu), NetworkManager may re-assign a different IP address, or delete the assigned IP address. Disable NetworkManager temporarily if that's the case; it will be re-enabled upon reboot.
# sudo service network-manager stop
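Whether NetworkManager is interfering can be checked before assigning the address. A small sketch (pgrep is assumed to be available; "NetworkManager" is the common upstream process name, though it may differ on some distros):

```shell
# Check whether a NetworkManager process is running before configuring dpdk0.
if pgrep -x NetworkManager >/dev/null 2>&1; then
    echo "NetworkManager is running; it may override addresses on dpdk interfaces"
else
    echo "NetworkManager is not running"
fi
```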
Do not remove I/O driver (ps_ixgbe/igb_uio) while running mTCP applications. The application will panic!
Use the ps_ixgbe/dpdk driver contained in this package, not the one from some other place (e.g., from io_engine github).
========================================================================
Contact: mtcp-user at list.ndsl.kaist.edu
April 2, 2015.
EunYoung Jeong <notav at ndsl.kaist.edu>
M. Asim Jamshed <ajamshed at ndsl.kaist.edu>