To start a tftp server, use the lab server below the table. The files will be stored in the /var/lib/tftpboot directory.
Make sure that you can create new files in the /var/lib/tftpboot directory.
In the file /etc/default/tftpd-hpa change the options to
TFTP_OPTIONS="--secure -c"
The -c option allows the creation of new files on upload.
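If the service is already running, restart it so that the new options take effect
sudo service tftpd-hpa restart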
Disable the firewall
sudo ufw disable
Starting and stopping the tftpd service is done via
sudo service tftpd-hpa status
sudo service tftpd-hpa start
sudo service tftpd-hpa stop
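To verify that the daemon is listening on UDP port 69 you can use, for example
sudo ss -ulnp | grep 69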
The tftp server will listen on port 69 for connections. You can test the setup on localhost by opening a new shell and doing
echo "Hallo hier bin ich!!!" > hallo.txt tftp localhost trace > put hallo.txt > quit
This will create a new file “hallo.txt”. This file is then transferred to the server and becomes visible in the /var/lib/tftpboot directory.
In order to create a zero-filled file “neu.txt” of 50 MByte size you can use
dd if=/dev/zero of=neu.txt bs=1M count=50
Now start the server on one of the laptops and the client on the other laptop. Test the tftp connection by transferring “hallo.txt” from the client to the server. Run wireshark on both the client and the server to observe the packets that are transferred. Sketch a message sequence chart.
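If you want a capture for later analysis in wireshark, a tcpdump call along the following lines works as well. Note that only the initial request uses port 69; the data transfer then continues on an ephemeral port, so it is easiest to capture all UDP traffic. The interface name eth0 is an assumption, adjust it to your setup.
sudo tcpdump -i eth0 -w tftp.pcap udp   # eth0 is an assumption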
Now do the big transfer: send the 50 MByte file “neu.txt” from the client to the server and measure the transfer time.
Modify the ethernet speed to 10 MBit/s and measure again. Set the ethernet speed back to 100MBit/s after the measurement.
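One way to change the link speed is ethtool (the interface name eth0 is an assumption)
sudo ethtool -s eth0 speed 10 duplex full autoneg off   # force 10 MBit/s
sudo ethtool -s eth0 autoneg on                         # re-enable autonegotiation after the measurement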
The lab pc will be used as a network emulation device for the connection between the two laptops. Configure the lab pc with its two network cards as a bridge. Connect the two laptops directly to the lab pc.
sudo brctl addbr mybridge
sudo brctl addif mybridge eth1
sudo brctl show
sudo brctl addif mybridge eth2
sudo ifconfig eth1 0.0.0.0
sudo ifconfig eth2 0.0.0.0
sudo ifconfig mybridge up
The bridge should now be up and running. Test the connection between the two laptops with ping and measure the RTT. Do the tftp transfer test again, now across the software bridge between the two laptops.
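For example (the address 192.168.1.2 stands for the other laptop and is an assumption)
ping -c 20 192.168.1.2   # the summary line reports min/avg/max RTT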
All further modifications regarding latency and packet loss should be done on the lab pc and not on the laptops. The reason is that wireshark will tap the traffic after the netem traffic control module.
In order to add 50 ms delay to the outgoing traffic use
sudo tc qdisc add dev eth2 root netem delay 50ms
To modify the delay setting use
sudo tc qdisc change dev eth2 root netem delay 10ms
To additionally add packet loss, you can do
sudo tc qdisc change dev eth2 root netem delay 10ms loss 10.0%
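To inspect the current setting and to remove the emulation again use
tc qdisc show dev eth2
sudo tc qdisc del dev eth2 root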
Make measurements and show the relation between configured latency and measured RTT.
Analyze and compile the tcp client and server programs “client.c” and “server.c”.
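A typical compile step looks like this (the output names are an assumption)
gcc -Wall -o server server.c
gcc -Wall -o client client.c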
One important part of the TCP protocol design is its behavior in network congestion situations. TCP controls the transmit rate by modifying the sliding window size and by ACK clocking. The transmit window size is determined by the available local memory, the receiver window size and the congestion window size (cwnd); effectively, the usable window is the minimum of these values. The congestion window size is internal to the TCP protocol software, i.e. it cannot be identified by looking at the transmitted packets. In contrast, the receiver window size is announced during the packet exchange.
In order to monitor the congestion window size, the linux kernel provides a tracing module called “tcp_probe”. Once the module is loaded you can trace some TCP connection parameters. See the configuration in Adding tcpprobe traffic log.
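A minimal sketch of loading the module and logging its output, assuming the monitored connection uses port 5001 (the iperf default):
sudo modprobe tcp_probe port=5001 full=1
sudo chmod 444 /proc/net/tcpprobe
cat /proc/net/tcpprobe > /tmp/tcpprobe.out &   # stop the logging with kill when the test is done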
You can then use “gnuplot” to produce charts from the data. Please do all network latency and throughput emulation on the bridge and use the tracing with “tcp_probe” on the transmitting computer. The trace data contains the congestion window size “cwnd” and the slow start threshold “ssthr”.
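A gnuplot call along these lines then plots the congestion window over time. The column numbers refer to the usual tcp_probe output, where column 1 is the timestamp and column 7 the cwnd; this may vary with the kernel version.
gnuplot -e 'set term png; set output "cwnd.png"; plot "/tmp/tcpprobe.out" using 1:7 with lines title "cwnd"'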
You should see a fast increase of the window size until a stable value is reached.
It is easier to observe the slow start behavior with some latency in the connection. The linux kernel sets an initial congestion window size of 10 MSS in tcp.h.
Congestion avoidance cannot be observed when no packets are lost inside the network.
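A long-running bulk transfer, e.g. with iperf, together with emulated packet loss on the bridge makes this phase visible (the server address is an assumption)
iperf -s                     # on the receiving laptop
iperf -c 192.168.1.2 -t 60   # on the transmitting laptop; the address is an assumption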
I could not produce packet loss using netem and tbf traffic shaping on the same computer where iperf runs with a wired ethernet card. I suspect this is due to internal flow control, i.e. no packets are dropped inside the kernel. I could, however, produce this situation when using a wireless connection.
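On the bridge, netem and tbf can be combined in one qdisc hierarchy, e.g. as follows (delete any existing root qdisc on eth2 first; the rate and buffer values are illustrative assumptions)
sudo tc qdisc add dev eth2 root handle 1: netem delay 50ms
sudo tc qdisc add dev eth2 parent 1:1 handle 10: tbf rate 1mbit buffer 1600 limit 3000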
In the next setup the dynamic behavior of congestion avoidance and rate control is analyzed.