Using Tmix in ns-2
Rapid Research Group @ CS @ UNC-CH
Initial draft: October 2011
Last updated: 17 March 2013
This guide is dedicated (but not limited in its utility) to UNC-CH people who are interested in using ns-2 to simulate and test Rapid packet-scale congestion control. This page should give you enough information to get started with using the Tmix traffic generator. It is not intended to be comprehensive.
In order to perform realistic network simulations, one needs a traffic generator that is capable of generating realistic synthetic traffic in a closed-loop fashion that "looks like" traffic found on an actual network.
The Tmix system takes as input a packet header trace file captured from a network link of interest (such as the link between the UNC campus and the rest of the Internet). This trace is reverse-compiled into a connection vector (or cvec) file, which is a source-level characterization of each TCP connection present in the trace. Tmix then uses this information to emulate the socket-level behavior of the source application that created the corresponding connection in the trace. The resulting traffic generation is statistically representative of the traffic measured on the real link.
You may not need to capture a new trace file in order to use Tmix. You can use an existing trace file and use the compilation tool to make it compatible for use in ns-2. The average traffic load can be adjusted either up or down using a scaling tool.
For more details about the traffic model that Tmix uses, see "Tmix: A Tool for Generating Realistic TCP Application Workloads in ns-2" (2006).
Tmix has been included in the standard codebase of ns-2 since ns-2.34.
However, the version of Tmix that we will use is only in ns-2.35+. Please use these instructions to download ns-2.35.
A typical Tmix topology includes a pair of delay box nodes, an outbound initiator/acceptor node pair, and an inbound initiator/acceptor node pair:
Each Tmix initiator node may represent many individual TCP connections, as defined by outbound and inbound connection vector (cvec) files. Application traffic flows from the initiator to the acceptor within each pair. The delay box pair allows per-connection RTTs, bottleneck links, and loss rates to be simulated. Delay boxes are not strictly necessary; if you don't need to simulate these per-connection properties, you may connect the Tmix nodes to ordinary ns-2 nodes.
For more information about the details encapsulated by a connection vector, see the ns-2 documentation for Tmix.
To use Tmix in ns-2, you may use an existing cvec file pair, or use the tools necessary to convert a network trace into a cvec file and/or to scale the average amount of traffic offered by the Tmix connections.
Depending on the needs of your experiment, you may not need to run an entirely new network trace to use Tmix. There are a number of pre-existing trace files and converted cvec files out on the web.
I will place links to cvec files, along with information about their source if known, as I come across them. For more information about obtaining trace files or existing cvec files, please contact me at lovewell (at) cs (dot) unc (dot) edu.
We have a script to convert from trace file format to ns-ready cvec format. For more information about this tool, please contact me at lovewell (at) cs (dot) unc (dot) edu.
Depending on the needs of your experiment, you may want a specific average traffic load. You can scale the average traffic load of an existing cvec file by using a scaling script. For more information about this tool, please contact me at lovewell (at) cs (dot) unc (dot) edu.
Tmix delay boxes can be treated like regular nodes. Thus, we can attach a Rapid TCP (or other TCP) sender/receiver pair to the delay boxes to create a familiar dumbbell topology:
Example cvec files, simulation code, processing scripts, and a gnuplot script for plotting output for simulating the above simple topology can be downloaded as an archive.
To run the simulation code in the source.tcl file, use the shell command ns source.tcl.
Each experiment should be run long enough to mitigate start-up and slow-down effects of the component nodes.
Running the ns-2 script source.tcl will produce two trace-queue output files, one each for the outbound bottleneck router and the inbound bottleneck router.
Here is an excerpt from data-outbound.out:

+ 0.056855 0 1 ack 40   CP-AEFN 6 5.11 4.10 0 21
- 0.056855 0 1 ack 40   CP-AEFN 6 5.11 4.10 0 21
+ 0.057529 0 1 tcp 1080 CP-AEFN 0 6.0  7.0  3 22
- 0.057529 0 1 tcp 1080 CP-AEFN 0 6.0  7.0  3 22
+ 0.059689 0 1 tcp 1080 CP-AEFN 0 6.0  7.0  4 23
- 0.059689 0 1 tcp 1080 CP-AEFN 0 6.0  7.0  4 23
+ 0.06384  0 1 tcp 40   CP-AEFN 1 2.0  3.1  0 24
- 0.06384  0 1 tcp 40   CP-AEFN 1 2.0  3.1  0 24
r 0.064655 0 1 tcp 1080 CP-AEFN 0 6.0  7.0  1 12
+ 0.07088  0 1 ack 40   CP-AEFN 7 5.13 4.12 0 29
More information about the meaning of each field in a trace-queue output file can be found in the ns-2 manual. Note that a Tmix node does not correspond to a single connection initiator or acceptor. Instead, a single Tmix initiator node generates TCP connections coming from a "cloud" of connection initiators (each represented by a port on the node). Likewise, a single Tmix acceptor node accepts and serves TCP connections destined for a "cloud" of connection acceptors.
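The record parsing that any post-processing script needs can be sketched in a few lines of Python. This is an illustrative helper, not code from the archive; the field names are assumptions based on the standard ns-2 trace layout (event, time, from-node, to-node, packet type, size, flags, flow id, source address, destination address, sequence number, packet id).

```python
from collections import namedtuple

# Assumed field layout of one ns-2 trace-queue record (see the ns-2 manual).
TraceRecord = namedtuple("TraceRecord",
    "event time from_node to_node pkt_type size flags fid src dst seqno pkt_id")

def parse_trace_line(line):
    """Split one whitespace-separated ns-2 trace line into typed fields."""
    f = line.split()
    return TraceRecord(f[0], float(f[1]), int(f[2]), int(f[3]), f[4],
                       int(f[5]), f[6], int(f[7]), f[8], f[9],
                       int(f[10]), int(f[11]))

# The third record from the excerpt above: a 1080-byte tcp packet
# enqueued at t = 0.057529 s, traveling from node 6.0 to node 7.0.
rec = parse_trace_line("+ 0.057529 0 1 tcp 1080 CP-AEFN 0 6.0 7.0 3 22")
```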
Sample queue size (process-queue-size.py) and throughput/utilization (process-throughput.py) processing scripts are included in the sample experiment archive.
To compute throughput for the Rapid data flow, isolate data records for packets traveling from node 6.0 to node 7.0, sum the bits received in each 100 ms interval, divide by the interval length, and divide by 1,000,000 bits/Mb to express throughput in Mbps.
[Figure: Outbound Throughput]
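The throughput computation can be sketched as follows. This is a hypothetical reimplementation in the spirit of process-throughput.py, not the archive's script itself; it assumes the standard ns-2 trace layout, with 'r' events marking packets received at the link's downstream node.

```python
from collections import defaultdict

BIN = 0.1                    # bin width in seconds (100 ms)
BITS_PER_MBIT = 1_000_000    # scaling factor from the text

def throughput_mbps(lines, src="6.0", dst="7.0"):
    """Return {bin start time: Mbps} for packets received on one flow."""
    byte_bins = defaultdict(int)
    for line in lines:
        f = line.split()
        if len(f) < 12:
            continue
        # 'r' = packet received; fields 8 and 9 are src/dst addresses
        if f[0] == "r" and f[8] == src and f[9] == dst:
            byte_bins[int(float(f[1]) / BIN)] += int(f[5])
    return {round(b * BIN, 3): nbytes * 8 / BIN / BITS_PER_MBIT
            for b, nbytes in sorted(byte_bins.items())}
```

For example, two 1250-byte packets received within the same 100 ms bin yield 2500 bytes * 8 bits / 0.1 s / 1,000,000 = 0.2 Mbps for that bin.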
To compute bottleneck link utilization, process throughput for all packets traveling through a queue and divide by the link capacity of 100,000,000 bits/s (100 Mbps).
[Figure: Outbound Utilization]
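Utilization is the same binned computation taken over every packet through the queue rather than a single flow, divided by capacity. Again a hypothetical sketch, assuming a 100 Mbps bottleneck link and the standard ns-2 trace layout:

```python
from collections import defaultdict

BIN = 0.1                          # 100 ms bins, as in the throughput plot
LINK_CAPACITY_BPS = 100_000_000    # 100 Mbps bottleneck link capacity

def utilization(lines):
    """Return {bin start time: fraction of link capacity consumed}."""
    byte_bins = defaultdict(int)
    for line in lines:
        f = line.split()
        # count every packet received downstream of the queue, any flow
        if len(f) >= 12 and f[0] == "r":
            byte_bins[int(float(f[1]) / BIN)] += int(f[5])
    return {round(b * BIN, 3): nbytes * 8 / BIN / LINK_CAPACITY_BPS
            for b, nbytes in sorted(byte_bins.items())}
```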
Use the queue processing script to view the size of the bottleneck router queue as a function of time:
[Figure: Outbound Queue Size]
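The queue-size reconstruction can be sketched in the same style: each '+' event enqueues a packet, and each '-' (dequeue) or 'd' (drop) event removes one. This is an illustrative version, not the archive's process-queue-size.py, and it counts packets rather than bytes.

```python
def queue_size_series(lines):
    """Return [(time, packets in queue)] after each enqueue/dequeue/drop."""
    size, series = 0, []
    for line in lines:
        f = line.split()
        if len(f) < 12:
            continue
        if f[0] == "+":              # packet enqueued
            size += 1
        elif f[0] in ("-", "d"):     # packet dequeued or dropped
            size -= 1
        else:                        # 'r' events don't change this queue
            continue
        series.append((float(f[1]), size))
    return series
```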