Using Dummynet for Traffic Shaping on FreeBSD

http://www.bsdnews.org/02/dummynet.php
by Alex Popa

Traffic Shaping

Traffic shaping is the process of limiting the speed of certain transfers on your network. This article shows how to limit traffic in several ways, such as sharing bandwidth between departments or users, or preventing particular services from consuming all of your bandwidth.
Configuring Your Kernel for Traffic Shaping

In order to get traffic shaping functionality from your FreeBSD system, you need to compile a kernel with the following options:

options IPFIREWALL
options DUMMYNET
options HZ=1000

The HZ option is not required by dummynet, but it is “strongly recommended” in the dummynet(4) man page. For more details on configuring and installing a custom kernel, read the Configuring the FreeBSD Kernel page in the FreeBSD Handbook.

NOTE: The traffic shaping mechanism works by delaying packets so that the configured transfer speed is not exceeded. A packet is stored and sent later. Sending is triggered by the kernel timer, so a higher timer frequency makes the traffic smoother (the delays are smaller). Leaving HZ at its default value of 100 causes sends to be triggered every 10 milliseconds, resulting in burstier traffic. With HZ=1000, the trigger fires every millisecond, resulting in smaller per-packet delays.
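As a brief sketch of the rebuild itself (SHAPER is a hypothetical configuration name; adjust the name and paths to your own source tree):

# SHAPER is a hypothetical kernel config, copied from GENERIC,
# containing the three options above
cd /usr/src
make buildkernel KERNCONF=SHAPER
make installkernel KERNCONF=SHAPER
# reboot into the new kernel afterwards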
Traffic Shaping Basics

Traffic shaping occurs in three stages: configuring the pipes, configuring the queues, and diverting traffic through the queues and/or pipes.
Dummynet Features

Pipes are the basic elements of the traffic shaper. A pipe is used to emulate a network link with a certain bandwidth, delay and packet loss rate.

Queues are used for implementing weighted fair queuing and cannot be used without a pipe. All queues connected to a pipe share that pipe's bandwidth in configurable proportions.

The most important parameter of a pipe configuration is its bandwidth. Setting the bandwidth is done with this command:

# ipfw pipe 1 config bw 120Kbit/s

This command creates pipe 1 (if it does not already exist), and assigns it a 120 kilobits per second bandwidth. If the pipe already exists, its bandwidth will be changed to 120 kilobits per second.
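A pipe can also emulate link delay (in milliseconds) and a packet loss rate (a fraction between 0 and 1), the other two parameters mentioned above. A small sketch combining them (the values are arbitrary):

# emulate a slow link: 120Kbit/s bandwidth, 50 ms delay, 0.1% packet loss
ipfw pipe 1 config bw 120Kbit/s delay 50 plr 0.001

# inspect the resulting pipe configuration
ipfw pipe show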

There are two very important parameters when configuring a queue: the number of the pipe it will connect to, and the weight. The weight must be in the range 1 to 100 and defaults to 1. There can be multiple queues connected to the same pipe.

# ipfw queue 5 config pipe 1 weight 20

This command instructs dummynet to configure queue 5 to use pipe 1, with a weight of 20. The weight parameter lets you specify the ratio of bandwidth the queues will use; queues with higher weights get more bandwidth. For example, if a 120Kbit/s pipe sees active traffic (“flows”) from three queues with weights 3, 2 and 1, the flows will be assigned 60Kbit/s, 40Kbit/s and 20Kbit/s respectively. If the flow from the queue with weight 2 disappears, leaving only the flows with weights 3 and 1, those will get 90Kbit/s and 30Kbit/s respectively.
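As a sketch, the 3:2:1 example above could be configured like this (the queue numbers are arbitrary):

# three queues sharing one 120Kbit/s pipe in a 3:2:1 ratio
ipfw pipe 1 config bw 120Kbit/s
ipfw queue 1 config pipe 1 weight 3
ipfw queue 2 config pipe 1 weight 2
ipfw queue 3 config pipe 1 weight 1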

NOTE: The weight concept may seem strange, but it is rather simple. If queues have equal weights, they will get the same amount of bandwidth. If queue 2 has double the weight of queue 1, it will get twice as much bandwidth. Queues that have no traffic are not taken into account when dividing traffic. This means that in a configuration with two queues, one with weight 1 (for unimportant traffic) and the other with weight 99 (for important business traffic), having both queues active will result in a 1%/99% sharing, but if there is no traffic on the 99 queue, the unimportant traffic will get all of the bandwidth.

Another very useful option is mask. It has the same syntax for both pipes and queues: append mask mask-specifier to the end of the config command. The effect is to apply the mask to each packet's addressing fields, turning one flow into several; the masked fields determine which flow a packet belongs to. The default mask is empty, meaning all packets fall into the same flow. Using mask all makes every field significant, so each TCP connection or UDP exchange is considered a separate flow.

For a pipe, each flow acts as if it has a separate pipe that is an exact clone of the original, with the same parameters. This means three active flows will use (for our example pipe) 360Kbit/s in total, 120Kbit/s each.

For a queue, the flows act as several queues, each with the same weight as the original. This means you can use the mask to share a given bandwidth equally. For our example with three flows and the 120Kbit/s pipe, each flow gets a third of that bandwidth, namely 40 kilobits per second.
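To make the two cases concrete, here is a sketch of a mask on a pipe and on a queue (numbers reused from the earlier examples):

# pipe mask: each destination host gets its own 120Kbit/s clone
ipfw pipe 1 config bw 120Kbit/s mask dst-ip 0xffffffff

# queue mask: every TCP/UDP conversation becomes a separate flow,
# all with weight 20, sharing pipe 1 equally among themselves
ipfw queue 5 config pipe 1 weight 20 mask all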

WARNING: This article assumes you will either integrate these rules into your firewall configuration, or use ipfw only for traffic shaping. In the latter case, having IPFIREWALL_DEFAULT_TO_ACCEPT in the kernel will greatly simplify your task.
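If you take the latter approach, the corresponding kernel configuration line is:

options IPFIREWALL_DEFAULT_TO_ACCEPT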
Configuring the Traffic Shaper
Simple Configurations

There are several ways of limiting bandwidth. Here are some examples, which assume ed0 is the external interface:

# only outgoing gets limited
ipfw pipe 1 config bw 100Kbit/s

ipfw add 1 pipe 1 ip from any to any out xmit ed0

To limit both incoming and outgoing traffic, to 100Kbit/s and 50Kbit/s respectively:

ipfw pipe 1 config bw 100Kbit/s
ipfw pipe 2 config bw 50Kbit/s

ipfw add 100 pipe 1 ip from any to any in recv ed0
ipfw add 100 pipe 2 ip from any to any out xmit ed0

To limit the total of incoming plus outgoing traffic:

ipfw pipe 1 config bw 100Kbit/s

ipfw add 100 pipe 1 ip from any to any in recv ed0
ipfw add 100 pipe 1 ip from any to any out xmit ed0


Each host gets 16Kbit/s (incoming only; outgoing is not limited):

ipfw pipe 1 config bw 16Kbit/s mask dst-ip 0xffffffff

ipfw add 100 pipe 1 ip from any to any in recv ed0
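When experimenting with these rule sets, it helps to start from a clean slate each time. A small housekeeping sketch (not part of the original examples):

# remove all rules (the -f flag skips the confirmation prompt)
ipfw -f flush
# remove all pipes and queues
ipfw pipe flush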

Complex Configurations

Here are a couple of real-life or theoretical examples.

Limiting a web server's outgoing traffic speed

I have used this configuration on one of my servers. The server had some FreeBSD ISO files, and I did not want it to hog all the outgoing bandwidth. Another purpose was to prevent people from gaining an unfair advantage by using download accelerators, so I chose to share the total outgoing bandwidth equally among /24 (24-bit netmask) networks.

# pipe configuration, 2000Kbit/s maximum
ipfw pipe 1 config bw 2000Kbit/s

# the queue will be used to enforce the /24 limit mentioned above
ipfw queue 1 config pipe 1 mask dst-ip 0xffffff00
# with this mask, only the first 24 bits of the destination IP
# address are taken into consideration when generating the flow ID

# divert outgoing traffic from the web server (at 1.1.1.1)
ipfw add queue 1 tcp from 1.1.1.1 80 to any out
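To check that the per-/24 sharing works, you can watch the dynamic flows dummynet creates as clients download (a generic verification step, not from the original setup):

# lists queue 1 with its weight and one entry per active /24 flow
ipfw queue show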

Limiting incoming traffic by department

This is a configuration used to limit the incoming bandwidth in a small company behind a 1Mbit/s connection. Before this was applied, some users were using peer-to-peer clients and download accelerators, and were hogging almost all the bandwidth. The solution was to implement some weighted sharing between departments, and let the departments take care of their own hogs.

# Variables we will use
# External interface
EXTIF=fxp0
# My IP address
ME=192.168.1.1

# configure the pipe, 95% of total incoming capacity
ipfw pipe 1 config bw 950Kbit/s

# configure the queues for the departments
# departments 1 and 2 are heavy network users
ipfw queue 1 config pipe 1 weight 40
ipfw queue 2 config pipe 1 weight 40

# accounting, they shouldn't use the network a lot
ipfw queue 3 config pipe 1 weight 5

# medium usage for others
ipfw queue 4 config pipe 1 weight 20

# incoming mail (SMTP) to this server, HIGH priority
ipfw queue 10 config pipe 1 weight 100

# not caught by the previous categories - VERY LOW bandwidth
ipfw queue 11 config pipe 1 weight 1

# classify the traffic
# only incoming traffic is limited, outgoing is not affected.

ipfw add 10 allow ip from any to any out xmit $EXTIF

# department 1
ipfw add 100 queue 1 ip from any to 192.168.0.16/28 in via $EXTIF

# department 2
ipfw add 200 queue 2 ip from any to 192.168.0.32/28 in via $EXTIF

# accounting
ipfw add 300 queue 3 ip from any to 192.168.0.48/28 in via $EXTIF

# mail
ipfw add 1000 queue 10 tcp from any to $ME 25 in via $EXTIF

# others
ipfw add 1100 queue 11 ip from any to any in via $EXTIF
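Once the rules are loaded, the per-rule packet and byte counters give a quick way to confirm that traffic is being classified as intended (a generic ipfw check, not part of the original configuration):

# rule list with packet/byte counters
ipfw show

# queues with their weights and any active flows
ipfw queue show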

The incoming limit is set to 95% of the true available bandwidth, which allows the shaper to delay some packets. If the pipe were configured with the same bandwidth as the physical link, the link itself would remain the bottleneck and the pipe's delay queues would stay empty; with nothing queued, the weights would have no effect. Reserving 5% moves the bottleneck into the shaper, so packets accumulate in the queues and the shaper can choose among them by weight, letting packets from queues with higher weights through more often than those from queues with lower weights.
Conclusion

Dummynet can be used to limit incoming or outgoing bandwidth in several ways. Pairing it with well-thought-out ipfw rules can produce good results when your requirements are not extremely complex. Keep in mind, though, that dummynet limits bandwidth; it cannot be used to guarantee bandwidth or quality of service.
Additional Resources

* ipfw(8) man page
* dummynet(4) man page

- Alex
Alexandru Popa is a CCNA studying for a CCNP, and is actively involved in the FreeBSD community in his spare time. He also maintains cvsup.ro.freebsd.org out of a basement in a deserted building, using a large hamster array for power.