Last update: 11-Sep-2010 16:52 UTC
This page describes the various rate management provisions in NTPv4. Some national time metrology laboratories, including NIST and USNO, use the NTP reference implementation in their very busy public time servers. They operate multiple servers behind load-balancing devices to support aggregate rates up to ten thousand packets per second. The servers need to defend themselves against all manner of broken client implementations that can clog the server and network infrastructure. On the other hand, friendly clients need to avoid configurations that can result in unfriendly rates.
There are several features in the reference implementation designed to defend the servers and network against accidental or intentional flood attacks. On the other hand, these features are also used to ensure the client is a good citizen, even if configured in unfriendly ways. The ground rules and the mechanisms that enforce them are described below.
Rate management involves four algorithms to manage resources: (1) poll rate control, (2) burst control, (3) average headway time and (4) guard time. These are described in the following sections.
The guard time is the minimum time between successive packets. By default this is 2 s, but it can be changed using the minimum option of the discard command. By design, this is also the interval between packets sent using the burst and iburst options.
A review of past client abuse incidents shows the most frequent scenario is a broken client that attempts to send a number of packets at rates of one per second or more. On one occasion, due to a defective client design, over 750,000 clients fell into this mode. There have been occasions where this abuse has persisted for days at a time. These scenarios are the most damaging, as they can threaten not only the victim server but the network infrastructure as well.
In the ntpd design, if the headway between the last packet received and the current packet is less than the guard time, the packet is discarded.
There are two features in ntpd to manage the interval between one packet and the next. These features make use of a set of counters: a client output counter for each association and a server input counter for each distinct client address. Each counter is incremented by a value called the headway when a packet is processed and is decremented by one each second. The default minimum average headway in ntpd is 8 s; this can be changed using the average option of the discard command, which is specified as a power of 2 in seconds and cannot be set below 3 (8 s).
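The counter mechanics can be pictured as a leaky bucket that fills by the headway on each packet and drains by one each second. The following is a minimal sketch with invented names, not the reference implementation's code; it shows that a client sending faster than one packet per 8 s makes the counter grow without bound, while a slower client lets it drain back toward zero. The ceiling checks described below turn that observation into a discard decision.

    #include <stdio.h>

    #define MINAVG  8               /* default minimum average headway, s */

    static long counter;            /* leaky-bucket counter, in seconds */

    /* Called once per second: the bucket drains at one unit per second. */
    static void second_tick(void)
    {
        if (counter > 0)
            counter--;
    }

    /* Called each time a packet is processed: the bucket fills by the headway. */
    static void packet_processed(long headway)
    {
        counter += headway;
    }

    int main(void)
    {
        /* One packet per second, much faster than one per 8 s:
         * the counter climbs by headway - 1 = 7 every second. */
        for (int i = 0; i < 5; i++) {
            packet_processed(MINAVG);
            second_tick();
            printf("after %d s: counter = %ld\n", i + 1, counter);
        }
        return 0;
    }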
If the iburst or burst options are present, the poll algorithm sends a burst of packets instead of a single packet at each poll opportunity. The NTPv4 specification requires that bursts contain no more than eight packets; so, starting from an output counter value of zero, the maximum counter value, or output ceiling, can be no more than eight times the minimum poll interval set by the minpoll option of the server command. However, if the burst starts with a counter value other than zero, there is a potential to exceed the ceiling. The poll algorithm avoids this by computing an additional headway time so that the next packet sent will not exceed the ceiling. Additional headway time can result from Autokey protocol operations. Designs such as this are often called leaky buckets. The current headway is displayed as the headway peer variable by the ntpq rv command.
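The adjustment itself is simple. The following is a minimal sketch with invented names, assuming the counter drains by one each second as above: if sending the next packet would push the output counter above the ceiling, the poll routine waits just long enough for the counter to drain back under it.

    /* Extra headway, in seconds, needed so that sending the next packet
     * does not push the output counter above the ceiling.  A sketch with
     * invented names, not the reference implementation's code. */
    long extra_headway(long counter, long headway, long ceiling)
    {
        if (counter + headway > ceiling)
            return counter + headway - ceiling;
        return 0;
    }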
The ntpd input packet routine uses a special list of entries, one for each distinct client address found. Each entry includes an IP address, input counter and interval since the last packet arrival. The entries are ordered by interval from the smallest to the largest. As each packet arrives, the IP source address is compared to the IP address in each entry in turn. If a match is found, the entry is removed and inserted first on the list. If the IP source address does not match any entry, a new entry is created and inserted first, possibly discarding the last entry if the list is full. Observers will note this is the same algorithm used for page replacement in virtual memory systems.
In the virtual memory algorithm the entry of interest is the last, whereas here the entry of interest is the first. The input counter is decreased by the interval since it was last referenced, but not below zero. If the value of the counter plus the headway is greater than the input ceiling set by the average option, the packet is discarded. Otherwise, the counter is increased by the headway and the packet is processed. The result is, if the client maintains a maximum average headway not less than the input ceiling and transmits no more than eight packets in a burst, the input counter will not exceed the ceiling.
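Putting the pieces together, server input processing might look roughly like the following sketch. The names, the fixed-size array standing in for the real list, and the value of the ceiling are invented for illustration; the guard time and minimum average headway are the defaults described above.

    #include <string.h>
    #include <time.h>

    #define GUARD    2              /* default guard time, s */
    #define MINAVG   8              /* default minimum average headway, s */
    #define CEILING  (8 * MINAVG)   /* illustrative input ceiling */
    #define NENTRIES 64             /* illustrative list size */

    struct entry {
        char    addr[16];           /* client address, e.g. "192.0.2.1" */
        long    counter;            /* leaky-bucket input counter, s */
        time_t  last;               /* arrival time of the previous packet */
    };

    static struct entry list[NENTRIES];
    static int nentries;

    /* Return nonzero if a packet from addr arriving at time now should be
     * processed, zero if it should be discarded. */
    static int input_ok(const char *addr, time_t now)
    {
        struct entry e;
        int i, found = 0;

        /* Look for an existing entry for this address. */
        for (i = 0; i < nentries; i++)
            if (strcmp(list[i].addr, addr) == 0) {
                e = list[i];
                found = 1;
                break;
            }
        if (!found) {
            /* New address: create a fresh entry, evicting the last
             * (least recently used) entry if the list is full. */
            memset(&e, 0, sizeof(e));
            strncpy(e.addr, addr, sizeof(e.addr) - 1);
            if (nentries < NENTRIES)
                nentries++;
            i = nentries - 1;
        }

        /* Move the entry to the front of the list (most recently used). */
        memmove(&list[1], &list[0], (size_t)i * sizeof(list[0]));
        list[0] = e;

        if (!found) {
            /* First packet seen from this address: accept it. */
            list[0].last = now;
            list[0].counter = MINAVG;
            return 1;
        }

        long interval = (long)(now - e.last);
        list[0].last = now;

        /* Drain the counter by the interval since the last packet. */
        list[0].counter -= interval;
        if (list[0].counter < 0)
            list[0].counter = 0;

        /* Guard time: discard packets spaced closer than GUARD seconds. */
        if (interval < GUARD)
            return 0;

        /* Average headway: discard if the bucket would overflow the ceiling. */
        if (list[0].counter + MINAVG > CEILING)
            return 0;

        list[0].counter += MINAVG;
        return 1;
    }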
Ordinarily, packets denied service are simply dropped with no further action except incrementing statistics counters. Sometimes a more proactive response is needed to cause the client to slow down. A special packet format has been created for this purpose, called the kiss-o'-death (KoD) packet. KoD packets have leap indicator 3, stratum 0 and the reference identifier set to a four-byte ASCII code. At present only one code, RATE, is used; it is sent by the server if the limited and kod flags are set in the restrict command and the rate limit is exceeded.
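For example, a server that wants these defenses active for all clients might combine the restrict flags with the discard command. This is an illustrative ntp.conf fragment using the default values described above:

    # Respond to rate-limit violations with a RATE KoD instead of silence.
    restrict default limited kod
    # Guard time 2 s; minimum average headway 2^3 = 8 s (the defaults).
    discard minimum 2 average 3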
A client receiving a KoD packet is expected to slow down; however, no explicit mechanism is specified in the protocol to do this. In the current reference implementation, the server sets the packet poll field to the greater of (a) the minimum average headway and (b) the client packet poll. The client sets the peer poll field to the greater of (a) the minimum average headway and (b) the server packet poll. For KoD packets only, the minimum peer poll is clamped to be not less than the peer poll, and the headway is temporarily increased.
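The client-side reaction can be sketched as follows. The names are invented for illustration and the temporary headway increase is not shown; the idea is simply that the poll field of the KoD packet acts as a floor on the client's poll interval.

    /* On receiving a RATE KoD, clamp the client's poll variables so it
     * never polls faster than the server's advertised headway.  A sketch
     * with invented names, not the reference implementation's code. */
    static void kod_rate(unsigned char *peer_ppoll,
        unsigned char *peer_minpoll, unsigned char pkt_ppoll)
    {
        if (*peer_ppoll < pkt_ppoll)
            *peer_ppoll = pkt_ppoll;        /* raise the peer poll field */
        if (*peer_minpoll < *peer_ppoll)
            *peer_minpoll = *peer_ppoll;    /* clamp the minimum poll */
    }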
In order to make sure the client notices the KoD, the receive and transmit timestamps are set to the transmit timestamp of the client packet, and all other fields are left as in the client packet. Thus, even if the client ignores the KoD, it cannot do any useful time computations. KoD packets are themselves rate limited in the same way as arriving packets in order to deflect a flood attack.
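A server might fill in such a reply roughly as follows. The structure and field names are a simplified, illustrative packet image, not the reference implementation's data structures, and the poll clamp uses the default minimum average headway of 2^3 = 8 s.

    #include <string.h>
    #include <stdint.h>

    struct pkt {                    /* simplified NTP packet image */
        uint8_t  li_vn_mode;        /* leap, version, mode */
        uint8_t  stratum;
        int8_t   ppoll;
        int8_t   precision;
        uint32_t refid;
        uint64_t reftime, org, rec, xmt;
    };

    /* Build a RATE kiss-o'-death reply to the client packet *req. */
    static void make_kod(const struct pkt *req, struct pkt *reply)
    {
        *reply = *req;                      /* other fields as in the request */
        reply->li_vn_mode = (3 << 6)        /* leap indicator 3 */
            | (req->li_vn_mode & 0x3f);
        reply->stratum = 0;                 /* stratum 0 marks a kiss code */
        memcpy(&reply->refid, "RATE", 4);   /* the RATE kiss code */
        if (reply->ppoll < 3)               /* poll not less than the minimum */
            reply->ppoll = 3;               /* average headway (2^3 = 8 s) */
        reply->rec = req->xmt;              /* receive timestamp */
        reply->xmt = req->xmt;              /* transmit timestamp */
    }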