How NTP Works

Last update: 23-Oct-2011 16:21 UTC

Introduction and Overview

Note: This document contains a technical description of the Network Time Protocol (NTP) architecture and operation. It is intended for administrators, operators and monitoring personnel. Additional information for nontechnical readers can be found in the white paper Executive Summary: Computer Network Time Synchronization.

NTP time synchronization services are widely available in the public Internet. The public NTP subnet in mid 2011 includes several thousand servers in most countries and on every continent of the globe, including Antarctica, and sometimes in space and on the sea floor. These servers support a total population estimated at over 25 million computers in the global Internet.

The NTP subnet operates with a hierarchy of levels, where each level is assigned a number called the stratum. Stratum 1 (primary) servers at the lowest level are directly synchronized to national time services via satellite, radio or telephone modem. Stratum 2 (secondary) servers at the next higher level are synchronized to stratum 1 servers, and so on. Normally, NTP clients and servers with a relatively small number of clients do not synchronize to public primary servers; the several hundred public secondary servers operating at higher strata are the preferred choice.

This page presents an overview of the NTP implementation included in this software distribution. We refer to this implementation as the reference implementation only because it was used to test and validate the NTPv4 specification RFC-5905. It is best read in conjunction with the briefings on the Network Time Synchronization Research Project page.

Figure 1. NTP Daemon Processes and Algorithms

The overall organization of the NTP design is shown in Figure 1. It is useful in this context to consider the design as both a client of upstream servers and as a server for downstream clients. It includes a pair of peer/poll processes for each reference clock or remote server used as a synchronization source. Packets are exchanged between the client and server using the on-wire protocol described in the white paper Analysis and Simulation of the NTP On-Wire Protocols. The protocol is resistant to lost, replayed or spoofed packets.

The poll process sends NTP packets at intervals ranging from 8 s to 36 hr. The intervals are managed as described on the Poll Program page to maximize accuracy while minimizing network load. The peer process receives NTP packets and performs the packet sanity tests, the results of which are recorded in the flash status word. The flash status word also reports the results of various access control and security checks described in the white paper NTP Security Analysis.

Packets that fail one or more of these tests are summarily discarded. Otherwise, the peer process runs the on-wire protocol that uses four raw timestamps: the origin timestamp T1 upon departure of the client request, the receive timestamp T2 upon arrival at the server, the transmit timestamp T3 upon departure of the server reply, and the destination timestamp T4 upon arrival at the client. These timestamps, which are recorded by the rawstats option of the filegen command, are used to calculate the clock offset and roundtrip delay samples:

offset = [(T2 - T1) + (T3 - T4)] / 2,
delay = (T4 - T1) - (T3 - T2).
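
As a concrete illustration, the following fragment (a minimal sketch, not code from the distribution) computes these two statistics from the four raw timestamps, represented here as double-precision seconds:

    /* Sketch: compute clock offset and roundtrip delay from the four
     * on-wire timestamps, expressed as seconds in double precision.
     * t1: origin (client send), t2: receive (server arrival),
     * t3: transmit (server send), t4: destination (client arrival). */
    static void
    on_wire_stats(double t1, double t2, double t3, double t4,
                  double *offset, double *delay)
    {
        *offset = ((t2 - t1) + (t3 - t4)) / 2;
        *delay = (t4 - t1) - (t3 - t2);
    }

For example, with t1 = 10.000, t2 = 10.100, t3 = 10.105 and t4 = 10.010, the offset is 97.5 ms and the delay is 5 ms.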

The algorithm described on the Clock Filter Algorithm page selects the offset and delay samples most likely to produce accurate results. Servers that have passed the sanity tests are declared selectable. From the selectable population, the algorithm described on the Clock Select Algorithm page uses the statistics to determine a number of truechimers according to correctness principles. From the truechimer population, the algorithm described on the Clock Cluster Algorithm page determines a number of survivors on the basis of statistical clustering principles. The algorithms described on the Mitigation Rules and the prefer Keyword page combine the survivor offsets, designate one of the survivors as the system peer and produce the final offset used by the algorithm described on the Clock Discipline Algorithm page to adjust the system clock time and frequency. The clock offset and frequency are recorded by the loopstats option of the filegen command. For additional details about these algorithms, see the Architecture Briefing on the Network Time Synchronization Research Project page.
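
As one concrete example from this chain, the combining step can be sketched as a weighted average of the survivor offsets, with each survivor weighted by the reciprocal of its root distance, in the spirit of the RFC 5905 clock_combine() routine (a simplified sketch; the actual routine also computes a jitter term):

    #include <stddef.h>

    /* Sketch: combine survivor offsets into a single system offset,
     * weighting each survivor by the reciprocal of its root distance
     * (synchronization distance). Simplified from RFC 5905. */
    static double
    combine_offsets(const double *offset, const double *rootdist, size_t n)
    {
        double num = 0, den = 0;
        size_t i;

        for (i = 0; i < n; i++) {
            num += offset[i] / rootdist[i];
            den += 1.0 / rootdist[i];
        }
        return num / den;       /* the final combined offset */
    }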

NTP Timescale and Data Formats

NTP clients and servers synchronize to the Coordinated Universal Time (UTC) timescale used by national laboratories and disseminated by radio, satellite and telephone modem. This is a global timescale independent of geographic position. There are no provisions for local time zone or daylight saving time; however, these functions can be performed by the operating system on a per-user basis.

The UT1 timescale, upon which UTC is based, is determined by the rotation of the Earth about its axis, which is gradually slowing down. In order to rationalize UTC with respect to UT1, a leap second is inserted at intervals of about 18 months, as determined by the International Earth Rotation Service (IERS). The historic insertions are documented in the leap-seconds.list file, which can be downloaded from the NIST FTP server. This file is updated at intervals not exceeding six months. Leap second warnings are disseminated by the national laboratories in the broadcast timecode format. These warnings are propagated from the NTP primary servers via other servers to the clients by the NTP on-wire protocol. The leap second is implemented by the operating system kernel, as described in the white paper The NTP Timescale and Leap Seconds.

There are two NTP time formats, a 64-bit timestamp format and a 128-bit date format. The date format is used internally, while the timestamp format is used in packet headers exchanged between clients and servers. The timestamp format spans 136 years, called an era. The current era began on 1 January 1900, while the next one begins in 2036. Details on these formats and conversions between them are in the white paper The NTP Era and Era Numbering. However, the NTP protocol will synchronize correctly, regardless of era, as long as the system clock is set initially within 68 years of the correct time. Further discussion on this issue is in the white paper NTP Timestamp Calculations. Ordinarily, these formats are not seen by application programs, which convert these NTP formats to native Unix or Windows formats.
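
For illustration, the timestamp format consists of a 32-bit unsigned seconds field and a 32-bit fraction field, with seconds counted from the era epoch. A minimal sketch of a conversion from Unix time (seconds since 1970) follows; the constant 2208988800 is the number of seconds between 1 January 1900 and 1 January 1970:

    #include <stdint.h>
    #include <time.h>

    /* Seconds from the era 0 epoch (1 Jan 1900) to the Unix epoch
     * (1 Jan 1970): 70 years including 17 leap days. */
    #define JAN_1970 2208988800UL

    struct ntp_timestamp {
        uint32_t seconds;   /* seconds since the era epoch (mod 2^32) */
        uint32_t fraction;  /* fractional second, units of 2^-32 s */
    };

    /* Sketch: convert a Unix time_t to a 64-bit NTP timestamp;
     * correct for dates within the current era. */
    static struct ntp_timestamp
    unix_to_ntp(time_t t)
    {
        struct ntp_timestamp ts;

        ts.seconds = (uint32_t)(t + JAN_1970);
        ts.fraction = 0;    /* whole seconds only in this sketch */
        return ts;
    }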

Statistics Budget

Each NTP synchronization source is characterized by the offset and delay samples measured by the on-wire protocol using the equations above. The dispersion sample is initialized with the sum of the server precision and the client precision as each sample is received. The dispersion then increases at a rate of 15 μs/s (15 PPM). For this purpose, the precision is equal to the latency to read the system clock. The offset, delay and dispersion are called the sample statistics.
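
In equation form, if rho_r and rho are the server and client precisions, PHI is the 15 μs/s tolerance, t0 is the time the sample was received and t is the current time, the sample dispersion is

dispersion = rho_r + rho + PHI * (t - t0).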

In a window of eight (offset, delay, dispersion) samples, the clock filter algorithm selects the sample with minimum delay, which generally represents the most accurate offset statistic. The selected sample becomes the peer offset and peer delay statistics. The peer dispersion is a weighted average of the dispersion samples in the window. It is recalculated as each sample update is received from the server. Between updates, the dispersion continues to grow at the same rate as the sample dispersion, 15 μs/s. Finally, the peer jitter is determined as the root mean square (RMS) of the offset samples in the window relative to the selected offset sample. The peer offset, peer delay, peer dispersion and peer jitter statistics are recorded by the peerstats option of the filegen command. Peer variables are displayed by the rv command of the ntpq program.
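
A much simplified sketch of this computation follows; the routine in the distribution also sorts the samples by delay, weights the dispersion terms accordingly and accounts for sample age:

    #include <math.h>
    #include <stddef.h>

    #define NSTAGE 8                    /* clock filter window size */

    struct sample {
        double offset, delay, disp;     /* one (offset, delay, dispersion) */
    };

    /* Sketch: select the minimum-delay sample as the peer offset and
     * delay, form the peer dispersion as an exponentially weighted sum
     * and the peer jitter as the RMS of the offsets against the
     * selected one. */
    static void
    clock_filter(const struct sample s[NSTAGE], double *p_offset,
                 double *p_delay, double *p_disp, double *p_jitter)
    {
        size_t i, best = 0;
        double sum = 0;

        for (i = 1; i < NSTAGE; i++)
            if (s[i].delay < s[best].delay)
                best = i;
        *p_offset = s[best].offset;
        *p_delay = s[best].delay;

        *p_disp = 0;
        for (i = 0; i < NSTAGE; i++)
            *p_disp += s[i].disp / (1 << (i + 1));  /* weights 1/2, 1/4, ... */

        for (i = 0; i < NSTAGE; i++)
            sum += (s[i].offset - s[best].offset)
                 * (s[i].offset - s[best].offset);
        *p_jitter = sqrt(sum / NSTAGE);
    }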

The clock filter algorithm continues to process packets in this way until the source is no longer reachable. Reachability is determined by an eight-bit shift register, which is shifted left by one bit as each poll packet is sent, with 0 replacing the vacated rightmost bit. Each time an update is received, the rightmost bit is set to 1. The source is considered reachable if any bit is set to 1 in the register; otherwise, it is considered unreachable.
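
The register can be modeled with a single byte, as in this sketch:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch of the eight-bit reachability shift register. */
    static uint8_t reach;

    static void
    on_poll_sent(void)
    {
        reach <<= 1;            /* shift left; vacated bit becomes 0 */
    }

    static void
    on_update_received(void)
    {
        reach |= 1;             /* set the rightmost bit */
    }

    static bool
    is_reachable(void)
    {
        return reach != 0;      /* reachable if any of the 8 bits is set */
    }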

A server is considered selectable only if it is reachable, the dispersion is below the select threshold and a timing loop would not be created. The select threshold is by default 1.5 s, but can be changed by the maxdist option of the tos command. A timing loop occurs when the server is apparently synchronized to the client or when the server is synchronized to the same server as the client. When a source is unreachable, a dummy sample with "infinite" dispersion is inserted in the shift register at each poll, thus displacing old samples.
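
Combining these conditions, the accept test can be sketched as follows, with the timing-loop check reduced to a comparison of 32-bit reference identifiers (hypothetical names; the tests in the distribution are more involved):

    #include <stdbool.h>
    #include <stdint.h>

    #define MAXDIST 1.5     /* select threshold in seconds (tos maxdist) */

    /* Sketch of the selectable test; the refid arguments are the 32-bit
     * reference identifiers used for loop detection. */
    static bool
    selectable(uint8_t reach, double dispersion, uint32_t server_refid,
               uint32_t our_addr, uint32_t sys_refid)
    {
        if (reach == 0)
            return false;   /* source is unreachable */
        if (dispersion >= MAXDIST)
            return false;   /* beyond the select threshold */
        if (server_refid == our_addr || server_refid == sys_refid)
            return false;   /* would create a timing loop */
        return true;
    }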

The composition of the survivor population and the system peer selection is redetermined as each update from each source is received. The system variables are copied from the system peer variables of the same name, and the system stratum is set one greater than the system peer stratum. System variables are displayed by the rv command of the ntpq program.

The system dispersion increases at the same rate as the peer dispersion, even if all sources have become unreachable. To dependent clients, the server thus appears with ever-increasing dispersion. If the system dispersion apparent to dependent clients exceeds the select threshold, the server is considered nonselectable. It is important to understand that a server in this condition remains a reliable source of synchronization within its error bounds, as described in the next section.

Quality of Service

The mitigation algorithms deliver several important statistics, including system offset and system jitter. These statistics are determined by the mitigation algorithms from the survivor statistics produced by the clock cluster algorithm. System offset is best interpreted as the maximum likelihood estimate of the system clock offset, while system jitter is best interpreted as the expected error of this estimate. These statistics are reported by the loopstats option of the filegen command.

Of interest in this discussion is how the client determines the quality of service from a particular reference clock or remote server. This is determined from two statistics, expected error and maximum error. Expected error, or system jitter, is determined from various jitter components; it represents the nominal error in determining the mean clock offset.

Maximum error is determined from delay and dispersion contributions and represents the worst-case error due to all causes. In order to simplify discussion, certain minor contributions to the maximum error statistic are ignored. Elsewhere in the documentation the maximum error is called synchronization distance. If the precision time kernel support is available, both the estimated error and maximum error are reported to user programs via the ntp_gettime() kernel system call. See the Kernel Model for Precision Timekeeping page for further information.
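
Where this kernel support is present (on Linux systems the call is declared in <sys/timex.h>), the two statistics can be read as in this short sketch:

    #include <stdio.h>
    #include <sys/timex.h>

    /* Sketch: read the expected (estimated) and maximum error
     * maintained by the precision time kernel, in microseconds. */
    int
    main(void)
    {
        struct ntptimeval ntv;

        if (ntp_gettime(&ntv) < 0) {
            perror("ntp_gettime");
            return 1;
        }
        printf("estimated error %ld us, maximum error %ld us\n",
               ntv.esterror, ntv.maxerror);
        return 0;
    }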

The maximum error is computed as one-half the root delay to the primary source of time (i.e., the primary reference clock), plus the root dispersion. The root variables are included in the NTP packet header received from each server. When calculating maximum error, the root delay is the sum of the root delay in the packet and the peer delay, while the root dispersion is the sum of the root dispersion in the packet and the peer dispersion.
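
Ignoring the minor terms mentioned above, this works out to

maxerror = (rootdelay + delay) / 2 + rootdisp + dispersion,

where rootdelay and rootdisp are the root variables from the packet header, and delay and dispersion are the peer statistics.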

A source is considered selectable only if its maximum error is less than the select threshold, which defaults to 1.5 s but can be changed according to client preference using the maxdist option of the tos command. A common case is an upstream server that loses all its sources, so that its maximum error apparent to dependent clients begins to increase. The clients are not aware of this condition and continue to accept synchronization as long as the maximum error is less than the select threshold.

Although it might seem counterintuitive, a cardinal rule in the selection process is that, once a sample has been selected by the clock filter algorithm, older samples are no longer selectable. This applies also to the clock select algorithm. Once the peer variables for a source have been selected, older variables of the same or other sources are no longer selectable. The reason for these rules is to limit the time delay in the clock discipline algorithm. This is necessary to preserve the optimum impulse response and thus the risetime and overshoot.

This means that not every sample can be used to update the peer variables, and up to seven samples can be ignored between selected samples. This fact has been carefully considered in the discipline algorithm design with due consideration for feedback loop delay and minimum sampling rate. In engineering terms, even if only one sample in eight survives, the resulting sample rate is twice the Nyquist rate at any time constant and poll interval.

Clock Initialization and Management

If left running continuously, an NTP client on a fast LAN in a home or office environment can maintain synchronization nominally within one millisecond. When the ambient temperature variations are less than a degree Celsius, the clock oscillator frequency is disciplined to within one part per million (PPM), even when the clock oscillator native frequency offset is 100 PPM or more.

For laptops and portable devices with the power turned off, the error of the battery backup clock can increase by as much as one second per day. When power is restored after several hours or days, the clock offset and oscillator frequency errors must be resolved by the clock discipline algorithm, but without specific provisions this can take several hours.

When the client is restarted after a period when the power is off, the clock may have significant error. The provisions described in this section ensure that, in all but pathological situations, the startup transient is suppressed to within nominal levels in no more than five minutes after a warm start or ten minutes after a cold start. Following is a summary of these procedures; a detailed discussion is on the Clock State Machine page.

The reference implementation measures the clock oscillator frequency and updates a frequency file at intervals of one hour or more, depending on the measured frequency wander. This design is intended to minimize write cycles in NVRAM that might be used in a laptop or portable device. In a warm start, the frequency is initialized from this file, which avoids a possibly lengthy discipline time. In a cold start, when no frequency file is available, the reference implementation first measures the oscillator frequency over a five-minute interval. This generally results in a residual frequency error of less than 1 PPM. The measurement interval can be changed using the stepout option of the tinker command.

In order to further reduce the clock offset error at restart, the reference implementation next disables oscillator frequency discipline and enables clock offset discipline with a small time constant. This is designed to quickly reduce the clock offset error without causing a frequency surge. This configuration is continued for an interval of five minutes, after which the clock offset error is usually no more than a millisecond. The interval can be changed using the stepout option of the tinker command.

Another concern at restart is the time necessary for the selection and clustering algorithms to refine and validate the initial clock offset estimate. Normally, this takes several updates before setting the system clock. As the default minimum poll interval in most configurations is about one minute, it can take several minutes before the system clock is set. The iburst option of the server command changes the behavior at restart and is recommended for client/server configurations. When this option is enabled, the client sends a volley of six requests at intervals of two seconds, which ensures a reliable estimate is available in about ten seconds, before the clock is set. Once this initial volley is complete, the procedures described above are executed.
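
For example, a typical client configuration using this option might contain lines like the following (the server names are placeholders; substitute the servers appropriate for your site):

    # Four candidate servers, each with a fast initial volley at restart.
    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst
    server 2.pool.ntp.org iburst
    server 3.pool.ntp.org iburst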

As a result of the above considerations, when a backup source, such as the local clock driver, ACTS modem driver or orphan mode is included in the system configuration, it may happen that one or more of them are selectable before one or more of the regular sources are selectable. When backup sources are included in the configuration, the reference implementation waits an interval of several minutes without regular sources before switching to backup sources. This is generally enough to avoid startup transients due to premature switching to backup sources. The interval can be changed using the orphanwait option of the tos command.