Network Working Group                                     H. Schulzrinne
Request for Comments: 4733                                   Columbia U.
Obsoletes: 2833                                                T. Taylor
Category: Standards Track                                         Nortel
                                                           December 2006

   RTP Payload for DTMF Digits, Telephony Tones, and Telephony Signals
Status of This Memo
This document specifies an Internet standards track protocol for the
Internet community, and requests discussion and suggestions for
improvements. Please refer to the current edition of the "Internet
Official Protocol Standards" (STD 1) for the standardization state
and status of this protocol. Distribution of this memo is unlimited.
Copyright Notice

Copyright (C) The IETF Trust (2006).

Abstract
This memo describes how to carry dual-tone multifrequency (DTMF)
signalling, other tone signals, and telephony events in RTP packets.
It obsoletes RFC 2833.
This memo captures and expands upon the basic framework defined in
RFC 2833, but retains only the most basic event codes. It sets up an
IANA registry to which other event code assignments may be added.
Companion documents add event codes to this registry relating to
modem, fax, text telephony, and channel-associated signalling events.
The remainder of the event codes defined in RFC 2833 are
conditionally reserved in case other documents revive their use.
This document provides a number of clarifications to the original
document. However, it specifically differs from RFC 2833 by removing
the requirement that all compliant implementations support the DTMF
events. Instead, compliant implementations taking part in
out-of-band negotiations of media stream content indicate what events
they support. This memo adds three new procedures to the RFC 2833
framework: subdivision of long events into segments, reporting of
multiple events in a single packet, and the concept and reporting of
state events.
Table of Contents

1. Introduction
   1.1. Terminology
   1.2. Overview
   1.3. Potential Applications
   1.4. Events, States, Tone Patterns, and Voice-Encoded Tones
2. RTP Payload Format for Named Telephone Events
   2.1. Introduction
   2.2. Use of RTP Header Fields
        2.2.1. Timestamp
        2.2.2. Marker Bit
   2.3. Payload Format
        2.3.1. Event Field
        2.3.2. E ("End") Bit
        2.3.3. R Bit
        2.3.4. Volume Field
        2.3.5. Duration Field
   2.4. Optional Media Type Parameters
        2.4.1. Relationship to SDP
   2.5. Procedures
        2.5.1. Sending Procedures
        2.5.2. Receiving Procedures
   2.6. Congestion and Performance
        2.6.1. Performance Requirements
        2.6.2. Reliability Mechanisms
        2.6.3. Adjusting to Congestion
3. Specification of Event Codes for DTMF Events
   3.1. DTMF Applications
   3.2. DTMF Events
   3.3. Congestion Considerations
4. RTP Payload Format for Telephony Tones
   4.1. Introduction
   4.2. Examples of Common Telephone Tone Signals
   4.3. Use of RTP Header Fields
        4.3.1. Timestamp
        4.3.2. Marker Bit
        4.3.3. Payload Format
        4.3.4. Optional Media Type Parameters
   4.4. Procedures
        4.4.1. Sending Procedures
        4.4.2. Receiving Procedures
        4.4.3. Handling of Congestion
5. Examples
6. Security Considerations
7. IANA Considerations
   7.1. Media Type Registrations
        7.1.1. Registration of Media Type audio/telephone-event
        7.1.2. Registration of Media Type audio/tone
8. Acknowledgements
9. References
   9.1. Normative References
   9.2. Informative References
Appendix A. Summary of Changes from RFC 2833
1. Introduction

1.1. Terminology

In this document, the key words "MUST", "MUST NOT", "REQUIRED",
"SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY",
and "OPTIONAL" are to be interpreted as described in RFC 2119.
This document uses the following abbreviations:
ANSam Answer tone (amplitude modulated) 
DTMF Dual-Tone Multifrequency 
IVR Interactive Voice Response unit
PBX Private branch exchange (telephone system)
PSTN Public Switched (circuit) Telephone Network
RTP Real-time Transport Protocol 
SDP Session Description Protocol 
1.2. Overview

This memo defines two RTP payload formats, one for carrying
dual-tone multifrequency (DTMF) digits and other line and trunk
signals as events (Section 2), and a second one to describe general
multifrequency tones in terms only of their frequency and cadence
(Section 4). Separate RTP payload formats for telephony tone signals
are desirable since low-rate voice codecs cannot be guaranteed to
reproduce these tone signals accurately enough for automatic
recognition. In addition, tone properties such as the phase
reversals in the ANSam tone will not survive speech coding. Defining
separate payload formats also permits higher redundancy while
maintaining a low bit rate. Finally, some telephony events such as
"on-hook" occur out-of-band and cannot be transmitted as tones.
The remainder of this section provides the motivation for defining
the payload types described in this document. Section 2 defines the
payload format and associated procedures for use of named events.
Section 3 describes the events for which event codes are defined in
this document. Section 4 describes the payload format and associated
procedures for tone representations. Section 5 provides some
examples of encoded events, tones, and combined payloads. Section 6
deals with security considerations. Section 7 defines the IANA
requirements for registration of event codes for named telephone
events, establishes the initial content of that registry, and
provides the media type registrations for the two payload formats.
Appendix A describes the changes from RFC 2833 and in particular
indicates the disposition of the event codes defined in that document.
1.3. Potential Applications
The payload formats described here may be useful in a number of
different scenarios.
On the sending side, there are two basic possibilities: either the
sending side is an end system that originates the signals itself, or
it is a gateway with the task of propagating incoming telephone
signals into the Internet.
On the receiving side, there are more possibilities. The first is
that the receiver must propagate tone signalling accurately into the
PSTN for machine consumption. One example of this is a gateway
passing DTMF tones to an IVR. In this scenario, frequencies,
amplitudes, tone durations, and the durations of pauses between tones
are all significant, and individual tone signals must be delivered
reliably and in order.
In a second receiving scenario, the receiver must play out tones for
human consumption. Typically, rather than a series of tone signals
each with its own meaning, the content will consist of a single tone
played out continuously or a single sequence of tones and possibly
silence, repeated cyclically for some period of time. Often the end
of the tone playout will be triggered by an event fed back in the
other direction, using either in- or out-of-band means. Examples of
this are dial tone or busy tone.
The relationship between position in the network and the tones to be
played out is a complicating factor in this scenario. In the phone
network, tones are generated at different places, depending on the
switching technology and the nature of the tone. This determines,
for example, whether a person making a call to a foreign country
hears the local tones she is familiar with or the tones used in
the country called.
For analog lines, dial tone is always generated by the local switch.
Integrated Services Digital Network (ISDN) terminals may generate
dial tone locally and then send a Q.931 SETUP message containing
the dialed digits. If the terminal just sends a SETUP message
without any Called Party digits, then the switch does digit
collection (provided by the terminal as KEYPAD key press digit
information within Called Party or Keypad Facility Information
Elements (IEs) of INFORMATION messages), and provides dial tone over
the B-channel. The terminal can either use the audio signal on the
B-channel or use the Q.931 messages to trigger locally generated dial
tone.

Ringing tone (also called ringback tone) is generated by the local
switch at the callee, with a one-way voice path opened up as soon as
the callee's phone rings. (This reduces the chance of clipping the
called party's response just after answer. It also permits pre-
answer announcements or in-band call-progress indications to reach
the caller before or in lieu of a ringing tone.) Congestion tone and
special information tones can be generated by any of the switches
along the way, and may be generated by the caller's switch based on
ISDN User Part (ISUP) messages received. Busy tone is generated by
the caller's switch, triggered by the appropriate ISUP message, for
analog instruments, or the ISDN terminal.
In the third scenario, an end system is directly connected to the
Internet and processes the incoming media stream directly. There is
no need to regenerate tone signals, so that time alignment and power
levels are not relevant. These systems rely on sending systems to
generate events in place of tones and do not perform their own audio
waveform analysis. An example of such a system is an Internet
interactive voice response (IVR) system.
In circumstances where exact timing alignment between the audio
stream and the DTMF digits or other events is not important and data
is sent unicast, as in the IVR example, it may be preferable to use a
reliable control protocol rather than RTP packets. In those
circumstances, this payload format would not be used.
Note that in a number of these cases it is possible that the gateway
or end system will be both a sender and receiver of telephone
signals. Sometimes the same class of signals will be sent as
received -- in the case of "RTP trunking" or voice-band data, for
instance. In other cases, such as that of an end system serving
analogue lines, the signals sent will be in a different class from
those received.

1.4. Events, States, Tone Patterns, and Voice-Encoded Tones
This document provides the means for in-band transport over the
Internet of two broad classes of signalling information: in-band
tones or tone sequences, and signals sent out-of-band in the PSTN.
Tone signals can be carried using any of the three methods listed
below. Depending on the application, it may be desirable to carry
the signalling information in more than one form at once.
1. The gateway or end system can change to a higher-bandwidth codec
such as G.711 when tone signals are to be conveyed. See new
ITU-T Recommendation V.152 for a formal treatment of this
approach. Alternatively, for fax, text, or modem signals
respectively, a specialized transport such as T.38, RFC 4103, or
V.150.1 modem relay may be used. Finally, 64 kbit/s channels may be
carried transparently using the RFC 4040
Clearmode payload type. These methods are out of scope of
the present document, but may be used along with the payload
types defined here.
2. The sending gateway can simply measure the frequency components
of the voice-band signals and transmit this information to the
RTP receiver using the tone representation defined in this
document (Section 4). In this mode, the gateway makes no attempt
to discern the meaning of the tones, but simply distinguishes
tones from speech signals. An end system may use the same
approach using configured rather than measured frequencies.
All tone signals in use in the PSTN and meant for human
consumption are sequences of simple combinations of sine waves,
either added or modulated. (However, some modem signals such as
the ANSam tone or systems dependent on phase shift keying
cannot be conveyed so simply.)
3. As a third option, a sending gateway can recognize tones such as
ringing or busy tone or DTMF digit '0', and transmit a code that
identifies them using the telephone-event payload defined in this
document (Section 2). The receiver then produces a tone signal
or other indication appropriate to the signal. Generally, since
the recognition of signals at the sender often depends on their
on/off pattern or the sequence of several tones, this recognition
can take several seconds. On the other hand, the gateway may
have access to the actual signalling information that generates
the tones and thus can generate the RTP packet immediately,
without the detour through acoustic signals.
The third option (use of named events) is the only feasible method
for transmitting out-of-band PSTN signals as content within RTP
media streams.

2. RTP Payload Format for Named Telephone Events
2.1. Introduction

The RTP payload format for named telephone events is designated as
"telephone-event", the media type as "audio/telephone-event". In
accordance with current practice, this payload format does not have a
static payload type number, but uses an RTP payload type number
established dynamically and out-of-band. The default clock frequency
is 8000 Hz, but the clock frequency can be redefined when assigning
the dynamic payload type.
Named telephone events are carried as part of the audio stream and
MUST use the same sequence number and timestamp base as the regular
audio channel to simplify the generation of audio waveforms at a
gateway. The named telephone-event payload type can be considered to
be a very highly-compressed audio codec and is treated the same as
any other audio codec.

2.2. Use of RTP Header Fields

2.2.1. Timestamp
The event duration described in Section 2.5 begins at the time given
by the RTP timestamp. For events that span multiple RTP packets, the
RTP timestamp identifies the beginning of the event, i.e., several
RTP packets may carry the same timestamp. For long-lasting events
that have to be split into segments (see below, Section 2.5.1.3), the
timestamp indicates the beginning of the segment.
2.2.2. Marker Bit
The RTP marker bit indicates the beginning of a new event. For long-
lasting events that have to be split into segments (see below,
Section 220.127.116.11), only the first segment will have the marker bit
2.3. Payload Format
The payload format for named telephone events is shown in Figure 1.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     event     |E|R| volume    |          duration             |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 1: Payload Format for Named Events
2.3.1. Event Field
The event field is a number between 0 and 255 identifying a specific
telephony event. An IANA registry of event codes for this field has
been established (see IANA Considerations, Section 7). The initial
content of this registry consists of the events defined in Section 3.
2.3.2. E ("End") Bit
If set to a value of one, the "end" bit indicates that this packet
contains the end of the event. For long-lasting events that have to
be split into segments (see below, Section 2.5.1.3), only the final
packet for the final segment will have the E bit set.
2.3.3. R Bit
This field is reserved for future use. The sender MUST set it to
zero, and the receiver MUST ignore it.
2.3.4. Volume Field
For DTMF digits and other events representable as tones, this field
describes the power level of the tone, expressed in dBm0 after
dropping the sign. Power levels range from 0 to -63 dBm0. Thus,
larger values denote lower volume. This value is defined only for
events for which the documentation indicates that volume is
applicable. For other events, the sender MUST set volume to zero and
the receiver MUST ignore the value.
2.3.5. Duration Field
The duration field indicates the duration of the event or segment
being reported, in timestamp units, expressed as an unsigned integer
in network byte order. For a non-zero value, the event or segment
began at the instant identified by the RTP timestamp and has so far
lasted as long as indicated by this parameter. The event may or may
not have ended. If the event duration exceeds the maximum
representable by the duration field, the event is split into several
contiguous segments as described below (Section 2.5.1.3).
The special duration value of zero is reserved to indicate that the
event lasts "forever", i.e., is a state and is considered to be
effective until updated. A sender MUST NOT transmit a zero duration
for events other than those defined as states. The receiver SHOULD
ignore an event report with zero duration if the event is not a
state.

Events defined as states MAY contain a non-zero duration, indicating
that the sender intends to refresh the state before the time duration
has elapsed ("soft state").
For a sampling rate of 8000 Hz, the duration field is sufficient
to express event durations of up to approximately 8 seconds.
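By way of illustration only (this sketch is not part of the
specification, and the function names are invented for the example),
the four-octet payload of Figure 1 can be packed and parsed as
follows:

```python
import struct

def pack_event(event, end, volume, duration):
    """Pack a telephone-event payload (Figure 1): event (8 bits),
    E bit, R bit (always zero), volume (6 bits), duration (16 bits,
    network byte order)."""
    if not 0 <= event <= 255:
        raise ValueError("event must fit in 8 bits")
    if not 0 <= volume <= 63:
        raise ValueError("volume must fit in 6 bits")
    if not 0 <= duration <= 0xFFFF:
        raise ValueError("duration must fit in 16 bits")
    byte2 = (0x80 if end else 0) | volume  # E bit, R bit zero, volume
    return struct.pack("!BBH", event, byte2, duration)

def parse_event(payload):
    """Parse the first four octets of a telephone-event payload."""
    event, byte2, duration = struct.unpack("!BBH", payload[:4])
    return {"event": event,
            "end": bool(byte2 & 0x80),
            "volume": byte2 & 0x3F,  # R bit (0x40) is ignored per spec
            "duration": duration}
```

Note that the parser ignores the R bit, as Section 2.3.3 requires of
receivers.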
2.4. Optional Media Type Parameters
As indicated in the media type registration for named events in
Section 7.1.1, the telephone-event media type supports two optional
parameters: the "events" parameter and the "rate" parameter.
The "events" parameter lists the events supported by the
implementation. Events are listed as one or more comma-separated
elements. Each element can be either a single integer providing the
value of an event code or an integer followed by a hyphen and a
larger integer, presenting a range of consecutive event code values.
The list does not have to be sorted. No white space is allowed in
the argument. The union of all of the individual event codes and
event code ranges designates the complete set of event numbers
supported by the implementation.
The "rate" parameter describes the sampling rate, in Hertz, and hence
the units for the RTP timestamp and event duration fields. The
number is written as an integer. If omitted, the default value is
8000.
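The grammar above (comma-separated event codes and hyphenated ranges,
no white space) can be parsed with a short routine such as the
following sketch; the function name is illustrative:

```python
def parse_events_param(value):
    """Parse the "events" media type parameter, e.g. "0-15,66,70",
    into the set of supported event codes."""
    codes = set()
    for element in value.split(","):
        if "-" in element:
            # a range element: low-high, inclusive on both ends
            low, high = element.split("-")
            codes.update(range(int(low), int(high) + 1))
        else:
            codes.add(int(element))
    return codes
```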
2.4.1. Relationship to SDP
The recommended mapping of media type optional parameters to SDP is
given in Section 3 of RFC 3555. The "rate" media type parameter
for the named event payload type follows this convention: it is
expressed as usual as the <clock rate> component of the a=rtpmap:
attribute.
The "events" media type parameter deviates from the convention
suggested in RFC 3555 because it omits the string "events=" before
the list of supported events. Instead, the parameter is mapped
directly into SDP as follows:

a=fmtp:<format> <list of values>
The list of values has the format and meaning described above.
For example, if the payload format uses the payload type number 100,
and the implementation can handle the DTMF tones (events 0 through
15) and the dial and ringing tones (assuming as an example that these
were defined as events with codes 66 and 70, respectively), it would
include the following description in its SDP message:
m=audio 12346 RTP/AVP 100
a=rtpmap:100 telephone-event/8000
a=fmtp:100 0-15,66,70
The following sample media type definition corresponds to the SDP
example above:

audio/telephone-event;events="0-15,66,70";rate="8000"

2.5. Procedures

This section defines the procedures associated with the named event
payload type. Additional procedures may be specified in the
documentation associated with specific event codes.
2.5.1. Sending Procedures
126.96.36.199. Negotiation of Payloads
Events are usually sent in combination with or alternating with other
payload types. Payload negotiation may specify separate event and
other payload streams, or it may specify a combined stream that mixes
other payload types with events using RFC 2198 redundancy
headers. The purpose of using a combined stream may be for debugging
or to ease the transition between general audio and events.
Negotiation of payloads between sender and receiver is achieved by
out-of-band means, using SDP, for example.
The sender SHOULD indicate what events it supports, using the
optional "events" parameter associated with the telephone-event media
type. If the sender receives an "events" parameter from the
receiver, it MUST restrict the set of events it sends to those listed
in the received "events" parameter. For backward compatibility, if
no "events" parameter is received, the sender SHOULD assume support
for the DTMF events 0-15 but for no other events.
Events MAY be sent in combination with older events using RFC 2198
redundancy. Section 2.5.1.4 describes how this can be used to
avoid packet and RTP header overheads when retransmitting final event
reports. Section 2.6 discusses the use of additional levels of RFC
2198 redundancy to increase the probability that at least one copy of
the report of the end of an event reaches the receiver. The
following SDP shows an example of such usage, where G.711 audio
appears in a separate stream, and the primary component of the
redundant payload is events.
m=audio 12344 RTP/AVP 99
a=rtpmap:99 PCMU/8000
m=audio 12346 RTP/AVP 100 101
a=rtpmap:100 red/8000
a=fmtp:100 101/101
a=rtpmap:101 telephone-event/8000
a=fmtp:101 0-15,66,70
When used in accordance with the offer-answer model (RFC 3264),
the SDP a=ptime: attribute indicates the packetization period that
the author of the session description expects when receiving media.
This value does not have to be the same in both directions. The
appropriate period may vary with the application, since increased
packetization periods imply increased end-to-end response times in
instances where one end responds to events reported from the other.
Negotiation of telephone-events sessions using SDP MAY specify such
differences by separating events corresponding to different
applications into different streams. In the example below, events
0-15 are DTMF events, which have a fairly wide tolerance on timing.
Events 32-49 and 52-60 are events related to data transmission and
are subject to end-to-end response time considerations. As a result,
they are assigned a smaller packetization period than the DTMF
events.
m=audio 12344 RTP/AVP 99
m=audio 12346 RTP/AVP 100
For further discussion of packetization periods see Section 2.6.3.
184.108.40.206. Transmission of Event Packets
DTMF digits and other named telephone events are carried as part of
the audio stream, and they MUST use the same sequence number and
timestamp base as the regular audio channel to simplify the
generation of audio waveforms at a gateway.
An audio source SHOULD start transmitting event packets as soon as it
recognizes an event and continue to send updates until the event has
ended. The update packets MUST have the same RTP timestamp value as
the initial packet for the event, but the duration MUST be increased
to reflect the total cumulative duration since the beginning of the
event.
The first packet for an event MUST have the M bit set. The final
packet for an event MUST have the E bit set, but setting of the "E"
bit MAY be deferred until the final packet is retransmitted (see
Section 220.127.116.11). Intermediate packets for an event MUST NOT have
either the M bit or the E bit set.
Sending of a packet with the E bit set is OPTIONAL if the packet
reports two events that are defined as mutually exclusive states, or
if the final packet for one state is immediately followed by a packet
reporting a mutually exclusive state. (For events defined as states,
the appearance of a mutually exclusive state implies the end of the
previous state.)
A source has wide latitude as to how often it sends event updates. A
natural interval is the spacing between non-event audio packets.
(Recall that a single RTP packet can contain multiple audio frames
for frame-based codecs and that the packet interval can vary during a
session.) Alternatively, a source MAY decide to use a different
spacing for event updates, with a value of 50 ms RECOMMENDED.
Timing information is contained in the RTP timestamp, allowing
precise recovery of inter-event times. Thus, the sender does not in
theory need to maintain precise or consistent time intervals between
event packets. However, the sender SHOULD minimize the need for
buffering at the receiving end by sending event reports at constant
intervals.
DTMF digits and other tone events are sent incrementally to avoid
having the receiver wait for the completion of the event. In some
cases (for example, data session startup protocols), waiting until
the end of a tone before reporting it will cause the session to
fail. In other cases, it will simply cause undesirable delays in
playout at the receiving end.
For robustness, the sender SHOULD retransmit "state" events
periodically.
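The sending procedure above, together with the triple transmission of
the final report described later in this section, can be sketched as
follows for one short event (a sketch under stated assumptions, not
part of the specification; the 50-ms update interval is the
RECOMMENDED value, expressed as 400 timestamp units at 8000 Hz):

```python
def event_packets(timestamp, event, total_duration, update=400):
    """Generate the packet fields a sender would emit for one event
    shorter than 0xFFFF timestamp units: the first packet has the
    marker (M) bit set, updates share the initial RTP timestamp with
    a growing duration, and the final report (E bit set) is sent
    three times in total."""
    packets = []
    elapsed = 0
    while elapsed < total_duration:
        elapsed = min(elapsed + update, total_duration)
        packets.append({"event": event,
                        "M": not packets,  # marker only on first packet
                        "E": elapsed == total_duration,
                        "timestamp": timestamp,
                        "duration": elapsed})
    # retransmit the final report twice more, without the marker bit
    packets += [dict(packets[-1], M=False) for _ in range(2)]
    return packets
```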
18.104.22.168. Long-Duration Events
If an event persists beyond the maximum duration expressible in the
duration field (0xFFFF), the sender MUST send a packet reporting this
maximum duration but MUST NOT set the E bit in this packet. The
sender MUST then begin reporting a new "segment" with the RTP
timestamp set to the time at which the previous segment ended and the
duration set to the cumulative duration of the new segment. The M
bit of the first packet reporting the new segment MUST NOT be set.
The sender MUST repeat this procedure as required until the end of
the complete event has been reached. The final packet for the
complete event MUST have the E bit set (either on initial
transmission or on retransmission as described below).
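The segmentation rule above can be sketched as follows (an
illustrative outline, not a normative implementation): each new
segment's RTP timestamp is the instant the previous segment ended,
and only the final segment carries the E bit.

```python
MAX_DURATION = 0xFFFF  # largest value the 16-bit duration field holds

def segments(start_ts, total_duration, max_dur=MAX_DURATION):
    """Split a long event into contiguous segments as described
    above, returning the final (timestamp, duration, E) per segment."""
    out = []
    remaining = total_duration
    ts = start_ts
    while remaining > 0:
        dur = min(remaining, max_dur)
        remaining -= dur
        out.append({"timestamp": ts, "duration": dur,
                    "E": remaining == 0})
        ts = (ts + dur) & 0xFFFFFFFF  # RTP timestamps wrap at 2**32
    return out
```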
22.214.171.124.1. Exceptional Procedure for Combined Payloads
If events are combined as a redundant payload with another payload
type using RFC 2198 redundancy, the above procedure SHALL be
applied, but using a maximum duration that ensures that the timestamp
offset of the oldest generation of events in an RFC 2198 packet never
exceeds 0x3FFF. If the sender is using a constant packetization
period, the maximum segment duration can be calculated from the
following formula:

maximum duration = 0x3FFF - (R-1)*(packetization period in
timestamp units)

where R is the highest redundant layer number consisting of event
reports.
The RFC 2198 redundancy header timestamp offset value is only 14
bits, compared with the 16 bits in the event payload duration
field. Since with other payloads the RTP timestamp typically
increments for each new sample, the timestamp offset value becomes
limiting on reported event duration. The limit becomes more
constraining when older generations of events are also included in
the combined payload.
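Under a constant packetization period, the bound described above
amounts to the following one-line computation (illustrative only):

```python
def max_segment_duration(r_layers, ptime_units):
    """Largest segment duration, in timestamp units, that keeps the
    oldest RFC 2198 timestamp offset within its 14-bit field (0x3FFF).
    r_layers is the highest redundant layer number carrying events;
    ptime_units is the packetization period in timestamp units."""
    return 0x3FFF - (r_layers - 1) * ptime_units
```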
126.96.36.199. Retransmission of Final Packet
The final packet for each event and for each segment SHOULD be sent a
total of three times at the interval used by the source for updates.
This ensures that the duration of the event or segment can be
recognized correctly even if an instance of the last packet is lost.
A sender MAY use RFC 2198 with up to two levels of redundancy to
combine retransmissions with reports of new events, thus saving on
header overheads. In this usage, the primary payload is new event
reports, while the first and (if necessary) second levels of
redundancy report first and second retransmissions of final event
reports. Within a session negotiated to allow such usage, packets
containing the RFC 2198 payload SHOULD NOT be sent except when both
primary and retransmitted reports are to be included. All other
packets of the session SHOULD contain only the simple, non-redundant
telephone-event payload. Note that the expected proportion of simple
versus redundant packets affects the order in which they should be
specified on an SDP m= line.
There is little point in sending initial or interim event reports
redundantly because each succeeding packet describes the event
fully (except for typically irrelevant variations in volume).
A sender MAY delay setting the E bit until retransmitting the last
packet for a tone, rather than setting the bit on its first
transmission. This avoids having to wait to detect whether the tone
has indeed ended. Once the sender has set the E bit for a packet, it
MUST continue to set the E bit for any further retransmissions of
that packet.

2.5.1.5. Packing Multiple Events into One Packet
Multiple named events can be packed into a single RTP packet if and
only if the events are consecutive and contiguous, i.e., occur
without overlap and without pause between them, and if the last event
packed into a packet occurs quickly enough to avoid excessive delays
at the receiver.
This approach is similar to having multiple frames of frame-based
audio in one RTP packet.
The constraint that packed events not overlap implies that events
designated as states can be followed in a packet only by other state
events that are mutually exclusive to them. The constraint itself is
needed so that the beginning time of each event can be calculated at
the receiver.
In a packet containing events packed in this way, the RTP timestamp
MUST identify the beginning of the first event or segment in the
packet. The M bit MUST be set if the packet records the beginning of
at least one event. (This will be true except when the packet
carries the end of one segment and the beginning of the next segment
of the same long-lasting event.) The E bit and duration for each
event in the packet MUST be set using the same rules as if that event
were the only event contained in the packet.
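Because packed events are consecutive and contiguous, a receiver can
recover the start time of each one by accumulating durations from the
packet's RTP timestamp, as in this illustrative sketch:

```python
def unpack_start_times(rtp_timestamp, events):
    """In a packet carrying several packed events, the RTP timestamp
    marks the start of the first; each later event starts where the
    previous one ended.  Returns the start time of each event."""
    starts = []
    ts = rtp_timestamp
    for ev in events:
        starts.append(ts)
        ts += ev["duration"]  # contiguity: next event starts here
    return starts
```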
184.108.40.206. RTP Sequence Number
The RTP sequence number MUST be incremented by one in each successive
RTP packet sent. Incrementing applies to retransmitted as well as
initial instances of event reports, to permit the receiver to detect
lost packets for RTP Control Protocol (RTCP) receiver reports.
2.5.2. Receiving Procedures
220.127.116.11. Indication of Receiver Capabilities Using SDP
Receivers can indicate which named events they can handle, for
example, by using the Session Description Protocol (RFC 4566).
SDP descriptions using the event payload MUST contain an fmtp format
attribute that lists the event values that the receiver can process.
18.104.22.168. Playout of Tone Events
In the gateway scenario, an Internet telephony gateway connecting a
packet voice network to the PSTN re-creates the DTMF or other tones
and injects them into the PSTN. Since, for example, DTMF digit
recognition takes several tens of milliseconds, the first few
milliseconds of a digit will arrive as regular audio packets. Thus,
careful time and power (volume) alignment between the audio samples
and the events is needed to avoid generating spurious digits at the
receiver. The receiver may also choose to delay playout of the tones
by some small interval after playout of the preceding audio has
ended, to ensure that downstream equipment can discriminate the tones
from the preceding audio.
Some implementations send events and encoded audio packets (e.g.,
PCMU or the codec used for speech signals) for the same time instant
for the duration of the event. It is RECOMMENDED that gateways
render only the telephone-event payload once it is received, since
the audio may contain spurious tones introduced by the audio
compression algorithm. However, it is anticipated that these extra
tones in general should not interfere with recognition at the far
end.
Receiver implementations MAY use different algorithms to create
tones, including the two described here. (Note that not all
implementations have the need to re-create a tone; some may only care
about recognizing the events.) With either algorithm, a receiver may
impose a playout delay to provide robustness against packet loss or
delay. The tradeoff between playout delay and other factors is
discussed further in Section 2.6.3.
In the first algorithm, the receiver simply places a tone of the
given duration in the audio playout buffer at the location indicated
by the timestamp. As additional packets are received that extend the
same tone, the waveform in the playout buffer is extended
accordingly. (Care has to be taken if audio is mixed, i.e., summed,
in the playout buffer rather than simply copied.) Thus, if a packet
in a tone lasting longer than the packet interarrival time gets lost
and the playout delay is short, a gap in the tone may occur.
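The first algorithm can be sketched as follows. This is an invented
illustration, not taken from this memo: the playout buffer is modelled
as a mapping from absolute RTP timestamp units (8000 Hz clock assumed)
to the tone being rendered, so that update packets, which repeat the
event timestamp with a larger duration, simply extend the waveform
already placed in the buffer.

```python
# Illustrative sketch of the first playout algorithm (names invented).

def place_tone(buffer, event_timestamp, duration, tone):
    # An update packet for the same tone repeats the event timestamp
    # with a larger duration, so rewriting the range extends the
    # waveform already in the buffer.
    for t in range(event_timestamp, event_timestamp + duration):
        buffer[t] = tone

buf = {}
place_tone(buf, 8000, 400, "DTMF 5")   # initial report: 50 ms at 8000 Hz
place_tone(buf, 8000, 800, "DTMF 5")   # update extends the tone to 100 ms
```

If a later update packet is lost and the playout delay is short, the
buffer simply runs out of tone samples, producing the gap described
above.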
Alternatively, the receiver can start a tone and play it until one of
the following occurs:
o it receives a packet with the E bit set;
o it receives the next tone, distinguished by a different timestamp
value (noting that new segments of long-duration events also
appear with a new timestamp value);
o it receives an alternative non-event media stream (assuming none
was being received while the event stream was active); or
o a given time period elapses.
This is more robust against packet loss, but may extend the tone
beyond its original duration if all retransmissions of the last
packet in an event are lost. Limiting the time for which the tone is
extended is necessary to prevent a tone from "getting stuck". This
algorithm is not a license for senders to set the duration field to
zero; it MUST be set to the current duration as described, since this
is needed to create accurate events if the first event packet is
lost, among other reasons.
Regardless of the algorithm used, the tone SHOULD NOT be extended by
more than three packet interarrival times. A slight extension of
tone durations and shortening of pauses is generally harmless.
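The second algorithm, together with the three-interarrival-time cap
above, can be sketched as a small state machine. The class and method
names are invented for illustration; times are in milliseconds.

```python
# Hedged sketch of the second playout algorithm: play a tone until an
# E bit arrives, a new timestamp starts a new tone (or segment), or a
# timeout of three packet interarrival times expires.

class TonePlayout:
    def __init__(self, interarrival_ms):
        self.max_extension = 3 * interarrival_ms  # cap from the text above
        self.timestamp = None       # RTP timestamp of the current tone
        self.last_packet_ms = None

    def on_event_packet(self, now_ms, rtp_timestamp, e_bit):
        if rtp_timestamp != self.timestamp:
            self.timestamp = rtp_timestamp   # new tone or new segment
        self.last_packet_ms = now_ms
        if e_bit:
            self.timestamp = None            # explicit end of the tone

    def is_playing(self, now_ms):
        return (self.timestamp is not None and
                now_ms - self.last_packet_ms <= self.max_extension)
```

With a 50-ms packetization interval, for example, a tone whose final
packets are all lost stops at most 150 ms after the last packet that
was received.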
A receiver SHOULD NOT restart a tone once playout has stopped. It
MAY do so if the tone is of a type meant for human consumption or is
one for which interruptions will not cause confusion at the receiving
end.
If a receiver receives an event packet for an event that it is not
currently playing out and the packet does not have the M bit set,
earlier packets for that event have evidently been lost. This can be
confirmed by gaps in the RTP sequence numbers. The receiver MAY
determine on the basis of retained history and the timestamp and
event code of the current packet that it corresponds to an event
already played out and lapsed. In that case, further reports for the
event MUST be ignored, as indicated in the previous paragraph.
If, on the other hand, the event has not been played out at all, the
receiver MAY attempt to play the event out to the complete duration
indicated in the event report. The appropriate behavior will depend
on the event type, and requires consideration of the relationship of
the event to audio media flows and whether correct event duration is
essential to the correct operation of the media session.
A receiver SHOULD NOT rely on a particular event packet spacing, but
instead MUST use the event timestamps and durations to determine
timing and duration of playout.
The receiver MUST calculate jitter for RTCP receiver reports based on
all packets with a given timestamp. Note: The jitter value should
primarily be used as a means for comparing the reception quality
between two users or two time periods, not as an absolute measure.
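A sketch of the jitter calculation (per the RTP specification, RFC
3550) as it applies to event packets: successive packets of one event
share an RTP timestamp, so each arrival still contributes a
transit-time sample. The helper name is invented.

```python
# RFC 3550 interarrival jitter: J := J + (|D| - J)/16, where D is the
# change in transit time (arrival time minus RTP timestamp, both in
# timestamp units) between consecutive packets.

def update_jitter(jitter, transit, prev_transit):
    d = abs(transit - prev_transit)
    return jitter + (d - jitter) / 16.0

j = 0.0
transits = [400, 400, 416, 400]   # arrival - timestamp for four packets
for prev, cur in zip(transits, transits[1:]):
    j = update_jitter(j, cur, prev)
```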
If a zero volume is indicated for an event for which the volume field
is defined, then the receiver MAY reconstruct the volume from the
volume of non-event audio or MAY use the nominal value specified by
the ITU Recommendation or other document defining the tone. This
ensures backwards compatibility with RFC 2833, where the volume
field was defined only for DTMF events.
2.5.2.3. Long-Duration Events
If an event report is received with duration equal to the maximum
duration expressible in the duration field (0xFFFF) and the E bit for
the report is not set, the event report may mark the end of a segment
generated according to the procedures of Section 2.5.1.4. If another
report for the same event type is received, the receiver MUST compare
the RTP timestamp for the new event with the sum of the RTP timestamp
of the previous report plus the duration (0xFFFF). The receiver uses
the absence of a gap between the events to detect that it is
receiving a single long-duration event.
The total duration of a long-duration event is (obviously) the sum of
the durations of the segments used to report it. This is equal to
the duration of the final segment (as indicated in the final packet
for that segment), plus 0xFFFF multiplied by the number of segments
preceding the final segment.
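The segment test and the duration arithmetic above can be written out
directly (helper names invented for illustration):

```python
MAX_DURATION = 0xFFFF   # largest value of the 16-bit duration field

def continues_previous(prev_timestamp, new_timestamp):
    # No timestamp gap between segments means the new report continues
    # the same long-duration event.
    return new_timestamp == prev_timestamp + MAX_DURATION

def total_duration(num_segments, final_segment_duration):
    # Final segment's duration plus 0xFFFF per preceding segment.
    return final_segment_duration + MAX_DURATION * (num_segments - 1)
```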
2.5.2.3.1. Exceptional Procedure for Combined Payloads
If events are combined as a redundant payload with another payload
type using RFC 2198 redundancy, segments are generated at
intervals of 0x3FFF or less, rather than 0xFFFF, as required by the
procedures of Section 2.5.1.4.1 in this case. If a receiver is using
the events component of the payload, event duration may be only an
approximate indicator of division into segments, but the lack of an E
bit and the adjacency of two reports with the same event code are
strong indicators in themselves.
2.5.2.4. Multiple Events in a Packet
The procedures of Section 2.5.1.5 require that if multiple events are
reported in the same packet, they are contiguous and non-overlapping.
As a result, it is not strictly necessary for the receiver to know
the start times of the events following the first one in order to
play them out -- it needs only to respect the duration reported for
each event. Nevertheless, if knowledge of the start time for a given
event after the first one is required, it is equal to the sum of the
start time of the preceding event plus the duration of the preceding
event.
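The start-time relation above can be expressed as a small helper
(names invented for illustration):

```python
def event_start_times(first_start, durations):
    # Events in one packet are contiguous and non-overlapping, so each
    # event begins where the previous one ended.
    starts = [first_start]
    for d in durations[:-1]:
        starts.append(starts[-1] + d)
    return starts
```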
2.5.2.5. Soft States
If the duration of a soft state event expires, the receiver SHOULD
consider the value of the state to be "unknown" unless otherwise
indicated in the event documentation.
2.6. Congestion and Performance
Packet transmission through the Internet is marked by occasional
periods of congestion lasting on the order of a second, during which
network delay, jitter, and packet loss are all much higher than they
are between these periods. This phenomenon has been characterized in
published measurement studies. Well-behaved applications are
expected, preferably, to
reduce their demands on the network during such periods of
congestion. At the least, they should not increase their demands.
This section explores both application performance and the
possibilities for good behavior in the face of congestion.
2.6.1. Performance Requirements
Typically, an implementation of the telephone-event payload will aim
to limit the rate at which each of the following impairments occurs:
a. an event encoded at the sender fails to be played out at the
receiver, either because the event report is lost or because it
arrives after playout of later content has started;
b. the start of playout of an event at the receiver is delayed
relative to other events or other media operating on the same
session;
c. the duration of playout of a given event differs from the correct
duration as detected at the sender by more than a given amount;
d. gaps occur in playout of a given event;
e. end-to-end delay for the media stream exceeds a given value.
The relative importance of these constraints varies between
applications.
2.6.2. Reliability Mechanisms
To improve reliability, all payload types including telephone-events
can use a jitter buffer, i.e., impose a playout delay, at the
receiving end. This mechanism addresses the first four requirements
listed above, but at the expense of the last one.
The named event procedures provide two complementary redundancy
mechanisms to deal with lost packets:
a. Intra-event updates:
Events that last longer than one packetization period (e.g., 50
ms) are updated periodically, so that the receiver can
reconstruct the event and its duration if it receives any of the
update packets, albeit with delay.
During an event, the RTP event payload format provides
incremental updates on the event. The error resiliency afforded
by this mechanism depends on whether the first or second
algorithm in Section 2.5.2.2 is used and on the playout delay at
the receiver. For example, if the receiver uses the first
algorithm and only places the current duration of tone signal in
the playout buffer, for a playout delay of 120 ms and a
packetization interval of 50 ms, two packets in a row can get
lost without causing a premature end of the tone generated.
b. Repeat last event packet:
As described in Section 2.5.1.2, the last report for an event is
transmitted a total of three times. This mechanism adds
robustness to the reporting of the end of an event.
It may be necessary to extend the level of redundancy to achieve
requirement a) (in Section 2.6.1) in a specific network
environment. Taking a 25-30% loss rate during congestion
periods as typical, and setting an objective
that at least 99% of end-of-event reports will eventually get
through to the receiver under these conditions, simple
probability calculations indicate that each event completion has
to be reported four times. This is one more level of redundancy
than required by the basic "Repeat last event packet" algorithm.
Of course, the objective is probably unrealistically stringent;
it was chosen to make a point.
Where Section 2.5.1.2 indicates that it is appropriate to use the
RFC 2198 audio redundancy mechanism to carry retransmissions
of final event reports, this mechanism MAY also be used to extend
the number of final report retransmissions. This is done by
using more than two levels of redundancy when necessary. The use
of RFC 2198 helps to mitigate the extra bandwidth demands that
would be imposed simply by retransmitting final event packets
more than three times.
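The probability calculation behind the four-transmission figure can be
checked directly. Assuming independent losses at the top of the
quoted 25-30% range, the report fails only if every copy is lost:

```python
# Smallest number of transmissions for which at least 99% of
# end-of-event reports get through under a 30% independent loss rate.
loss = 0.30
n = 1
while loss ** n > 0.01:   # probability that every copy is lost
    n += 1
```

The loop stops at n = 4 (0.3**4 = 0.0081), one transmission more than
the basic three-times repetition of the last event packet.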
These two redundancy mechanisms clearly address requirement a) in the
previous section. They also help meet requirement c), to the extent
that the redundant packets arrive before playout of the events they
report is due to expire. They are not helpful in meeting the other
requirements, although they do not directly cause impairments
themselves in the way that a large jitter buffer increases end-to-end
delay.
The playout algorithm is an additional mechanism for meeting the
performance requirements. In particular, using the second algorithm
in Section 2.5.2.2 will meet requirement d) of the previous section
by preventing gaps in playout, but at the potential cost of increases
in duration (requirement c)).
Finally, there is an interaction between the packetization period
used by a sender, the playout delay used by the receiver, and the
vulnerability of an event flow to packet losses. Assuming packet
losses are independent, a shorter packetization interval means that
the receiver can use a smaller playout delay to recover from a given
number of consecutive packet losses, at any stage of event playout.
This improves end-to-end delays in applications where that matters.
In view of the tradeoffs between the different reliability
mechanisms, documentation of specific events SHOULD include a
discussion of the appropriate design decisions for the applications
of those events. This mandate is repeated in the section on IANA
considerations.
2.6.3. Adjusting to Congestion
So far, the discussion has been about meeting performance
requirements. However, there is also the question of whether
applications of events can adapt to congestion to the point that they
reduce their demands on the networks during congestion. In theory
this can be done for events by increasing the packetization interval,
so that fewer packets are sent per second. This has to be
accompanied by an increased playout delay at the receiving end.
Coordination between the two ends for this purpose is an interesting
issue in itself. If it is done, however, such an action implies a
one-time gap or extended playout of an event when the packetization
interval is first extended, as well as increased end-to-end delay
during the whole period of increased playout delay.
The benefit from such a measure varies primarily depending on the
average duration of the events being handled. In the worst case, as
a first example shows, the reduction in aggregate bandwidth usage due
to an increased packetization interval may be quite modest. Suppose
the average event duration is 3.33 ms (V.21 bits, for instance).
Suppose further that four transmissions in total are required for a
given event report to meet the loss objective. Table 1 shows the
impact of varying packetization intervals on the aggregate bit rate
of the media stream.
   +---------------+-----------+-------------+---------------+
   | Packetization | Packets/s | IP Packet   | Total IP Bit  |
   | Interval (ms) |           | Size (bits) | Rate (bits/s) |
   +---------------+-----------+-------------+---------------+
   |      50       |    20     |    2440     |     48800     |
   |     33.3      |    30     |    1800     |     54000     |
   |      25       |    40     |    1480     |     59200     |
   |      20       |    50     |    1288     |     64400     |
   +---------------+-----------+-------------+---------------+
    Table 1: Data Rate at the IP Level versus Packetization Interval
             (three retransmissions, 3.33 ms per event)
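The last column of Table 1 is simply the product of the packet rate
and the IP packet size, which can be verified directly:

```python
# (interval ms, packets/s, IP packet size in bits) rows of Table 1.
table1 = [(50, 20, 2440), (33.3, 30, 1800), (25, 40, 1480), (20, 50, 1288)]
rates = [pps * size for _, pps, size in table1]
```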
As can be seen, a doubling of the interval (from 25 to 50 ms) drops
aggregate bit rate by about 20% while increasing end-to-end delay by
25 ms and causing a one-time gap of the same amount. (Extending the
playout of a specific V.21 tone event is out of the question, so the
first algorithm of Section 2.5.2.2 must be used in this application.)
The reduction in number of packets per second with longer
packetization periods is countered by the increase in packet size due
to the increase in number of events per packet.
For events of longer duration, the reduction in bandwidth is more
proportional to the increase in packetization interval. The loss of
final event reports may also be less critical, so that lower
redundancy levels are acceptable. Table 2 shows similar data to
Table 1, but assuming 70-ms events separated by 50 ms of silence (as
in an idealized DTMF-based text messaging session) with only the
basic two retransmissions for event completions.
   +---------------+-----------+-------------+---------------+
   | Packetization | Packets/s | IP Packet   | Total IP Bit  |
   | Interval (ms) |           | Size (bits) | Rate (bits/s) |
   +---------------+-----------+-------------+---------------+
   |      50       |    20     |   448/520   |     10040     |
   |     33.3      |    30     |   448/520   |     14280     |
   |      25       |    40     |   448/520   |     18520     |
   |      20       |    50     |    448      |     22400     |
   +---------------+-----------+-------------+---------------+
    Table 2: Data Rate at the IP Level versus Packetization Interval
             (two retransmissions, 70 ms per event, 50 ms between events)
In the third column of the table, the packet size is 448 bits when
only one event is being reported and 520 bits when the previous event
is also included. No more than one level of redundancy is needed up
to a packetization interval of 50 ms, although at that point most
packets are reporting two events. Longer intervals require a second
level of redundancy in at least some packets.