
RFC 5104

Codec Control Messages in the RTP Audio-Visual Profile with Feedback (AVPF)



3.5. Feedback Messages

This section describes the semantics of the different feedback messages and how they apply to the different use cases.

3.5.1. Full Intra Request Command

   A Full Intra Request (FIR) Command, when received by the designated
   media sender, requires that the media sender sends a Decoder Refresh
   Point (see section 2.2) at the earliest opportunity.  The evaluation
   of such an opportunity includes the current encoder coding strategy
   and the current available network resources.

   FIR is also known as an "instantaneous decoder refresh request",
   "fast video update request" or "video fast update request".

   Using a decoder refresh point implies refraining from using any
   picture sent prior to that point as a reference for the encoding
   process of any subsequent picture sent in the stream.  For
   predictive media types that are not video, the analogue applies.
   For example, if in MPEG-4 systems scene updates are used, the
   decoder refresh point consists of the full representation of the
   scene and is not delta-coded relative to previous updates.
   Decoder refresh points, especially Intra or IDR pictures, are in
   general several times larger in size than predicted pictures.  Thus,
   in scenarios in which the available bit rate is small, the use of a
   decoder refresh point implies a delay that is significantly longer
   than the typical picture duration.

   Usage in multicast is possible; however, aggregation of the commands
   is recommended.  A receiver that receives a request closely after
   sending a decoder refresh point -- within 2 times the longest round
   trip time (RTT) known, plus any AVPF-induced RTCP packet sending
   delays -- should await a second request message to ensure that the
   media receiver has not been served by the previously delivered
   decoder refresh point.  The reason for the specified delay is to
   avoid sending unnecessary decoder refresh points.  A session
   participant may have sent its own request while another participant's
   request was in-flight to them.  Suppressing those requests that may
   have been sent without knowledge about the other request avoids this
   issue.

   Using the FIR command to recover from errors is explicitly
   disallowed, and instead the PLI message defined in AVPF [RFC4585]
   should be used.  The PLI message reports lost pictures and has been
   included in AVPF for precisely that purpose.

   Full Intra Request is applicable in use cases 1 and 2.

3.5.1.1. Reliability
   The FIR message results in the delivery of a decoder refresh point,
   unless the message is lost.  Decoder refresh points are easily
   identifiable from the bit stream.  Therefore, there is no need for
   protocol-level notification, and a simple command repetition
   mechanism is sufficient for ensuring the level of reliability
   required.  However, the potential use of repetition does require a
   mechanism to prevent the recipient from responding to messages
   already received and responded to.

   To ensure the best possible reliability, a sender of FIR may repeat
   the FIR until the desired content has been received.  The repetition
   interval is determined by the RTCP timing rules applicable to the
   session.  Upon reception of a complete decoder refresh point or the
   detection of an attempt to send a decoder refresh point (which got
   damaged due to a packet loss), the repetition of the FIR must stop.
   If another FIR is necessary, the request sequence number must be
   increased.  A FIR sender shall not have more than one FIR (different
   request sequence number) outstanding at any time per media sender in
   the session.
   The receiver of FIR (i.e., the media sender) behaves in complementary
   fashion to ensure delivery of a decoder refresh point.  If it
   receives repetitions of the FIR more than 2*RTT after it has sent a
   decoder refresh point, it shall send a new decoder refresh point.
   Two round trip times allow time for the decoder refresh point to
   arrive back to the requestor and for the end of repetitions of FIR to
   reach and be detected by the media sender.
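
   The following is a minimal sketch (not part of the specification) of
   the media-sender-side handling just described, in Python.  The
   request_refresh_point() callback is a hypothetical hook into the
   encoder; the multicast aggregation advice of section 3.5.1 is not
   modeled here.

      import time

      class FirResponder:
          # Repetition rule of section 3.5.1.1: a repeated FIR (same
          # request sequence number) triggers a new decoder refresh
          # point only if more than 2*RTT has passed since the last one
          # was sent in response to that requestor.
          def __init__(self, rtt_seconds, request_refresh_point):
              self.rtt = rtt_seconds
              self.request_refresh_point = request_refresh_point
              self.last_seq = {}      # requestor SSRC -> last seq number
              self.last_refresh = {}  # requestor SSRC -> time of refresh

          def on_fir(self, requestor_ssrc, seq_nr):
              now = time.monotonic()
              repeated = self.last_seq.get(requestor_ssrc) == seq_nr
              last = self.last_refresh.get(requestor_ssrc, float("-inf"))
              if repeated and now - last <= 2 * self.rtt:
                  return  # already served by a recent refresh point
              self.last_seq[requestor_ssrc] = seq_nr
              self.request_refresh_point()  # ask encoder for an Intra/IDR
              self.last_refresh[requestor_ssrc] = now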

   An RTP mixer or RTP switching MCU that receives a FIR from a media
   receiver is responsible for ensuring that a decoder refresh point is
   delivered to the requesting receiver.  It may be necessary for the
   mixer/MCU to generate FIR commands.  From a reliability perspective,
   the two legs (FIR-requesting endpoint to mixer/MCU, and mixer/MCU to
   decoder refresh point generating endpoint) are handled independently
   from each other.

3.5.2. Temporal-Spatial Trade-off Request and Notification

   The Temporal-Spatial Trade-off Request (TSTR) instructs the video
   encoder to change its trade-off between temporal and spatial
   resolution.  Index values from 0 to 31 indicate, monotonically, an
   increasing preference for frame rate.  That is, a requester asking
   for an index of 0 prefers high spatial quality and is willing to
   accept a low frame rate, whereas a requester asking for 31 prefers a
   high frame rate, potentially at the cost of low spatial quality.  In
   general, the encoder reaction time may be significantly longer than
   the typical picture duration.  See use case 3 for an example.

   The encoder decides whether and to what extent the request results
   in a change of the trade-off.  It returns a Temporal-Spatial
   Trade-off Notification (TSTN) message to indicate the trade-off that
   it will use henceforth.

   TSTR and TSTN have been introduced primarily because it is believed
   that control protocol mechanisms, e.g., a SIP re-INVITE, are too
   heavyweight and too slow to allow for a reasonable user experience.
   Consider, for example, a user interface where the remote user
   selects the temporal/spatial trade-off with a slider.  Immediate
   feedback to any slider movement is required for a reasonable user
   experience.  A SIP re-INVITE [RFC3261] would require at least two
   more round trips (compared to the TSTR/TSTN mechanism) and may
   involve proxies and other complex mechanisms.  Even in a
   well-designed system, it could take a second or so until the new
   trade-off is finally selected.  Furthermore, the use of RTCP solves
   the multicast use case very efficiently.

   The use of TSTR and TSTN in multipoint scenarios is a non-trivial
   subject, and can be achieved in many implementation-specific ways.
   Problems stem from the fact that TSTRs will typically arrive
   unsynchronized, and may request different trade-off values for the
   same stream and/or endpoint encoder.  This memo does not specify a
   translator's, mixer's, or endpoint's reaction to the reception of a
   suggested trade-off as conveyed in the TSTR.  We only require the
   receiver of a TSTR message to reply to it by sending a TSTN, carrying
   the new trade-off chosen by its own criteria (which may or may not be
   based on the trade-off conveyed by the TSTR).  In other words, the
   trade-off sent in a TSTR is a non-binding recommendation, nothing
   more.

   Three TSTR/TSTN scenarios need to be distinguished, based on the
   topologies described in [RFC5117].  The scenarios are described in
   the following subsections.

3.5.2.1. Point-to-Point
In this most trivial case (Topo-Point-to-Point), the media sender typically adjusts its temporal/spatial trade-off based on the requested value in TSTR, subject to its own capabilities. The TSTN message conveys back the new trade-off value (which may be identical to the old one if, for example, the sender is not capable of adjusting its trade-off).
3.5.2.2. Point-to-Multipoint Using Multicast or Translators
RTCP Multicast is used either with media multicast according to Topo-Multicast, or following RFC 3550's translator model according to Topo-Translator. In these cases, unsynchronized TSTR messages from different receivers may be received, possibly with different requested trade-offs (because of different user preferences). This memo does not specify how the media sender tunes its trade-off. Possible strategies include selecting the mean or median of all trade-off requests received, giving priority to certain participants, or continuing to use the previously selected trade-off (e.g., when the sender is not capable of adjusting it). Again, all TSTR messages need to be acknowledged by TSTN, and the value conveyed back has to reflect the decision made.
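
   A minimal sketch (ours, not mandated by this memo) of one such
   strategy, clamping the combined value to the valid index range 0 to
   31:

      from statistics import median

      def combine_tstr_requests(indices, strategy="median"):
          # Combine unsynchronized TSTR trade-off indices (0 = favour
          # spatial quality, 31 = favour frame rate) into one value.
          # Mean and median are the two example strategies mentioned in
          # section 3.5.2.2; the choice is implementation specific.
          if not indices:
              return None
          if strategy == "mean":
              value = sum(indices) / len(indices)
          else:
              value = median(indices)
          return max(0, min(31, round(value)))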
3.5.2.3. Point-to-Multipoint Using RTP Mixer
In this scenario (Topo-Mixer), the RTP mixer receives all TSTR messages, and has the opportunity to act on them based on its own criteria. In most cases, the mixer should form a "consensus" of potentially conflicting TSTR messages arriving from different participants, and initiate its own TSTR message(s) to the media sender(s). As in the previous scenario, the strategy for forming
   this "consensus" is up to the implementation, and can, for example,
   encompass averaging the participants' request values, giving priority
   to certain participants, or using session default values.

   Even if a mixer or translator performs transcoding, it is very
   difficult to deliver media with the requested trade-off, unless the
   content the mixer or translator receives is already close to that
   trade-off.  Thus, if the mixer changes its trade-off, it needs to
   request the media sender(s) to use the new value, by creating a TSTR
   of its own.  Upon reaching a decision on the used trade-off, it
   includes that value in the acknowledgement to the downstream
   requestors.  Only in cases where the original source has
   substantially higher quality (and bit rate) is it likely that
   transcoding alone can result in the requested trade-off.

3.5.2.4. Reliability
   A request and reception acknowledgement mechanism is specified.  The
   Temporal-Spatial Trade-off Notification (TSTN) message informs the
   requester that its request has been received, and what trade-off is
   used henceforth.  This acknowledgement mechanism is desirable for at
   least the following reasons:

   o  A change in the trade-off cannot be directly identified from the
      media bit stream.

   o  User feedback cannot be implemented without knowing the chosen
      trade-off value, according to the media sender's constraints.

   o  Repetitive sending of messages requesting an unimplementable
      trade-off can be avoided.

3.5.3. H.271 Video Back Channel Message

   ITU-T Rec. H.271 defines syntax, semantics, and suggested encoder
   reaction to a Video Back Channel Message.  The structure defined in
   this memo is used to transparently convey such a message from media
   receiver to media sender.  In this memo, we refrain from an in-depth
   discussion of the available code points within H.271 and refer to
   the specification text [H.271] instead.

   However, we note that some H.271 messages bear similarities with
   native messages of AVPF and this memo.  Furthermore, we note that
   some H.271 messages are known to require caution in multicast
   environments -- or are plainly not usable in multicast or multipoint
   scenarios.  Table 1 provides a brief, simplified overview of the
   messages currently defined in H.271, their roughly corresponding
   AVPF or Codec Control Messages (CCMs) (the latter as specified in
   this memo), and an indication of our current knowledge of their
   multicast safety.
   H.271 msg type      AVPF/CCM msg type    multicast-safe
   --------------------------------------------------------------------
   0 (when used for
     reference picture
      selection)        AVPF RPSI       No (positive ACK of pictures)
   1 picture loss       AVPF PLI        Yes
   2 partial loss       AVPF SLI        Yes
   3 one parameter CRC  N/A             Yes (no required sender action)
   4 all parameter CRC  N/A             Yes (no required sender action)
   5 refresh point      CCM FIR         Yes

   Table 1: H.271 messages and their AVPF/CCM equivalents

          Note: H.271 message type 0 is not a strict equivalent to
          AVPF's Reference Picture Selection Indication (RPSI); it is an
          indication of known-as-correct reference picture(s) at the
          decoder.  It does not command an encoder to use a defined
          reference picture (the form of control information envisioned
          to be carried in RPSI).  However, it is believed and intended
          that H.271 message type 0 will be used for the same purpose as
          AVPF's RPSI -- although other use forms are also possible.
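
   For implementations that prefer the IETF-native messages (see
   guideline 2 below), Table 1 can be restated as a simple lookup.  The
   following Python mapping is merely a convenience restatement of the
   table; None marks the CRC messages that have no AVPF/CCM
   counterpart.

      # H.271 message type -> (closest AVPF/CCM message, multicast-safe)
      H271_TO_AVPF_CCM = {
          0: ("AVPF RPSI", False),  # when used for ref. picture selection
          1: ("AVPF PLI", True),    # picture loss
          2: ("AVPF SLI", True),    # partial loss
          3: (None, True),          # one parameter CRC (no sender action)
          4: (None, True),          # all parameter CRC (no sender action)
          5: ("CCM FIR", True),     # refresh point
      }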

   In response to the opaqueness of the H.271 messages, especially with
   respect to the multicast safety, the following guidelines MUST be
   followed when an implementation wishes to employ the H.271 video back
   channel message:

   1. Implementations utilizing the H.271 feedback message MUST stay in
      compliance with congestion control principles, as outlined in
      section 5.

   2. An implementation SHOULD utilize the IETF-native messages as
      defined in [RFC4585] and in this memo instead of similar messages
      defined in [H.271].  Our current understanding of similar messages
      is documented in Table 1 above.  One good reason to divert from
      the SHOULD statement above would be if it is clearly understood
      that, for a given application and video compression standard, the
      aforementioned "similarity" is not given, in contrast to what the
      table indicates.

   3. It has been observed that some of the H.271 code points currently
      in existence are not multicast-safe.  Therefore, the sensible
      thing to do is not to use the H.271 feedback message type in
      multicast environments.  It MAY be used only when all the issues
      mentioned later are fully understood by the implementer, and
      properly taken into account by all endpoints.  In all other cases,
      the H.271 message type MUST NOT be used in conjunction with
      multicast.
   4. It has been observed that even in centralized multipoint
      environments, where the mixer should theoretically be able to
      resolve issues as documented below, the implementation of such a
      mixer and cooperative endpoints is a very difficult and tedious
      task.  Therefore, H.271 messages MUST NOT be used in centralized
      multipoint scenarios, unless all the issues mentioned below are
      fully understood by the implementer, and properly taken into
      account by both mixer and endpoints.

   Issues to be taken into account when considering the use of H.271 in
   multipoint environments:

   1. Different state on different receivers.  In many environments, it
      cannot be guaranteed that the decoder state of all media receivers
      is identical at any given point in time.  The most obvious reason
      for such a possible misalignment of state is a loss that occurs on
      the path to only one of many media receivers.  However, there are
      other not so obvious reasons, such as recent joins to the
      multipoint conference (be it by joining the multicast group or
      through additional mixer output).  Different states can lead the
      media receivers to issue potentially contradicting H.271 messages
      (or one media receiver issuing an H.271 message that, when
      observed by the media sender, is not helpful for the other media
      receivers).  A naive reaction of the media sender to these
      contradicting messages can lead to unpredictable and annoying
      results.

   2. Combining messages from different media receivers in a media
      sender is a non-trivial task.  As reasons, we note that these
      messages may be contradicting each other, and that their transport
      is unreliable (there may well be other reasons).  In case of many
      H.271 messages (i.e., types 0, 2, 3, and 4), the algorithm for
      combining must be aware both of the network/protocol environment
      (i.e., with respect to congestion) and of the media codec
      employed, as H.271 messages of a given type can have different
      semantics for different media codecs.

   3. The suppression of requests may need to go beyond the basic
      mechanisms described in AVPF (which are driven exclusively by
      timing and transport considerations on the protocol level).  For
      example, a receiver is often required to refrain from (or delay)
      generating requests, based on information it receives from the
      media stream.  For instance, it makes no sense for a receiver to
      issue a FIR when a transmission of an Intra/IDR picture is
      ongoing.
   4. When using the non-multicast-safe messages (e.g., H.271 type 0
      positive ACK of received pictures/slices) in larger multicast
      groups, the media receiver will likely be forced to delay or even
      omit sending these messages.  For the media sender, this looks
      like data has not been properly received (although it was received
      properly), and a naively implemented media sender reacts to these
      perceived problems where it should not.

3.5.3.1. Reliability
H.271 Video Back Channel Messages do not require reliable transmission, and confirmation of the reception of a message can be derived from the forward video bit stream. Therefore, no specific reception acknowledgement is specified. With respect to re-sending rules, section 3.5.1.1 applies.

3.5.4. Temporary Maximum Media Stream Bit Rate Request and Notification

   A receiver, translator, or mixer uses the Temporary Maximum Media
   Stream Bit Rate Request (TMMBR, "timber") to request a sender to
   limit the maximum bit rate for a media stream (see section 2.2) to,
   or below, the provided value.  The Temporary Maximum Media Stream
   Bit Rate Notification (TMMBN) contains the media sender's current
   view of the most limiting subset of the TMMBR-defined limits it has
   received, to help the participants to suppress TMMBRs that would not
   further restrict the media sender.  The primary usage for the
   TMMBR/TMMBN messages is in a scenario with an MCU or mixer (use case
   6), corresponding to Topo-Translator or Topo-Mixer, but they also
   apply to Topo-Point-to-Point.

   Each temporary limitation on the media stream is expressed as a
   tuple.  The first component of the tuple is the maximum total media
   bit rate (as defined in section 2.2) that the media receiver is
   currently prepared to accept for this media stream.  The second
   component is the per-packet overhead that the media receiver has
   observed for this media stream at its chosen reference protocol
   layer.

   As indicated in section 2.2, the overhead as observed by the sender
   of the TMMBR (i.e., the media receiver) may differ from the overhead
   observed at the receiver of the TMMBR (i.e., the media sender) due
   to use of a different reference protocol layer at the other end or
   due to the intervention of translators or mixers that affect the
   amount of per-packet overhead.  For example, a gateway in between
   the two that converts between IPv4 and IPv6 affects the per-packet
   overhead by 20 bytes.  Other mechanisms that change the overhead
   include tunnels.  The problem with varying overhead is also
   discussed in
   [RFC3890].  As will be seen in the description of the algorithm for
   use of TMMBR, the difference in perceived overhead between the
   sending and receiving ends presents no difficulty because
   calculations are carried out in terms of variables that have the same
   value at the sender as at the receiver -- for example, packet rate
   and net media rate.

   Reporting both maximum total media bit rate and per-packet overhead
   allows different receivers to provide bit rate and overhead values
   for different protocol layers, for example, at the IP level, at the
   outer part of a tunnel protocol, or at the link layer.  The protocol
   level a peer reports on depends on the level of integration the peer
   has, as it needs to be able to extract the information from that
   protocol level.  For example, an application with no knowledge of the
   IP version it is running over cannot meaningfully determine the
   overhead of the IP header, and hence will not want to include IP
   overhead in the overhead or maximum total media bit rate calculation.
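
   As a concrete illustration (ours, not the memo's), a receiver that
   integrates down to the IP layer and carries plain RTP over UDP could
   compute the per-packet overhead it reports as follows, assuming no
   RTP header extensions, CSRC entries, IP options, or tunneling:

      def ip_layer_overhead_bytes(ip_version=4, udp_header=8,
                                  rtp_header=12):
          # Fixed header sizes: IPv4 = 20 bytes, IPv6 = 40 bytes,
          # UDP = 8 bytes, RTP fixed header = 12 bytes.
          ip_header = 20 if ip_version == 4 else 40
          return ip_header + udp_header + rtp_header

      assert ip_layer_overhead_bytes(4) == 40
      assert ip_layer_overhead_bytes(6) == 60
      # The 20-byte difference is the IPv4/IPv6 gateway case mentioned
      # above.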

   It is expected that most peers will be able to report values at least
   for the IP layer.  In certain implementations, it may be advantageous
   to also include information pertaining to the link layer, which in
   turn allows for a more precise overhead calculation and a better
   optimization of connectivity resources.

   The Temporary Maximum Media Stream Bit Rate messages are generic
   messages that can be applied to any RTP packet stream.  This
   separates them from the other codec control messages defined in this
   specification, which apply only to specific media types or payload
   formats.  The TMMBR functionality applies to the transport, and the
   requirements the transport places on the media encoding.

   The reasoning below assumes that the participants have negotiated a
   session maximum bit rate, using a signaling protocol.  This value can
   be global, for example, in case of point-to-point, multicast, or
   translators.  It may also be local between the participant and the
   peer or mixer.  In either case, the bit rate negotiated in signaling
   is the one that the participant guarantees to be able to handle
   (depacketize and decode).  In practice, the connectivity of the
   participant also influences the negotiated value -- it does not make
   much sense to negotiate a total media bit rate that one's network
   interface does not support.

   It is also beneficial to have negotiated a maximum packet rate for
   the session or sender.  RFC 3890 provides an SDP [RFC4566] attribute
   that can be used for this purpose; however, that attribute is not
   usable in RTP sessions established using offer/answer [RFC3264].
   Therefore, an optional maximum packet rate signaling parameter is
   specified in this memo.
   An already established maximum total media bit rate may be changed at
   any time, subject to the timing rules governing the sending of
   feedback messages.  The limit may change to any value between zero
   and the session maximum, as negotiated during session establishment
   signaling.  However, even if a sender has received a TMMBR message
   allowing an increase in the bit rate, all increases must be governed
   by a congestion control mechanism.  TMMBR indicates known limitations
   only, usually in the local environment, and does not provide any
   guarantees about the full path.  Furthermore, any increases in
   TMMBR-established bit rate limits are to be executed only after a
   certain delay from the sending of the TMMBN message that notifies the
   world about the increase in limit.  The delay is specified as at
   least twice the longest RTT as known by the media sender, plus the
   media sender's calculation of the required wait time for the sending
   of another TMMBR message for this session based on AVPF timing rules.
   This delay is introduced to allow other session participants to make
   known their bit rate limit requirements, which may be lower.
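
   As a trivial numeric restatement (ours) of this timing rule,
   assuming the AVPF-governed wait time has been computed elsewhere:

      def earliest_rate_increase(tmmbn_sent_at, longest_rtt, avpf_wait):
          # A bit rate increase permitted by a TMMBR/TMMBN exchange may
          # be acted on no earlier than 2 * the longest known RTT plus
          # the sender's estimate of the AVPF wait before another TMMBR
          # could be sent for this session.
          return tmmbn_sent_at + 2 * longest_rtt + avpf_wait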

   If it is likely that the new value indicated by TMMBR will be valid
   for the remainder of the session, the TMMBR sender is expected to
   perform a renegotiation of the session upper limit using the session
   signaling protocol.

3.5.4.1. Behavior for Media Receivers Using TMMBR
   This section is an informal description of behavior described more
   precisely in section 4.2.

   A media sender begins the session limited by the maximum media bit
   rate and maximum packet rate negotiated in session signaling, if
   any.  Note that this value may be negotiated for another protocol
   layer than the one the participant uses in its TMMBR messages.

   Each media receiver selects a reference protocol layer, forms an
   estimate of the overhead it is observing (or estimates it, if no
   packets have been seen yet) at that reference level, and determines
   the maximum total media bit rate it can accept, taking into account
   its own limitations and any transport path limitations of which it
   may be aware.  In case the current limitations are more restrictive
   than what was agreed on in the session signaling, the media receiver
   reports its initial estimate of these two quantities to the media
   sender using a TMMBR message.  Overall message traffic is reduced by
   the possibility of including tuples for multiple media senders in
   the same TMMBR message.

   The media sender applies an algorithm such as that specified in
   section 3.5.4.2 to select which of the tuples it has received are
   most limiting (i.e., the bounding set as defined in section 2.2).
   It modifies its operation to stay within the feasible region (as
   defined
   in section 2.2), and also sends out a TMMBN to the media receivers
   indicating the selected bounding set.  That notification also
   indicates who was responsible for the tuples in the bounding set,
   i.e., the "owner"(s) of the limitation.  A session participant that
   owns no tuple in the bounding set is called a "non-owner".

   If a media receiver does not own one of the tuples in the bounding
   set reported by the TMMBN, it applies the same algorithm as the media
   sender to determine if its current estimated (maximum total media bit
   rate, overhead) tuple would enter the bounding set if known to the
   media sender.  If so, it issues a TMMBR reporting the tuple value to
   the sender.  Otherwise, it takes no action for the moment.
   Periodically, its estimated tuple values may change or it may receive
   a new TMMBN.  If so, it reapplies the algorithm to decide whether it
   needs to issue a TMMBR.

   If, alternatively, a media receiver owns one of the tuples in the
   reported bounding set, it takes no action until such time as its
   estimate of its own tuple values changes.  At that time, it sends a
   TMMBR to the media sender to report the changed values.

   A media receiver may change status between owner and non-owner of a
   bounding tuple between one TMMBN message and the next.  Thus, it must
   check the contents of each TMMBN to determine its subsequent actions.

   Implementations may use other algorithms of their choosing, as long
   as the bit rate limitations resulting from the exchange of TMMBR and
   TMMBN messages are at least as strict (at least as low, in the bit
   rate dimension) as the ones resulting from the use of the
   aforementioned algorithm.

   Obviously, in point-to-point cases, when there is only one media
   receiver, this receiver becomes "owner" once it receives the first
   TMMBN in response to its own TMMBR, and stays "owner" for the rest of
   the session.  Therefore, when it is known that there will always be
   only a single media receiver, the above algorithm is not required.
   Media receivers that are aware they are the only ones in a session
   can send TMMBR messages with bit rate limits both higher and lower
   than the previously notified limit, at any time (subject to the AVPF
   [RFC4585] RTCP RR send timing rules).  However, it may be difficult
   for a session participant to determine if it is the only receiver in
   the session.  Because of this, any implementation of TMMBR is
   required to include the algorithm described in the next section or a
   stricter equivalent.
3.5.4.2. Algorithm for Establishing Current Limitations
   This section introduces an example algorithm for the calculation of
   a session limit.  Other algorithms can be employed, as long as the
   result of the calculation is at least as restrictive as the result
   that is obtained by this algorithm.

   First, it is important to consider the implications of using a tuple
   for limiting the media sender's behavior.  The bit rate and the
   overhead value result in a two-dimensional solution space for the
   calculation of the bit rate of media streams.  Fortunately, the two
   variables are linked.  Specifically, the bit rate available for RTP
   payloads is equal to the TMMBR reported bit rate minus the packet
   rate used, multiplied by the TMMBR reported overhead converted to
   bits.  As a result, when different bit rate/overhead combinations
   need to be considered, the packet rate determines the correct
   limitation.  This is perhaps best explained by an example:

   Example:

      Receiver A: TMMBR_max_total_BR = 35 kbps, TMMBR_OH = 40 bytes
      Receiver B: TMMBR_max_total_BR = 40 kbps, TMMBR_OH = 60 bytes

   For a given packet rate (PR), the bit rate available for media
   payloads in RTP will be:

      Max_net_media_BR_A =
               TMMBR_max_total_BR_A - PR * TMMBR_OH_A * 8 ... (1)

      Max_net_media_BR_B =
               TMMBR_max_total_BR_B - PR * TMMBR_OH_B * 8 ... (2)

   For a PR = 20, these calculations will yield a Max_net_media_BR_A =
   28600 bps and Max_net_media_BR_B = 30400 bps, which suggests that
   receiver A is the limiting one for this packet rate.  However, at a
   certain PR there is a switchover point at which receiver B becomes
   the limiting one.  The switchover point can be identified by setting
   Max_net_media_BR_A equal to Max_net_media_BR_B and breaking out PR:

           TMMBR_max_total_BR_A - TMMBR_max_total_BR_B
      PR = -------------------------------------------- ... (3)
                  8*(TMMBR_OH_A - TMMBR_OH_B)

   which, for the numbers above, yields 31.25 as the switchover point
   between the two limits.  That is, for packet rates below 31.25 per
   second, receiver A is the limiting receiver, and for higher packet
   rates, receiver B is more limiting.  The implications of this
   behavior have to be considered by implementations that are going to
   control media encoding and its packetization.  As exemplified above,
   multiple TMMBR limits may apply to the trade-off between net media
   bit rate and packet rate.  Which limitation applies depends on the
   packet rate being considered.
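
   The example can be checked mechanically.  The following sketch
   simply restates equations (1) to (3) in Python; variable names
   follow the equations, with bit rates in bps, overhead in bytes, and
   packet rates in packets per second.

      def max_net_media_br(tmmbr_max_total_br, tmmbr_oh, pr):
          # Equations (1)/(2): net media bit rate left once the
          # per-packet overhead (bytes, hence the factor 8) is paid at
          # packet rate PR.
          return tmmbr_max_total_br - pr * tmmbr_oh * 8

      def switchover_pr(br_a, oh_a, br_b, oh_b):
          # Equation (3): packet rate where the two limitations cross.
          return (br_a - br_b) / (8 * (oh_a - oh_b))

      assert max_net_media_br(35000, 40, 20) == 28600  # receiver A
      assert max_net_media_br(40000, 60, 20) == 30400  # receiver B
      assert switchover_pr(35000, 40, 40000, 60) == 31.25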

   This also has implications for how the TMMBR mechanism needs to work.
   First, there is the possibility that multiple TMMBR tuples are
   providing limitations on the media sender.  Secondly, there is a need
   for any session participant (media sender and receivers) to be able
   to determine if a given tuple will become a limitation upon the media
   sender, or if the set of already given limitations is stricter than
   the given values.  In the absence of the ability to make this
   determination, the suppression of TMMBRs would not work.

   The basic idea of the algorithm is as follows.  Each TMMBR tuple can
   be viewed as the equation of a straight line (cf. equations (1) and
   (2)) in a space where packet rate lies along the X-axis and net bit
   rate along the Y-axis.  The lower envelope of the set of lines
   corresponding to the complete set of TMMBR tuples, together with the
   X and Y axes, defines a polygon.  Points lying within this polygon
   are combinations of packet rate and bit rate that meet all of the
   TMMBR constraints.  The highest feasible packet rate within this
   region is the minimum of the rate at which the bounding polygon meets
   the X-axis or the session maximum packet rate (SMAXPR, measured in
   packets per second) provided by signaling, if any.  Typically, a
   media sender will prefer to operate at a lower rate than this
   theoretical maximum, so as to increase the rate at which actual media
   content reaches the receivers.  The purpose of the algorithm is to
   distinguish the TMMBR tuples constituting the bounding set and thus
   delineate the feasible region, so that the media sender can select
   its preferred operating point within that region.

   Figure 1 below shows a bounding polygon formed by TMMBR tuples A and
   B.  A third tuple C lies outside the bounding polygon and is
   therefore irrelevant in determining feasible trade-offs between media
   rate and packet rate.  The line labeled ss..s represents the limit on
   packet rate imposed by the session maximum packet rate (SMAXPR)
   obtained by signaling during session setup.  In Figure 1, the limit
   determined by tuple B happens to be more restrictive than SMAXPR.
   The situation could easily be the reverse, meaning that the bounding
   polygon is terminated on the right by the vertical line representing
   the SMAXPR constraint.
   Net  ^
   Media|a   c   b             s
   Bit  |  a   c  b            s
   Rate |    a   c b           s
        |      a   cb          s
        |        a   c         s
        |          a  bc       s
        |            a b c     s
        |              ab  c   s
        |  Feasible      b   c s
        |   region        ba   s
        |                  b a s c
        |                   b  s   c
        |                    b s a
        |                     bs
        +------------------------------>

              Packet rate

    Figure 1 - Geometric Interpretation of TMMBR Tuples

   Note that the slopes of the lines making up the bounding polygon are
   increasingly negative as one moves in the direction of increasing
   packet rate.  Note also that with slight rearrangement, equations (1)
   and (2) have the canonical form:

          y = mx + b

   where
     m is the slope and has value equal to the negative of the tuple
     overhead (in bits),
   and
     b is the y-intercept and has value equal to the tuple maximum
     total media bit rate.

   These observations lead to the conclusion that when processing the
   TMMBR tuples to select the initial bounding set, one should sort and
   process the tuples by order of increasing overhead.  Once a
   particular tuple has been added to the bounding set, all tuples not
   already selected and having lower overhead can be eliminated, because
   the next side of the bounding polygon has to be steeper (i.e., the
   corresponding TMMBR must have higher overhead) than the latest added
   tuple.

   Line cc..c in Figure 1 illustrates another principle.  This line is
   parallel to line aa..a, but has a higher Y-intercept.  That is, the
   corresponding TMMBR tuple contains a higher maximum total media bit
   rate value.  Since line cc..c is outside the bounding polygon, it
   illustrates the conclusion that if two TMMBR tuples have the same
   overhead value, the one with higher maximum total media bit rate
   value cannot be part of the bounding set and can be set aside.

   Two further observations complete the algorithm.  Obviously, moving
   from the left, the successive corners of the bounding polygon (i.e.,
   the intersection points between successive pairs of sides) lie at
   successively higher packet rates.  On the other hand, again moving
   from the left, each successive line making up the bounding set
   crosses the X-axis at a lower packet rate.

   The complete algorithm can now be specified.  The algorithm works
   with two lists of TMMBR tuples, the candidate list X and the selected
   list Y, both ordered by increasing overhead value.  The algorithm
   terminates when all members of X have been discarded or removed for
   processing.  Membership of the selected list Y is probationary until
   the algorithm is complete.  Each member of the selected list is
   associated with an intersection value, which is the packet rate at
   which the line corresponding to that TMMBR tuple intersects with the
   line corresponding to the previous TMMBR tuple in the selected list.
   Each member of the selected list is also associated with a maximum
   packet rate value, which is the lesser of the session maximum packet
   rate SMAXPR (if any) and the packet rate at which the line
   corresponding to that tuple crosses the X-axis.

   When the algorithm terminates, the selected list is equal to the
   bounding set as defined in section 2.2.

   Initial Algorithm

   This algorithm is used by the media sender when it has received one
   or more TMMBRs and before it has determined a bounding set for the
   first time.

   1. Sort the TMMBR tuples by order of increasing overhead.  This is
      the initial candidate list X.

   2. When multiple tuples in the candidate list have the same overhead
      value, discard all but the one with the lowest maximum total media
      bit rate value.

   3. Select and remove from the candidate list the TMMBR tuple with the
      lowest maximum total media bit rate value.  If there is more than
      one tuple with that value, choose the one with the highest
      overhead value.  This is the first member of the selected list Y.
      Set its intersection value equal to zero.  Calculate its maximum
      packet rate as the minimum of SMAXPR (if available) and the value
      obtained from the following formula, which is the packet rate at
      which the corresponding line crosses the X-axis.

           Max_PR = TMMBR_max_total_BR / (8 * TMMBR_OH)  ... (4)

   4. Discard from the candidate list all tuples with a lower overhead
      value than the selected tuple.

   5. Remove the first remaining tuple from the candidate list for
      processing.  Call this the current candidate.

   6. Calculate the packet rate PR at the intersection of the line
      generated by the current candidate with the line generated by the
      last tuple in the selected list Y, using equation (3).

   7. If the calculated value PR is equal to or lower than the
      intersection value stored for the last tuple of the selected list,
      discard the last tuple of the selected list and go back to step 6
      (retaining the same current candidate).

      Note that the choice of the initial member of the selected list Y
      in step 3 guarantees that the selected list will never be emptied
      by this process, meaning that the algorithm must eventually (if
      not immediately) fall through to step 8.

   8. (This step is reached when the calculated PR value of the current
      candidate is greater than the intersection value of the current
      last member of the selected list Y.)  If the calculated value PR
      of the current candidate is lower than the maximum packet rate
      associated with the last tuple in the selected list, add the
      current candidate tuple to the end of the selected list.  Store PR
      as its intersection value.  Calculate its maximum packet rate as
      the lesser of SMAXPR (if available) and the maximum packet rate
      calculated using equation (4).

   9. If any tuples remain in the candidate list, go back to step 5.
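
   The following Python sketch implements the nine steps above, under
   the assumption that each TMMBR tuple is represented as a (maximum
   total media bit rate in bps, overhead in bytes) pair and that at
   least one tuple has been received.  It is illustrative only; any
   implementation that is at least as strict is equally valid.

      def select_bounding_set(tuples, smaxpr=None):
          # Returns the selected list Y as (br, oh, intersection_pr,
          # max_pr) entries, ordered by increasing overhead.

          def line_max_pr(br, oh):
              # Packet rate at which the tuple's line crosses the
              # X-axis (equation (4)), capped by SMAXPR if signaled.
              pr = br / (8.0 * oh)
              return pr if smaxpr is None else min(pr, smaxpr)

          # Steps 1 and 2: order by increasing overhead; for equal
          # overhead keep only the lowest maximum total media bit rate.
          lowest_br = {}
          for br, oh in tuples:
              if oh not in lowest_br or br < lowest_br[oh]:
                  lowest_br[oh] = br
          x = sorted(((br, oh) for oh, br in lowest_br.items()),
                     key=lambda t: t[1])

          # Step 3: first member of Y is the tuple with the lowest bit
          # rate, ties broken towards the highest overhead.
          first = min(x, key=lambda t: (t[0], -t[1]))
          x.remove(first)
          y = [(first[0], first[1], 0.0, line_max_pr(*first))]

          # Step 4: drop candidates with lower overhead than the
          # selected tuple.
          x = [t for t in x if t[1] > first[1]]

          while x:                         # step 9: more candidates?
              br_c, oh_c = x.pop(0)        # step 5: current candidate
              while True:
                  br_l, oh_l, isect_l, maxpr_l = y[-1]
                  # Step 6: intersection with the line of the last
                  # selected tuple, using equation (3).
                  pr = (br_l - br_c) / (8.0 * (oh_l - oh_c))
                  if pr <= isect_l:
                      # Step 7: the last selected tuple is no longer on
                      # the lower envelope; drop it and recompute.
                      # (Per the note above, this cannot empty the list.)
                      y.pop()
                      continue
                  break
              # Step 8: keep the candidate only if its line cuts the
              # polygon before the last selected line leaves the
              # feasible region.
              if pr < maxpr_l:
                  y.append((br_c, oh_c, pr, line_max_pr(br_c, oh_c)))
          return y

   Running this on the two receiver tuples of the earlier example,
   select_bounding_set([(35000, 40), (40000, 60)]), yields both tuples
   in the bounding set, with the second one entering at the 31.25
   packets per second switchover point computed above.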

   Incremental Algorithm

   The previous algorithm covered the initial case, where no selected
   list had previously been created.  It also applied only to the media
   sender.  When a previously created selected list is available at
   either the media sender or media receiver, two other cases can be
   considered:

        o when a TMMBR tuple not currently in the selected list is a
          candidate for addition;
        o when the values change in a TMMBR tuple currently in the
          selected list.

   At the media receiver, these cases correspond, respectively, to those
   of the non-owner and owner of a tuple in the TMMBN-reported bounding
   set.

   In either case, the process of updating the selected list to take
   account of the new/changed tuple can use the basic algorithm
   described above, with the modification that the initial candidate set
   consists only of the existing selected list and the new or changed
   tuple.  Some further optimization is possible (beyond starting with a
   reduced candidate set) by taking advantage of the following
   observations.

   The first observation is that if the new/changed candidate becomes
   part of the new selected list, the result may be to cause zero or
   more other tuples to be dropped from the list.  However, if more than
   one other tuple is dropped, the dropped tuples will be consecutive.
   This can be confirmed geometrically by visualizing a new line that
   cuts off a series of segments from the previously existing bounding
   polygon.  The cut-off segments are connected one to the next, the
   geometric equivalent of consecutive tuples in a list ordered by
   overhead value.  Beyond the dropped set in either direction all of
   the tuples that were in the earlier selected list will be in the
   updated one.  The second observation is that, leaving aside the new
   candidate, the order of tuples remaining in the updated selected list
   is unchanged because their overhead values have not changed.

   The consequence of these two observations is that, once the placement
   of the new candidate and the extent of the dropped set of tuples (if
   any) has been determined, the remaining tuples can be copied directly
   from the candidate list into the selected list, preserving their
   order.  This conclusion suggests the following modified algorithm:

       o Run steps 1-4 of the basic algorithm.

       o If the new candidate has survived steps 2 and 4 and has become
          the new first member of the selected list, run steps 5-9 on
          subsequent candidates until another candidate is added to the
          selected list.  Then move all remaining candidates to the
          selected list, preserving their order.

       o If the new candidate has survived steps 2 and 4 and has not
          become the new first member of the selected list, start by
          moving all tuples in the candidate list with lower overhead
          values than that of the new candidate to the selected list,
          preserving their order.  Run steps 5-9 for the new candidate,
          with the modification that the intersection values and maximum
          packet rates for the tuples on the selected list have to be
          calculated on the fly because they were not previously stored.
          Continue processing only until a subsequent tuple has been
          added to the selected list, then move all remaining candidates
          to the selected list, preserving their order.

          Note that the new candidate could be added to the selected
          list only to be dropped again when the next tuple is
          processed.  It can easily be seen that in this case the new
          candidate does not displace any of the earlier tuples in the
          selected list.  The limitations of ASCII art make this
          difficult to show in a figure.  Line cc..c in Figure 1 would
          be an example if it had a steeper slope (tuple C had a higher
          overhead value), but still intersected line aa..a beyond where
          line aa..a intersects line bb..b.

   The algorithm just described is approximate, because it does not take
   account of tuples outside the selected list.  To see how such tuples
   can become relevant, consider Figure 1 and suppose that the maximum
   total media bit rate in tuple A increases to the point that line
   aa..a moves outside line cc..c.  Tuple A will remain in the bounding
   set calculated by the media sender.  However, once it issues a new
   TMMBN, media receiver C will apply the algorithm and discover that
   its tuple C should now enter the bounding set.  It will issue a TMMBR
   to the media sender, which will repeat its calculation and come to
   the appropriate conclusion.

   The rules of section 4.2 require that the media sender refrain from
   raising its sending rate until media receivers have had a chance to
   respond to the TMMBN.  In the example just given, this delay ensures
   that the relaxation of tuple A does not actually result in an attempt
   to send media at a rate exceeding the capacity at C.
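
   For illustration, the incremental case can reuse the
   select_bounding_set sketch given after the initial algorithm above:
   as described, the candidate set is simply the existing selected list
   plus the new or changed tuple (the further optimizations discussed
   above are omitted here).

      def update_bounding_set(selected, new_or_changed, smaxpr=None):
          # 'selected' is a previously computed selected list Y
          # (entries of the form (br, oh, intersection_pr, max_pr));
          # 'new_or_changed' is a (br, oh) pair.
          candidates = [(br, oh) for (br, oh, _i, _m) in selected]
          return select_bounding_set(candidates + [new_or_changed],
                                     smaxpr)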

3.5.4.3. Use of TMMBR in a Mixer-Based Multipoint Operation
   Assume a small mixer-based multiparty conference is ongoing, as
   depicted in Topo-Mixer of [RFC5117].  All participants have
   negotiated a common maximum bit rate that this session can use.  The
   conference operates over a number of unicast paths between the
   participants and the mixer.  The congestion situation on each of
   these paths can be monitored by the participant in question and by
   the mixer, utilizing, for example, RTCP receiver reports (RRs) or
   the transport protocol, e.g., Datagram Congestion Control Protocol
   (DCCP) [RFC4340].  However, any given participant has no knowledge
   of the congestion situation of the connections to the other
   participants.  Worse, without mechanisms similar to the ones
   discussed in this document, the mixer (which is aware of the
   congestion situation on
   all connections it manages) has no standardized means to inform media
   senders to slow down, short of forging its own receiver reports
   (which is undesirable).  In principle, a mixer confronted with such a
   situation is obliged to thin or transcode streams intended for
   connections that detected congestion.

   In practice, unfortunately, media-aware stream thinning is a very
   difficult and cumbersome operation and adds undesirable delay.  If
   media-unaware, it leads very quickly to unacceptable reproduced media
   quality.  Hence, a means to slow down senders even in the absence of
   congestion on their connections to the mixer is desirable.

   To allow the mixer to throttle traffic on the individual links,
   without performing transcoding, there is a need for a mechanism that
   enables the mixer to ask a participant's media encoders to limit the
   media stream bit rate they are currently generating.  TMMBR provides
   the required mechanism.  When the mixer detects congestion between
   itself and a given participant, it executes the following procedure:

   1. It starts thinning the media traffic to the congested participant
      to the supported bit rate.

   2. It uses TMMBR to request the media sender(s) to reduce the total
      media bit rate sent by them to the mixer, to a value that is in
      compliance with congestion control principles for the slowest
      link.  Slow refers here to the available bandwidth / bit rate /
      capacity and packet rate after congestion control.

   3. As soon as the bit rate has been reduced by the sending part, the
      mixer stops stream thinning implicitly, because there is no need
      for it once the stream is in compliance with congestion control.

   This use of stream thinning as an immediate reaction tool followed up
   by a quick control mechanism appears to be a reasonable compromise
   between media quality and the need to combat congestion.
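
   A minimal sketch of this three-step reaction follows; the thin_to
   and send_tmmbr hooks are hypothetical stand-ins for the mixer's
   actual thinning and RTCP machinery.

      class CongestedLegHandler:
          def __init__(self, thin_to, send_tmmbr, overhead_bytes):
              self.thin_to = thin_to        # hypothetical thinning hook
              self.send_tmmbr = send_tmmbr  # hypothetical TMMBR hook
              self.overhead = overhead_bytes
              self.thinning = False

          def on_congestion(self, supported_br_bps):
              # Step 1: immediate relief by thinning towards the
              # congested participant.
              self.thin_to(supported_br_bps)
              self.thinning = True
              # Step 2: ask the media sender(s) to come down to a rate
              # in compliance with congestion control on the slow link.
              self.send_tmmbr(supported_br_bps, self.overhead)

          def on_media_rate(self, observed_br_bps, supported_br_bps):
              # Step 3: thinning becomes unnecessary once the sender
              # complies with the requested limit.
              if self.thinning and observed_br_bps <= supported_br_bps:
                  self.thinning = False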

3.5.4.4. Use of TMMBR in Point-to-Multipoint Using Multicast or Translators
   In these topologies, corresponding to Topo-Multicast or
   Topo-Translator, RTCP RRs are transmitted globally.  This allows all
   participants to detect transmission problems such as congestion, on
   a medium timescale.  As all media senders are aware of the
   congestion situation of all media receivers, the rationale for the
   use of TMMBR in the previous section does not apply.  However, even
   in this case the congestion control response can be improved when
   the unicast
   links are using congestion controlled transport protocols (such as
   TCP or DCCP).  A peer may also report local limitations to the media
   sender.

3.5.4.5. Use of TMMBR in Point-to-Point Operation
   In use case 7, it is possible to use TMMBR to improve the
   performance when the known upper limit of the bit rate changes.  In
   this use case, the signaling protocol has established an upper limit
   for the session and total media bit rates.  However, at the time of
   transport link bit rate reduction, a receiver can avoid serious
   congestion by sending a TMMBR to the sending side.

   Thus, TMMBR is useful for putting restrictions on the application
   and thus placing the congestion control mechanism in the right
   ballpark.  However, TMMBR is usually unable to provide the
   continuously quick feedback loop required for real congestion
   control.  Nor do its semantics match those of congestion control,
   given its different purpose.  For these reasons, TMMBR SHALL NOT be
   used as a substitute for congestion control.
3.5.4.6. Reliability
   The reaction of a media sender to the reception of a TMMBR message
   is not immediately identifiable through inspection of the media
   stream.  Therefore, a more explicit mechanism is needed to avoid
   unnecessary re-sending of TMMBR messages.  Using a statistically
   based retransmission scheme would only provide statistical
   guarantees of the request being received.  It would also not avoid
   the retransmission of already received messages.  In addition, it
   would not allow for easy suppression of other participants'
   requests.  For these reasons, a mechanism based on explicit
   notification is used.

   Upon the reception of a TMMBR, a media sender sends a TMMBN
   containing the current bounding set, and indicating which session
   participants own that limit.  In multicast scenarios, that allows
   all other participants to suppress any request they may have, if
   their limitations are less strict than the current ones (i.e.,
   define lines lying outside the feasible region as defined in section
   2.2).  Keeping and notifying only the bounding set of tuples allows
   for small message sizes and media sender states.  A media sender
   only keeps state for the SSRCs of the current owners of the bounding
   set of tuples; all other requests and their sources are not saved.
   Once the bounding set has been established, new TMMBR messages
   should be generated only by owners of the bounding tuples and by
   other entities that determine (by applying the algorithm of section
   3.5.4.2 or its equivalent) that their limitations should now be part
   of the bounding set.
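
   As a rough illustration of the suppression rule, a session
   participant that does not own a bounding tuple can apply the
   bounding-set computation to the TMMBN-reported tuples plus its own
   tuple and send a TMMBR only if its own tuple would enter the
   bounding set (cf. section 3.5.4.1).  The sketch below reuses the
   select_bounding_set function sketched in section 3.5.4.2; tuples are
   again (maximum total media bit rate, overhead) pairs.

      def should_send_tmmbr(own_tuple, tmmbn_tuples, smaxpr=None):
          # own_tuple: this receiver's current (br, oh) estimate.
          # tmmbn_tuples: the (br, oh) pairs of the bounding set
          # reported in the most recent TMMBN.
          new_set = select_bounding_set(list(tmmbn_tuples) + [own_tuple],
                                        smaxpr)
          return any((br, oh) == own_tuple
                     for (br, oh, _i, _m) in new_set)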

