Network Working Group                                        G. Armitage
Request for Comments: 2022                                      Bellcore
Category: Standards Track                                  November 1996


       Support for Multicast over UNI 3.0/3.1 based ATM Networks.

Status of this Memo

   This document specifies an Internet standards track protocol for the
   Internet community, and requests discussion and suggestions for
   improvements.  Please refer to the current edition of the "Internet
   Official Protocol Standards" (STD 1) for the standardization state
   and status of this protocol.  Distribution of this memo is unlimited.

Abstract

   Mapping the connectionless IP multicast service over the connection
   oriented ATM services provided by UNI 3.0/3.1 is a non-trivial task.
   This memo describes a mechanism to support the multicast needs of
   Layer 3 protocols in general, and describes its application to IP
   multicasting in particular.

   ATM based IP hosts and routers use a Multicast Address Resolution
   Server (MARS) to support RFC 1112 style Level 2 IP multicast over the
   ATM Forum's UNI 3.0/3.1 point to multipoint connection service.
   Clusters of endpoints share a MARS and use it to track and
   disseminate information identifying the nodes listed as receivers for
   given multicast groups. This allows endpoints to establish and manage
   point to multipoint VCs when transmitting to the group.

   The MARS behaviour allows Layer 3 multicasting to be supported using
   either meshes of VCs or ATM level multicast servers. This choice may
   be made on a per-group basis, and is transparent to the endpoints.
Table of Contents

   1. Introduction.................................................   4
    1.1 The Multicast Address Resolution Server (MARS).............   5
    1.2 The ATM level multicast Cluster............................   5
    1.3 Document overview..........................................   6
    1.4 Conventions................................................   7
   2. The IP multicast service model...............................   7
   3. UNI 3.0/3.1 support for intra-cluster multicasting...........   8
    3.1 VC meshes..................................................   9
    3.2 Multicast Servers..........................................   9
    3.3 Tradeoffs..................................................  10
    3.4 Interaction with local UNI 3.0/3.1 signalling entity.......  11
   4. Overview of the MARS.........................................  12
    4.1 Architecture...............................................  12
    4.2 Control message format.....................................  12
    4.3 Fixed header fields in MARS control messages...............  13
      4.3.1 Hardware type..........................................  14
      4.3.2 Protocol type..........................................  14
      4.3.3 Checksum...............................................  15
      4.3.4 Extensions Offset......................................  15
      4.3.5 Operation code.........................................  16
      4.3.6 Reserved...............................................  16
   5. Endpoint (MARS client) interface behaviour...................  16
    5.1 Transmit side behaviour....................................  17
      5.1.1 Retrieving Group Membership from the MARS..............  18
      5.1.2 MARS_REQUEST, MARS_MULTI, and MARS_NAK messages........  20
      5.1.3 Establishing the outgoing multipoint VC................  22
      5.1.4 Monitoring updates on ClusterControlVC.................  24
        5.1.4.1 Updating the active VCs............................  24
        5.1.4.2 Tracking the Cluster Sequence Number...............  25
      5.1.5 Revalidating a VC's leaf nodes.........................  26
        5.1.5.1 When leaf node drops itself........................  27
        5.1.5.2 When a jump is detected in the CSN.................  27
      5.1.6 'Migrating' the outgoing multipoint VC.................  27
    5.2. Receive side behaviour....................................  29
      5.2.1 Format of the MARS_JOIN and MARS_LEAVE Messages........  30
        5.2.1.1 Important IPv4 default values......................  32
      5.2.2 Retransmission of MARS_JOIN and MARS_LEAVE messages....  33
      5.2.3 Cluster member registration and deregistration.........  34
    5.3 Support for Layer 3 group management.......................  34
    5.4 Support for redundant/backup MARS entities.................  36
      5.4.1 First response to MARS problems........................  36
      5.4.2 Connecting to a backup MARS............................  37
      5.4.3 Dynamic backup lists, and soft redirects...............  37
    5.5 Data path LLC/SNAP encapsulations..........................  40
      5.5.1 Type #1 encapsulation..................................  40
      5.5.2 Type #2 encapsulation..................................  41
      5.5.3 A Type #1 example......................................  42
   6. The MARS in greater detail...................................  42
    6.1 Basic interface to Cluster members.........................  43
      6.1.1 Response to MARS_REQUEST...............................  43
      6.1.2 Response to MARS_JOIN and MARS_LEAVE...................  43
      6.1.3 Generating MARS_REDIRECT_MAP...........................  45
      6.1.4 Cluster Sequence Numbers...............................  45
    6.2 MARS interface to Multicast Servers (MCSs).................  46
      6.2.1 MARS_REQUESTs for MCS supported groups.................  47
      6.2.2 MARS_MSERV and MARS_UNSERV messages....................  47
      6.2.3 Registering a Multicast Server (MCS)...................  49
      6.2.4 Modified response to MARS_JOIN and MARS_LEAVE..........  49
      6.2.5 Sequence numbers for ServerControlVC traffic...........  51
    6.3 Why global sequence numbers?...............................  52
    6.4 Redundant/Backup MARS Architectures........................  52
   7. How an MCS utilises a MARS...................................  53
    7.1 Association with a particular Layer 3 group................  53
    7.2 Termination of incoming VCs................................  54
    7.3 Management of outgoing VC..................................  54
    7.4 Use of a backup MARS.......................................  54
   8. Support for IP multicast routers.............................  54
    8.1 Forwarding into a Cluster..................................  55
    8.2 Joining in 'promiscuous' mode..............................  55
    8.3 Forwarding across the cluster..............................  56
     8.4 Joining in 'semi-promiscuous' mode.........................  56
    8.5 An alternative to IGMP Queries.............................  57
    8.6 CMIs across multiple interfaces............................  58
   9. Multiprotocol applications of the MARS and MARS clients......  59
   10. Supplementary parameter processing..........................  60
    10.1 Interpreting the mar$extoff field.........................  60
    10.2 The format of TLVs........................................  60
    10.3 Processing MARS messages with TLVs........................  62
    10.4 Initial set of TLV elements...............................  62
   11. Key Decisions and open issues...............................  62
   Security Considerations.........................................  65
   Acknowledgments.................................................  65
   Author's Address................................................  65
   References......................................................  66
   Appendix A. Hole punching algorithms............................  67
   Appendix B. Minimising the impact of IGMP in IPv4 environments..  69
   Appendix C. Further comments on 'Clusters'......................  71
   Appendix D. TLV list parsing algorithm..........................  72
   Appendix E. Summary of timer values.............................  73
   Appendix F. Pseudo code for MARS operation......................  74
1.  Introduction.

   Multicasting is the process whereby a source host or protocol entity
   sends a packet to multiple destinations simultaneously using a
   single, local 'transmit' operation. The more familiar cases of
   Unicasting and Broadcasting may be considered to be special cases of
   Multicasting (with the packet delivered to one destination, or 'all'
   destinations, respectively).

   Most network layer models, like the one described in RFC 1112 [1] for
   IP multicasting, assume sources may send their packets to abstract
   'multicast group addresses'.  Link layer support for such an
   abstraction is assumed to exist, and is provided by technologies such
   as Ethernet.

   ATM is being utilized as a new link layer technology to support a
   variety of protocols, including IP. With RFC 1483 [2] the IETF
   defined a multiprotocol mechanism for encapsulating and transmitting
   packets using AAL5 over ATM Virtual Channels (VCs). However, the ATM
   Forum's currently published signalling specifications (UNI 3.0 [8]
   and UNI 3.1 [4]) do not provide the multicast address abstraction.
   Unicast connections are supported by point to point, bidirectional
   VCs. Multicasting is supported through point to multipoint
   unidirectional VCs. The key limitation is that the sender must have
   prior knowledge of each intended recipient, and explicitly establish
   a VC with itself as the root node and the recipients as the leaf
   nodes.

   This document has two broad goals:

      Define a group address registration and membership distribution
      mechanism that allows UNI 3.0/3.1 based networks to support the
      multicast service of protocols such as IP.

      Define specific endpoint behaviours for managing point to
      multipoint VCs to achieve multicasting of layer 3 packets.

   As the IETF is currently at the forefront of using wide area
   multicasting, this document's descriptions will often focus on the
   IP service model of RFC 1112.  A final chapter will note the
   multiprotocol application of the architecture.

   This document avoids discussion of one highly non-trivial aspect of
   using ATM - the specification of QoS for VCs being established in
   response to higher layer needs. Research in this area is still very
   formative [7], and so it is assumed that future documents will
   clarify the mapping of QoS requirements to VC establishment. The
   default at this time is that VCs are established with a request for
   Unspecified Bit Rate (UBR) service, as typified by the IETF's use of
   VCs for unicast IP, described in RFC 1755 [6].

1.1  The Multicast Address Resolution Server (MARS).

   The Multicast Address Resolution Server (MARS) is an extended analog
   of the ATM ARP Server introduced in RFC 1577 [3].  It acts as a
   registry, associating layer 3 multicast group identifiers with the
   ATM interfaces representing the group's members.  MARS messages
   support the distribution of multicast group membership information
   between MARS and endpoints (hosts or routers).  Endpoint address
   resolution entities query the MARS when a layer 3 address needs to be
   resolved to the set of ATM endpoints making up the group at any one
   time. Endpoints keep the MARS informed when they need to join or
   leave particular layer 3 groups.  To provide for asynchronous
   notification of group membership changes the MARS manages a point to
   multipoint VC out to all endpoints desiring multicast support.

   Valid arguments can be made for two different approaches to ATM level
   multicasting of layer 3 packets - through meshes of point to
   multipoint VCs, or ATM level multicast servers (MCS). The MARS
   architecture allows either VC meshes or MCSs to be used on a per-
   group basis.

1.2  The ATM level multicast Cluster.

   Each MARS manages a 'cluster' of ATM-attached endpoints. A Cluster is
   defined as

      The set of ATM interfaces choosing to participate in direct ATM
      connections to achieve multicasting of AAL_SDUs between
      themselves.

   In practice, a Cluster is the set of endpoints that choose to use the
   same MARS to register their memberships and receive their updates
   from.

   By implication of this definition, traffic between interfaces
   belonging to different Clusters passes through an inter-cluster
   device. (In the IP world an inter-cluster device would be an IP
   multicast router with logical interfaces into each Cluster.) This
   document explicitly avoids specifying the nature of inter-cluster
   (layer 3) routing protocols.

   The mapping of clusters to other constrained sets of endpoints (such
   as unicast Logical IP Subnets) is left to each network administrator.
   However, for the purposes of conformance with this document network
   administrators MUST ensure that each Logical IP Subnet (LIS) is
   served by a separate MARS, creating a one-to-one mapping between
   cluster and unicast LIS.  IP multicast routers then interconnect each
   LIS as they do with conventional subnets. (Relaxation of this
   restriction MAY only occur after future research on the interaction
   between existing layer 3 multicast routing protocols and unicast
   subnet boundaries.)

   The term 'Cluster Member' will be used in this document to refer to
   an endpoint that is currently using a MARS for multicast support.
   Thus the potential scope of a cluster may be the entire membership of a
   LIS, while the actual scope of a cluster depends on which endpoints
   are actually cluster members at any given time.

1.3  Document overview.

   This document assumes an understanding of concepts explained in
   greater detail in RFC 1112, RFC 1577, UNI 3.0/3.1, and RFC 1755 [6].

   Section 2 provides an overview of IP multicast and what RFC 1112
   required from Ethernet.

   Section 3 describes in more detail the multicast support services
   offered by UNI 3.0/3.1, and outlines the differences between VC
   meshes and multicast servers (MCSs) as mechanisms for distributing
   packets to multiple destinations.

   Section 4 provides an overview of the MARS and its relationship to
   ATM endpoints. This section also discusses the encapsulation and
   structure of MARS control messages.

   Section 5 substantially defines the entire cluster member endpoint
   behaviour, on both receive and transmit sides. This includes both
   normal operation and error recovery.

   Section 6 summarises the required behaviour of a MARS.

   Section 7 looks at how a multicast server (MCS) interacts with a
   MARS.

   Section 8 discusses how IP multicast routers may make novel use of
   promiscuous and semi-promiscuous group joins. Also discussed is a
   mechanism designed to reduce the amount of IGMP traffic issued by
   routers.

   Section 9 discusses how this document applies in the more general
   (non-IP) case.
   Section 10 summarises the key proposals, and identifies areas for
   future research that are generated by this MARS architecture.

   The appendices provide discussion on issues that arise out of the
   implementation of this document. Appendix A discusses MARS and
   endpoint algorithms for parsing MARS messages. Appendix B describes
   the particular problems introduced by the current IGMP paradigms, and
   possible interim work-arounds.  Appendix C discusses the 'cluster'
   concept in further detail, while Appendix D briefly outlines an
   algorithm for parsing TLV lists.  Appendix E summarises various timer
   values used in this document, and Appendix F provides example
   pseudo-code for a MARS entity.

1.4  Conventions.

   In this document the following coding and packet representation rules
   are used:

      All multi-octet parameters are encoded in big-endian form (i.e.
      the most significant octet comes first).

      In all multi-bit parameters bit numbering begins at 0 for the
      least significant bit when stored in memory (i.e. the n'th bit has
      weight of 2^n).

      A bit that is 'set', 'on', or 'one' holds the value 1.

      A bit that is 'reset', 'off', 'clear', or 'zero' holds the value
      0.

2.  Summary of the IP multicast service model.

   Under IP version 4 (IPv4), addresses in the range between 224.0.0.0
   and 239.255.255.255 (224.0.0.0/4) are termed 'Class D' or 'multicast
   group' addresses. These abstractly represent all the IP hosts in the
   Internet (or some constrained subset of the Internet) who have
   decided to 'join' the specified group.

   RFC1112 requires that a multicast-capable IP interface must support
   the transmission of IP packets to an IP multicast group address,
   whether or not the node considers itself a 'member' of that group.
   Consequently, group membership is effectively irrelevant to the
   transmit side of the link layer interfaces. When Ethernet is used as
   the link layer (the example used in RFC1112), no address resolution
   is required to transmit packets. An algorithmic mapping from IP
   multicast address to Ethernet multicast address is performed locally
   before the packet is sent out the local interface in the same 'send
   and forget' manner as a unicast IP packet.
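
   (As a non-normative illustration, the RFC 1112 mapping places the
   low-order 23 bits of the IPv4 group address into the low-order 23
   bits of the Ethernet multicast prefix 01-00-5E-00-00-00. A small
   Python sketch, with an invented function name:)

      import socket, struct

      def ip_to_ethernet_multicast(group):
          # Place the low-order 23 bits of the IPv4 group address
          # into the low-order 23 bits of 01-00-5E-00-00-00.
          ip = struct.unpack("!I", socket.inet_aton(group))[0]
          mac = 0x01005E000000 | (ip & 0x7FFFFF)
          return "-".join("%02X" % b for b in mac.to_bytes(6, "big"))

      assert ip_to_ethernet_multicast("224.1.2.3") == "01-00-5E-01-02-03"
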
   Joining and Leaving an IP multicast group is more explicit on the
   receive side - with the primitives JoinLocalGroup and LeaveLocalGroup
   affecting what groups the local link layer interface should accept
   packets from. When the IP layer wants to receive packets from a
   group, it issues JoinLocalGroup. When it no longer wants to receive
   packets, it issues LeaveLocalGroup. A key point to note is that
   changing state is a local issue; it has no effect on other hosts
   attached to the Ethernet.

   IGMP is defined in RFC 1112 to support IP multicast routers attached
   to a given subnet. Hosts issue IGMP Report messages when they perform
   a JoinLocalGroup, or in response to an IP multicast router sending an
   IGMP Query. By periodically transmitting queries IP multicast routers
   are able to identify what IP multicast groups have non-zero
   membership on a given subnet.

   A specific IP multicast address, 224.0.0.1, is allocated for the
   transmission of IGMP Query messages. Host IP layers issue a
   JoinLocalGroup for 224.0.0.1 when they intend to participate in IP
   multicasting, and issue a LeaveLocalGroup for 224.0.0.1 when they've
   ceased participating in IP multicasting.

   Each host keeps a list of IP multicast groups it has been
   JoinLocalGroup'd to. When a router issues an IGMP Query on 224.0.0.1
   each host begins to send IGMP Reports for each group it is a member
   of. IGMP Reports are sent to the group address, not 224.0.0.1, "so
   that other members of the same group on the same network can overhear
   the Report" and not bother sending one of their own. IP multicast
   routers conclude that a group has no members on the subnet when IGMP
   Queries no longer elicit associated replies.

3. UNI 3.0/3.1 support for intra-cluster multicasting.

   For the purposes of the MARS protocol, both UNI 3.0 and UNI 3.1
   provide equivalent support for multicasting. Differences between UNI
   3.0 and UNI 3.1 in required signalling elements are covered in RFC
   1755.

   This document will describe its operation in terms of 'generic'
   functions that should be available to clients of a UNI 3.0/3.1
   signalling entity in a given ATM endpoint. The ATM model broadly
   describes an 'AAL User' as any entity that establishes and manages
   VCs and underlying AAL services to exchange data. An IP over ATM
   interface is a form of 'AAL User' (although the default LLC/SNAP
   encapsulation mode specified in RFC1755 really requires that an 'LLC
   entity' is the AAL User, which in turn supports the IP/ATM
   interface).
   The most fundamental limitations of UNI 3.0/3.1's multicast support
   are:

      Only point to multipoint, unidirectional VCs may be established.

      Only the root (source) node of a given VC may add or remove leaf
      nodes.

   Leaf nodes are identified by their unicast ATM addresses.  UNI
   3.0/3.1 defines two ATM address formats - native E.164 and NSAP
   (although it must be stressed that the NSAP address is so called
   because it uses the NSAP format - an ATM endpoint is NOT a Network
   layer termination point).  In UNI 3.0/3.1 an 'ATM Number' is the
   primary identification of an ATM endpoint, and it may use either
   format. Under some circumstances an ATM endpoint must be identified
   by both a native E.164 address (identifying the attachment point of a
   private network to a public network), and an NSAP address ('ATM
   Subaddress') identifying the final endpoint within the private
   network. For the rest of this document the term 'ATM address' will
   be used to mean either a single 'ATM Number' or an 'ATM Number'
   combined with an 'ATM Subaddress'.
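
   (A non-normative sketch of this composite notion of an ATM address,
   in Python, with invented field names:)

      from dataclasses import dataclass
      from typing import Optional

      @dataclass(frozen=True)
      class ATMAddress:
          # An 'ATM Number' (native E.164 or NSAP format), optionally
          # combined with an NSAP format 'ATM Subaddress'.
          number: bytes
          subaddress: Optional[bytes] = None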

3.1 VC meshes.

   The most fundamental approach to intra-cluster multicasting is the
   multicast VC mesh.  Each source establishes its own independent point
   to multipoint VC (a single multicast tree) to the set of leaf nodes
   (destinations) that it has been told are members of the group it
   wishes to send packets to.

   Interfaces that are both senders and group members (leaf nodes) to a
   given group will originate one point to multipoint VC, and terminate
   one VC for every other active sender to the group. This criss-
   crossing of VCs across the ATM network gives rise to the name 'VC
   mesh'.

3.2 Multicast Servers.

   An alternative model has each source establish a VC to an
   intermediate node - the multicast server (MCS). The multicast server
   itself establishes and manages a point to multipoint VC out to the
   actual desired destinations.

   The MCS reassembles AAL_SDUs arriving on all the incoming VCs, and
   then queues them for transmission on its single outgoing point to
   multipoint VC. (Reassembly of incoming AAL_SDUs is required at the
   multicast server as AAL5 does not support cell level multiplexing of
   different AAL_SDUs on a single outgoing VC.)
   The leaf nodes of the multicast server's point to multipoint VC must
   be established prior to packet transmission, and the multicast server
   requires an external mechanism to identify them. A side-effect of
   this method is that ATM interfaces that are both sources and group
   members will receive copies of their own packets back from the MCS.
   (An alternative method is for the multicast server to explicitly
   retransmit packets on individual VCs between itself and group
   members. A benefit of this second approach is that the multicast
   server can ensure that sources do not receive copies of their own
   packets.)

   The simplest MCS pays no attention to the contents of each AAL_SDU.
   It is purely an AAL/ATM level device. More complex MCS architectures
   (where a single endpoint serves multiple layer 3 groups) are
   possible, but are beyond the scope of this document. More detailed
   discussion is provided in section 7.

3.3 Tradeoffs.

   Arguments over the relative merits of VC meshes and multicast servers
   have raged for some time. Ultimately the choice depends on the
   relative trade-offs a system administrator must make between
   throughput, latency, congestion, and resource consumption. Even
   criteria such as latency can mean different things to different
   people - is it end to end packet time, or the time it takes for a
   group to settle after a membership change? The final choice depends
   on the characteristics of the applications generating the multicast
   traffic.

   If we focus on the data path we might prefer the VC mesh because
   it lacks the obvious single congestion point of an MCS.  Throughput
   is likely to be higher, and end to end latency lower, because the
   mesh lacks the intermediate AAL_SDU reassembly that must occur in
   MCSs. The underlying ATM signalling system also has greater
   opportunity to ensure optimal branching points at ATM switches along
   the multicast trees originating on each source.

   However, resource consumption will be higher. Every group member's
   ATM interface must terminate a VC per sender (consuming on-board
   memory for state information, an instance of an AAL service, and
   buffering in accordance with the vendor's particular architecture).
   By contrast, with a multicast server only 2 VCs (one out, one in)
   are required, independent of the number of senders. The allocation of
   VC related resources is also lower within the ATM cloud when using a
   multicast server. These points may be considered to have merit in
   environments where VCs across the UNI or within the ATM cloud are
   valuable (e.g. the ATM provider charges on a per VC basis), or AAL
   contexts are limited in the ATM interfaces of endpoints.
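
   (To make the comparison concrete, consider the following
   non-normative back-of-the-envelope sketch, which assumes every one
   of n interfaces in a group is simultaneously a sender and a group
   member:)

      def mesh_vcs_per_interface(n):
          # Originate one point to multipoint VC, and terminate one
          # VC from each of the other n-1 active senders.
          return 1 + (n - 1)

      def mcs_vcs_per_interface():
          # One VC out to the MCS, plus one leaf of the MCS's point
          # to multipoint VC, independent of the number of senders.
          return 2

      for n in (2, 8, 32):
          print(n, mesh_vcs_per_interface(n), mcs_vcs_per_interface())
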
   If we focus on the signalling load then MCSs have the advantage when
   faced with dynamic sets of receivers. Every time the membership of a
   multicast group changes (a leaf node needs to be added or dropped),
   only a single point to multipoint VC needs to be modified when using
   an MCS. This generates a single signalling event across the MCS's
   UNI. However, when membership change occurs in a VC mesh, signalling
   events occur at the UNIs of every traffic source - the transient
   signalling load scales with the number of sources. This has obvious
   ramifications if you define latency as the time for a group's
   connectivity to stabilise after change (especially as the number of
   senders increases).

   Finally, as noted above, MCSs introduce a 'reflected packet' problem,
   which requires additional per-AAL_SDU information to be carried in
   order for layer 3 sources to detect their own AAL_SDUs coming back.

   The MARS architecture allows system administrators to utilize either
   approach on a group by group basis.

3.4 Interaction with local UNI 3.0/3.1 signalling entity.

   The following generic signalling functions are presumed to be
   available to local AAL Users:

   L_CALL_RQ     - Establish a unicast VC to a specific endpoint.
   L_MULTI_RQ    - Establish multicast VC to a specific endpoint.
   L_MULTI_ADD   - Add new leaf node to previously established VC.
   L_MULTI_DROP  - Remove specific leaf node from established VC.
   L_RELEASE     - Release unicast VC, or all Leaves of a multicast VC.

   The signalling exchanges and local information passed between AAL
   User and UNI 3.0/3.1 signalling entity with these functions are
   outside the scope of this document.

   The following indications are assumed to be available to AAL Users,
   generated by the local UNI 3.0/3.1 signalling entity:

   L_ACK          - Successful completion of a local request.
   L_REMOTE_CALL  - A new VC has been established to the AAL User.
   ERR_L_RQFAILED - A remote ATM endpoint rejected an L_CALL_RQ,
                    L_MULTI_RQ, or L_MULTI_ADD.
   ERR_L_DROP     - A remote ATM endpoint dropped off an existing VC.
   ERR_L_RELEASE  - An existing VC was terminated.

   The signalling exchanges and local information passed between AAL
   User and UNI 3.0/3.1 signalling entity with these functions are
   outside the scope of this document.
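
   (Purely as a non-normative illustration, the request primitives and
   indications above might be rendered as the following abstract Python
   interface; the class and method names are invented for this sketch:)

      from abc import ABC, abstractmethod

      class UNI31Signalling(ABC):
          # Requests issued by the AAL User.
          @abstractmethod
          def l_call_rq(self, atm_address): ...
          @abstractmethod
          def l_multi_rq(self, atm_address): ...
          @abstractmethod
          def l_multi_add(self, vc, atm_address): ...
          @abstractmethod
          def l_multi_drop(self, vc, atm_address): ...
          @abstractmethod
          def l_release(self, vc): ...

      class AALUser(ABC):
          # Indications delivered by the signalling entity.
          @abstractmethod
          def l_ack(self, request): ...
          @abstractmethod
          def l_remote_call(self, vc): ...
          @abstractmethod
          def err_l_rqfailed(self, request): ...
          @abstractmethod
          def err_l_drop(self, vc, atm_address): ...
          @abstractmethod
          def err_l_release(self, vc): ...
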
4.  Overview of the MARS.

   The MARS may reside within any ATM endpoint that is directly
   addressable by the endpoints it is serving. Endpoints wishing to join
   a multicast cluster must be configured with the ATM address of the
   node on which the cluster's MARS resides.  (Section 5.4 describes how
   backup MARSs may be added to support the activities of a cluster.
   References to 'the MARS' in following sections will be assumed to
   mean the acting MARS for the cluster.)

4.1  Architecture.

   Architecturally the MARS is an evolution of the RFC 1577 ARP Server.
   Whilst the ARP Server keeps a table of {IP,ATM} address pairs for all
   IP endpoints in an LIS, the MARS keeps extended tables of {layer 3
   address, ATM.1, ATM.2, ..... ATM.n} mappings. It can either be
   configured with certain mappings, or dynamically 'learn' mappings.
   The format of the {layer 3 address} field is generally not
   interpreted by the MARS.
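
   (A non-normative sketch of such a table in Python follows; the
   layer 3 group address is treated as an opaque key, and the function
   names are invented:)

      mars_table = {}   # {layer 3 group (opaque): set of ATM addresses}

      def register_member(group, atm_address):
          mars_table.setdefault(group, set()).add(atm_address)

      def deregister_member(group, atm_address):
          members = mars_table.get(group, set())
          members.discard(atm_address)
          if not members:
              mars_table.pop(group, None)

      def resolve(group):
          # The sort simply gives a deterministic ordering for display.
          return sorted(mars_table.get(group, set()))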

   A single ATM node may support multiple logical MARSs, each of which
   supports a separate cluster. The restriction is that each MARS has a
   unique ATM address (e.g. a different SEL field in the NSAP address of
   the node on which the multiple MARSs reside).  By definition a single
   instance of a MARS may not support more than one cluster.

   The MARS distributes group membership update information to cluster
   members over a point to multipoint VC known as the ClusterControlVC.
   Additionally, when Multicast Servers (MCSs) are being used it also
   establishes a separate point to multipoint VC out to registered MCSs,
   known as the ServerControlVC.  All cluster members are leaf nodes of
   ClusterControlVC. All registered multicast servers are leaf nodes of
   ServerControlVC (described further in section 6).

   The MARS does NOT take part in the actual multicasting of layer 3
   data packets.

4.2  Control message format.

   By default all MARS control messages MUST be LLC/SNAP encapsulated
   using the following codepoints:

      [0xAA-AA-03][0x00-00-5E][0x00-03][MARS control message]
          (LLC)       (OUI)     (PID)

   (This is a PID from the IANA OUI.)
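
   (Non-normatively, the eight octet LLC/SNAP header above could be
   prepended in Python as follows:)

      # LLC AA-AA-03, OUI 00-00-5E (IANA), PID 00-03.
      MARS_LLC_SNAP = bytes.fromhex("aaaa03" "00005e" "0003")

      def encapsulate(mars_control_message: bytes) -> bytes:
          return MARS_LLC_SNAP + mars_control_message
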
   MARS control messages are made up of 4 major components:

      [Fixed header][Mandatory fields][Addresses][Supplementary TLVs]

   [Fixed header] contains fields indicating the operation being
   performed and the layer 3 protocol being referred to (e.g. IPv4, IPv6,
   AppleTalk, etc). The fixed header also carries checksum information,
   and hooks to allow this basic control message structure to be re-used
   by other query/response protocols.

   The [Mandatory fields] section carries fixed width parameters that
   depend on the operation type indicated in [Fixed header].

   The following [Addresses] area carries variable length fields for
   source and target addresses - both hardware (e.g. ATM) and layer 3
   (e.g. IPv4). These provide the fundamental information that the
   registrations, queries, and updates use and operate on. For the MARS
   protocol fields in [Fixed header] indicate how to interpret the
   contents of [Addresses].

   [Supplementary TLVs] represents an optional list of TLV (type,
   length, value) encoded information elements that may be appended to
   provide supplementary information.  This feature is described in
   further detail in section 10.

   MARS messages contain variable length address fields. In all cases
   null addresses SHALL be encoded as zero length, and have no space
   allocated in the message.

   (Unique LLC/SNAP encapsulation of MARS control messages means MARS
   and ARP Server functionality may be implemented within a common
   entity, and share a client-server VC, if the implementor so chooses.
   Note that the LLC/SNAP codepoint for MARS is different to the
   codepoint used for ATMARP.)

4.3  Fixed header fields in MARS control messages.

   The [Fixed header] has the following format:

      Data:
       mar$afn      16 bits  Address Family (0x000F).
       mar$pro      56 bits  Protocol Identification.
       mar$hdrrsv   24 bits  Reserved. Unused by MARS control protocol.
       mar$chksum   16 bits  Checksum across entire MARS message.
       mar$extoff   16 bits  Extensions Offset.
       mar$op       16 bits  Operation code.
       mar$shtl      8 bits  Type & length of source ATM number. (r)
       mar$sstl      8 bits  Type & length of source ATM subaddress. (q)
   mar$shtl and mar$sstl provide information regarding the source's
   hardware (ATM) address. In the MARS protocol these fields are always
   present, as every MARS message carries a non-null source ATM address.
   In all cases the source ATM address is the first variable length
   field in the [Addresses] section.

   The other fields in [Fixed header] are described in the following
   subsections.
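
   (The following non-normative Python sketch packs the [Fixed header]
   fields listed above in big-endian order. mar$pro is split into its
   type and SNAP subfields (section 4.3.2), mar$op into its version and
   type octets (section 4.3.5), and mar$hdrrsv is left as zero; the
   example values are illustrative only.)

      import struct

      def pack_fixed_header(afn, pro_type, pro_snap, chksum, extoff,
                            op_version, op_type, shtl, sstl):
          # 2+2+5+3+2+2+1+1+1+1 = 20 octets.
          return struct.pack("!H H 5s 3s H H B B B B",
                             afn, pro_type, pro_snap, b"\x00" * 3,
                             chksum, extoff, op_version, op_type,
                             shtl, sstl)

      hdr = pack_fixed_header(afn=0x000F,        # value from section 4.3.1
                              pro_type=0x0800,   # IPv4, by Ethertype
                              pro_snap=b"\x00" * 5,
                              chksum=0, extoff=0,
                              op_version=0x00,
                              op_type=0x01,      # illustrative value only
                              shtl=0, sstl=0)    # encodings: section 5.1.2
      assert len(hdr) == 20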

4.3.1  Hardware type.

   mar$afn defines the type of link layer addresses being carried. The
   value of 0x000F SHALL be used by MARS messages generated in
   accordance with this document. The encoding of ATM addresses and
   subaddresses when mar$afn = 0x000F is described in section 5.1.2.
   Encodings when mar$afn != 0x000F are outside the scope of this
   document.

4.3.2  Protocol type.

   The mar$pro field is made up of two subfields:

      mar$pro.type 16 bits  Protocol type.
      mar$pro.snap 40 bits  Optional SNAP extension to protocol type.

   The mar$pro.type field is a 16 bit unsigned integer representing the
   following number space:

      0x0000 to 0x00FF  Protocols defined by the equivalent NLPIDs.
      0x0100 to 0x03FF  Reserved for future use by the IETF.
      0x0400 to 0x04FF  Allocated for use by the ATM Forum.
      0x0500 to 0x05FF  Experimental/Local use.
      0x0600 to 0xFFFF  Protocols defined by the equivalent Ethertypes.

   (based on the observations that valid Ethertypes are never smaller
   than 0x600, and NLPIDs never larger than 0xFF.)

   The NLPID value of 0x80 is used to indicate a SNAP encoded extension
   is being used to encode the protocol type. When mar$pro.type == 0x80
   the SNAP extension is encoded in the mar$pro.snap field.  This is
   termed the 'long form' protocol ID.

   If mar$pro.type != 0x80 then the mar$pro.snap field MUST be zero on
   transmit and ignored on receive. The mar$pro.type field itself
   identifies the protocol being referred to. This is termed the 'short
   form' protocol ID.
   In all cases, where a protocol has an assigned number in the
   mar$pro.type space (excluding 0x80) the short form MUST be used when
   transmitting MARS messages. Additionally, where a protocol has valid
   short and long forms of identification, receivers MAY choose to
   recognise the long form.

   mar$pro.type values other than 0x80 MAY have 'long forms' defined in
   future documents.

   For the remainder of this document references to mar$pro SHALL be
   interpreted to mean mar$pro.type, or mar$pro.type in combination with
   mar$pro.snap as appropriate.

   The use of different protocol types is described further in section
   9.
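
   (A non-normative sketch of this selection in Python:)

      def effective_protocol_id(pro_type, pro_snap):
          # Section 4.3.2: NLPID 0x80 signals the 'long form', where
          # the 40 bit SNAP extension carries the real identity.
          if pro_type == 0x80:
              return ("long", pro_snap)
          # Otherwise the 'short form': mar$pro.type alone identifies
          # the protocol and mar$pro.snap is ignored on receive.
          return ("short", pro_type)

      assert effective_protocol_id(0x0800, b"\x00" * 5) == ("short", 0x0800)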

4.3.3 Checksum.

   The mar$chksum field carries a standard IP checksum calculated across
   the entire MARS control message (excluding the LLC/SNAP header). The
   field is set to zero before performing the checksum calculation.

   As the entire LLC/SNAP encapsulated MARS message is protected by the
   32 bit CRC of the AAL5 transport, implementors MAY choose to ignore
   the checksum facility. If no checksum is calculated these bits MUST
   be reset before transmission. If no checksum is performed on
   reception, this field MUST be ignored. If a receiver is capable of
   validating a checksum it MUST only perform the validation when the
   received mar$chksum field is non-zero. Messages arriving with
   mar$chksum of 0 are always considered valid.
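
   (One possible, non-normative rendering of this calculation in
   Python, applied to the MARS message with mar$chksum already zeroed:)

      def mars_checksum(message: bytes) -> int:
          # 16 bit one's complement of the one's complement sum,
          # as used for IP headers; odd length messages are padded
          # with a trailing zero octet for the calculation.
          if len(message) % 2:
              message += b"\x00"
          total = 0
          for i in range(0, len(message), 2):
              total += (message[i] << 8) | message[i + 1]
              total = (total & 0xFFFF) + (total >> 16)
          return ~total & 0xFFFF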

4.3.4 Extensions Offset.

   The mar$extoff field identifies the existence and location of an
   optional supplementary parameters list. Its use is described in
   section 10.
4.3.5 Operation code.

   The mar$op field is further subdivided into two 8 bit fields -
   mar$op.version (leading octet) and mar$op.type (trailing octet).
   Together they indicate the nature of the control message, and the
   context within which its [Mandatory fields], [Addresses], and
   [Supplementary TLVs] should be interpreted.

      mar$op.version
         0               MARS protocol defined in this document.
         0x01 - 0xEF     Reserved for future use by the IETF.
         0xF0 - 0xFE     Allocated for use by the ATM Forum.
         0xFF            Experimental/Local use.

      mar$op.type
         Value indicates operation being performed, within context of
         the control protocol version indicated by mar$op.version.

   For the rest of this document references to the mar$op value SHALL be
   taken to mean mar$op.type, with mar$op.version = 0x00. The values
   used in this document are summarised in section 11.

   (Note this number space is independent of the ATMARP operation code
   number space.)
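
   (Non-normatively, splitting mar$op into its two octets in Python:)

      def split_op(mar_op):
          # Leading octet is mar$op.version, trailing octet mar$op.type.
          return (mar_op >> 8) & 0xFF, mar_op & 0xFF

      assert split_op(0x0001) == (0x00, 0x01)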

4.3.6 Reserved.

   mar$hdrrsv may be subdivided and assigned specific meanings for other
   control protocols indicated by mar$op.version != 0.


