choice. However, it is possible for distinct NSAPs to represent
access to essentially different network services. For example, one
NSAP may provide access to a connectionless network service by means
of an internetwork protocol. Another NSAP may provide access to a
connection-oriented service, for use in communicating on a local
subnetwork. It is also possible to have several distinct NSAPs on
the same subnetwork, each of which provides some service features of
local interest that distinguish it from the other NSAPs.
A transport entity accessing an X.25 service could use the logical
channel numbers for the virtual circuits as NCEP_ids. An NSAP
providing access only to a permanent virtual circuit would need only
a single NCEP_id to multiplex the transport connections. Similarly,
a CSMA/CD network would need only a single NCEP_id, although the
network is connectionless.
6.2 Management issues.
The Class 4 transport protocol has been successfully operated over
both connectionless and connection-oriented network services. In
both modes of operation there exists some information about the
network service that a transport implementation could make use of to
enhance performance. For example, knowledge of expected delay to a
destination would permit optimal selection of retransmission timer
value for a connection instance. The information that transport
implementations could use and the mechanisms for obtaining and
managing that information are, as a group, not well understood.
Projects are underway within ISO committees to address the management
of OSI as an architecture and the management of the transport layer
as a layer.
For operation of the Class 4 transport protocol over
connection-oriented network service, several issues must be addressed:
a. When should a new network connection be opened to support a
transport connection (versus multiplexing)?
b. When a network connection is no longer being used by any
transport connection, should the network connection be closed
or remain open awaiting a new transport connection?
c. When a network connection is aborted, how should the peer
transport entities that were using the connection cooperate to
re-establish it? If splitting is not to be used, how can this
re-establishment be achieved such that one and only one
network connection results?
The Class 4 transport specification permits a transport entity to
multiplex several transport connections (TCs) over a single network
connection (NC) and to split a single TC across several NCs. The
implementor must decide whether to support these options and, if so,
how. Even when the implementor decides never to initiate splitting
or multiplexing, the transport entity must be prepared to accept this
behavior from other transport implementations. When multiplexing is
used, TPDUs from multiple TCs can be concatenated into a single
network service data unit (NSDU). Therefore, damage to an NSDU may
affect several TCs. In general, Class 2 connections should not be
multiplexed with Class 4 connections. The reason for this is that if
the error rate on the network connection is high enough that the
error recovery capability of Class 4 is needed, then it is too high
for Class 2 operation. The deciding criterion is the tolerance of
the user for frequent disconnection and data errors.
Several issues in splitting must be considered:
1) maximum number of NCs that can be assigned to a given TC;
2) minimum number of NCs required by a TC to maintain the "quality
of service" expected (default of 1);
3) when to split;
4) inactivity control;
5) assignment of received TPDU to TC; and
6) notification to TC of NC status (assigned, dissociated, etc.).
All of these except 3) are covered in the formal description. The
methods used in the formal description need not be used explicitly,
but they suggest approaches to implementation.
To support the possibility of multiplexing and splitting the
implementor must provide a common function below the TC state
machines that maps a set of TCs to a set of NCs. The formal
description provides a general means of doing this, requiring mainly
implementation environment details to complete the mechanism.
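As an illustration of one such mapping structure, the following C
sketch (all names and limits are hypothetical, not taken from the
formal description) records, for each TC, the NCs it is split across
and, for each NC, the TCs multiplexed onto it:

#define MAX_NC_PER_TC 4             /* illustrative limits */
#define MAX_TC_PER_NC 8

struct nc;                          /* one network connection */

struct tc {                         /* one transport connection */
    int        local_ref;           /* local reference number */
    struct nc *ncs[MAX_NC_PER_TC];  /* NCs this TC is split across */
    int        nc_count;
};

struct nc {
    int        ncep_id;             /* e.g., an X.25 logical channel number */
    struct tc *tcs[MAX_TC_PER_NC];  /* TCs multiplexed onto this NC */
    int        tc_count;
};

A received TPDU is then assigned to a TC by the reference number it
carries, not by the NC on which it arrived, which is what permits
both multiplexing and splitting below the TC state machines.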
Decisions about when network connections are to be opened or closed
can be made locally using local decision criteria. Factors that may
affect the decision include costs of establishing an NC, costs of
maintaining an open NC with little traffic flowing, and estimates of
the probability of data flow between the source node and known
destinations. Management of this type is feasible when a priori
knowledge exists but is very difficult when a need exists to adapt to
dynamic traffic patterns and/or fluctuating network charging policies.
To handle the issue of re-establishment of the NC after failure, the
ISO has proposed an addendum N3279 [ISO85c] to the basic transport
standard describing a network connection management subprotocol
(NCMS) to be used in conjunction with the transport protocol.
7 Enhanced checksum algorithm.
7.1 Effect of checksum on transport performance.
Performance experiments with Class 4 transport at the NBS have
revealed that straightforward implementation of the Fletcher checksum
using the algorithm recommended in the ISO transport standard leads
to severe reduction of transport throughput. Early modeling
indicated throughput drops of as much as 66% when using the checksum.
Work by Anastase Nakassis [NAK85] of the NBS led to several improved
implementations. The performance degradation due to checksum is now
in the range of 40-55%, when using the improved implementations.
It is possible that transport may be used over a network that does
not provide error detection. In such a case the transport checksum
is necessary to ensure data integrity. In many instances, the
underlying subnetwork provides some error checking mechanism: the
X.25 link layer uses the 16-bit HDLC frame check sequence, IEEE 802.3
and 802.4 rely on a 32 bit cyclic redundancy check, and satellite
link hardware frequently provides the HDLC frame check sequence.
However, these
are all link or physical layer error detection mechanisms which
operate only point-to-point and not end-to-end as the transport
checksum does. Some links provide error recovery while other links
simply discard damaged messages. If adequate error recovery is
provided, then the transport checksum is extra overhead, since
transport will detect when the link mechanism has discarded a message
and will retransmit the message. Even when the IP fragments the
TPDU, the receiving IP will discover a hole in the reassembly buffer
and discard the partially assembled datagram (i.e., TPDU). Transport
will detect this missing TPDU and recover by means of the
retransmission mechanism.
7.2 Enhanced algorithm.
The Fletcher checksum algorithm given in an annex to IS 8073 is not
part of the standard, and is included in the annex as a suggestion to
implementors. This was done so that as improvements or new
algorithms came along, they could be incorporated without the
necessity to change the standard.
Nakassis has provided three ways of coding the algorithm, shown
below, to provide implementors with insight rather than universally
transportable code. One version uses a high order language (C). A
second version uses C and VAX assembler, while a third uses only VAX
assembler. In all the versions, the constant MODX appears. This
represents the maximum number of sums that can be taken without
experiencing overflow. This constant depends on the processor's word
size and the arithmetic mode, as follows:
Choose n such that
(n+1)*(254 + 255*n/2) <= 2**N - 1
where N is the number of usable bits for signed (unsigned)
arithmetic. Nakassis shows [NAK85] that it is sufficient that
n <= sqrt( 2*(2**N - 1)/255 )
and that n = sqrt( 2*(2**N - 1)/255 ) - 2 generally yields
usable values. The constant MODX then is taken to be n.
Some typical values for MODX are given in the following table.
BITS/WORD MODX ARITHMETIC
15 14 signed
16 21 unsigned
31 4102 signed
32 5802 unsigned
This constant is used to reduce the number of times mod 255 addition
is invoked, by way of speeding up the algorithm.
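The table entries can be reproduced from the inequality above. The
following C fragment, offered only as a cross-check of the table,
steps downward from Nakassis' approximate bound to the largest n
satisfying the exact condition:

#include <math.h>

/* exact MODX for a given number N of usable arithmetic bits: the
   largest n with (n+1)*(254 + 255*n/2) <= 2**N - 1 */
long modx( int nbits )
{
    double limit = pow( 2.0, (double)nbits ) - 1.0;
    long   n = (long)sqrt( 2.0*limit/255.0 );   /* approximate bound */

    while ( (n + 1)*(254.0 + 255.0*n/2.0) > limit )
        n--;
    return n;   /* yields 14, 21, 4102, 5802 for N = 15, 16, 31, 32 */
}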
It should be noted that it is also possible to implement the checksum
in separate hardware. However, because of the placement of the
checksum within the TPDU header rather than at the end of the TPDU,
implementing this with registers and an adder will require
significant associated logic to access and process each octet of the
TPDU and to move the checksum octets into the proper positions in the
TPDU. An alternative to designing this supporting logic is to use a
fast, microcoded 8-bit CPU to handle this access and the computation.
Although there is some speed penalty over separate logic, savings
may be realized through a reduced chip count and development time.
7.2.1 C language algorithm.
#define MODX 4102       /* max octets summed before mod 255 reduction */

/* Fletcher checksum of mess[0..len-1].  k is the position of the first
   checksum octet as an offset from mess, counting from one (octets k
   and k+1 hold the checksum); k == 0 means compute only, do not encode.
   Returns the 16-bit checksum. */
int encodecc( unsigned char mess[], int len, int k )
{
    int i, c0 = 0, c1 = 0, x, y;

    if (k > 0)
        mess[k-1] = mess[k] = 0;      /* zero the check octets first */
    for (i = 0; i < len; i++) {
        c0 += mess[i];                /* sum1 = sum1 + byte */
        c1 += c0;                     /* sum2 = sum2 + sum1 */
        if (i % MODX == MODX-1) {     /* defer mod 255 to every MODX octets */
            c0 %= 255; c1 %= 255;
        }
    }
    c0 %= 255; c1 %= 255;
    if (k > 0) {                      /* compute and insert the check octets */
        x = ((len - k)*c0 - c1) % 255;  if (x <= 0) x += 255;
        y = 510 - c0 - x;               if (y > 255) y -= 255;
        mess[k-1] = x;  mess[k] = y;
    }
    return (c1 << 8) + c0;            /* sum2 in high octet, sum1 in low */
}
7.2.2 C/assembler algorithm.
; calling sequence optm(message,length,&c0,&c1) where
; message is an array of bytes
; length is the length of the array
; &c0 and &c1 are the addresses of the counters to hold the
; remainders of the first and second order partial sums
movl 4(ap),r8 ; r8---> message
movl 8(ap),r9 ; r9=length
clrq r4 ; r5=r4=0
clrq r6 ; r7=r6=0
clrl r3 ; clear high order bytes of r3
movl #255,r10 ; r10 holds the value 255
movl #4102,r11 ; r11= MODX
xloop: movl r11,r7 ; set r7=MODX
cmpl r9,r7 ; is r9>=r7 ?
bgeq yloop ; if yes, go and execute the inner
; loop MODX times.
movl r9,r7 ; otherwise set r7, the inner loop
; counter, to the number of unprocessed octets
yloop: movb (r8)+,r3 ;
addl2 r3,r4 ; sum1=sum1+byte
addl2 r4,r6 ; sum2=sum2+sum1
sobgtr r7,yloop ; while r7>0 return to yloop
; for mod 255 addition
ediv r10,r6,r0,r6 ; r6=remainder
ediv r10,r4,r0,r4 ;
subl2 r11,r9 ; adjust r9
bgtr xloop ; go for another loop if necessary
movl r4,@12(ap) ; first argument
movl r6,@16(ap) ; second argument
ashl #8,r6,r0 ;
addl2 r4,r0 ; r0=256*r6+r4, the checksum
ret ; return with the checksum in r0
7.2.3 Assembler algorithm.
buff0: .blkb 3 ; allocate 3 bytes so that aloop is
; optimally aligned
; macro implementation of Fletcher's algorithm.
; calling sequence ip=encodemm(message,length,k) where
; message is an array of bytes
; length is the length of the array
; k is the location of the check octets if >0,
; an indication not to encode if 0.
movl 4(ap),r8 ; r8---> message
movl 8(ap),r9 ; r9=length
clrq r4 ; r5=r4=0
clrq r6 ; r7=r6=0
clrl r3 ; clear high order bytes of r3
movl #255,r10 ; r10 holds the value 255
movl 12(ap),r2 ; r2=k
bleq bloop ; if r2<=0, we do not encode
subl3 r2,r9,r11 ; set r11=L-k
addl2 r8,r2 ; r2---> octet k+1
clrb (r2) ; clear check octet k+1
clrb -(r2) ; clear check octet k, r2---> octet k.
bloop: movw #4102,r7 ; set r7 (inner loop counter) = to MODX
cmpl r9,r7 ; if r9>=MODX, then go directly to adjust r9
bgeq aloop ; and execute the inner loop MODX times.
movl r9,r7 ; otherwise set r7, the inner loop counter,
; equal to r9, the number of the
; unprocessed characters
aloop: movb (r8)+,r3 ;
addl2 r3,r4 ; c0=c0+byte
addl2 r4,r6 ; sum2=sum2+sum1
sobgtr r7,aloop ; while r7>0 return to aloop
; for mod 255 addition
ediv r10,r6,r0,r6 ; r6=remainder
ediv r10,r4,r0,r4 ;
subl2 #4102,r9 ;
bgtr bloop ; go for another loop if necessary
ashl #8,r6,r0 ; r0=256*r6
addl2 r4,r0 ; r0=256*r6+r4
cmpl r2,r7 ; since r7=0, we are checking if r2 is
bleq exit ; zero or less: if yes we bypass
; the encoding.
movl r6,r8 ; r8=c1
mull3 r11,r4,r6 ; r6=(L-k)*c0
ediv r10,r6,r7,r6 ; r6 = (L-k)*c0 mod(255)
subl2 r8,r6 ; r6= ((L-k)*c0)%255 -c1 and if negative,
bgtr byte1 ; we must
addl2 r10,r6 ; add 255
byte1: movb r6,(r2)+ ; save the octet and let r2---> octet k+1
addl2 r6,r4 ; r4=r4+r6=(x+c0)
subl3 r4,r10,r4 ; r4=255-(x+c0)
bgtr byte2 ; if >0 r4=octet (k+1)
addl2 r10,r4 ; r4=255+r4
byte2: movb r4,(r2) ; save y in octet k+1
exit: ret ; return with the checksum in r0
8 Parameter selection.
8.1 Connection control.
Expressions for timer values used to control the general transport
connection behavior are given in IS 8073. However, values for the
specific factors in the expressions are not given and the expressions
are only estimates. The derivation of timer values from these
expressions is not mandatory in the standard. The timer value
expressions in IS 8073 are for a connection-oriented network service
and may not apply to a connectionless network service.
The following symbols are used to denote factors contributing to
timer values throughout the remainder of this Part.
Elr = expected maximum transit delay, local to remote
Erl = expected maximum transit delay, remote to local
Ar = time needed by remote entity to generate an acknowledgement
Al = time needed by local entity to generate an acknowledgement
x = local processing time for an incoming TPDU
Mlr = maximum NSDU lifetime, local to remote
Mrl = maximum NSDU lifetime, remote to local
T1 = bound for maximum time local entity will wait for
acknowledgement before retransmitting a TPDU
R = bound for maximum time local entity will continue to retransmit
a TPDU that requires acknowledgement
N = bound for maximum number of times local entity will transmit
a TPDU requiring acknowledgement
L = bound for the maximum time between the transmission of a
TPDU and the receipt of any acknowledgment relating to it.
I = bound for the time after which an entity will initiate
procedures to terminate a transport connection if a TPDU is
not received from the peer entity
W = bound for the maximum time an entity will wait before
transmitting up-to-date window information
These symbols and their definitions correspond to those given in
Clause 12 of IS 8073.
8.1.1 Give-up timer.
The give-up timer determines the amount of time the transport
entity will continue to await an acknowledgement (or other
appropriate reply) of a transmitted message after the message
has been retransmitted the maximum number of times. The
recommendation given in IS 8073 for values of this timer is
T1 + W + Mrl, for DT and ED TPDUs
T1 + Mrl, for CR, CC, and DR TPDUs,
where T1 = Elr + Erl + Ar + x.
However, it should be noted that Ar will not be known for either the
CR or the CC TPDU, and that Elr and Erl may vary considerably due to
routing in some connectionless network services. In Part 8.3.1, the
determination of values for T1 is discussed in more detail. Values
for Mrl generally are relatively fixed for a given network service.
Since Mrl is usually much larger than expected values of T1, a
rule-of-thumb for the give-up timer value is 2*Mrl + Al + x for the
CR, CC and DR TPDUs and 2*Mrl + W for DT and ED TPDUs.
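Expressed in code, the rule of thumb is simply the following (a
sketch; the factor values and time units are installation matters):

/* rule-of-thumb give-up timer values; all arguments in one time unit */
long giveup_cr_cc_dr( long Mrl, long Al, long x )
{
    return 2*Mrl + Al + x;      /* CR, CC and DR TPDUs */
}

long giveup_dt_ed( long Mrl, long W )
{
    return 2*Mrl + W;           /* DT and ED TPDUs */
}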
8.1.2 Inactivity timer.
This timer measures the maximum time period during which a
transport connection can be inactive, i.e., the maximum time an
entity can wait without receiving incoming messages. A usable value
for the inactivity timer is
I = 2*( max( T1,W )*N ).
This accounts for the possibility that the remote peer is using a
window timer value different from that of the local peer. Note that
an inactivity timer is important for operation over connectionless
network services, since the periodic receipt of AK TPDUs is the only
way that the local entity can be certain that its peer is still
functioning.
8.1.3 Window timer.
The window timer has two purposes. It is used to assure that the
remote peer entity periodically receives the current state of the
local entity's flow control, and it ensures that the remote peer
entity is aware that the local entity is still functioning. The
first purpose is necessary to place an upper bound on the time
necessary to resynchronize the flow control should an AK TPDU which
notifies the remote peer of increases in credit be lost. The second
purpose is necessary to prevent the inactivity timer of the remote
peer from expiring. The value for the window timer, W, depends on
several factors, among which are the transit delay, the
acknowledgement strategy, and the probability of TPDU loss in the
network. Generally, W should satisfy the following condition:
W > C*(Erl + x)
where C is the maximum amount of credit offered. The rationale for
this condition is that the right-hand side represents the maximum
time for receiving the entire window. The protocol requires that all
data received be acknowledged when the upper edge of the window is
seen as a sequence number in a received DT TPDU. Since the window
timer is reset each time an AK TPDU is transmitted, there is usually
no need to set the timer to any less than the value on the right-hand
side of the condition. An exception is when both C and the maximum
TPDU size are large, and Erl is large.
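As a worked example of the condition (the numbers are assumed for
illustration only): with C = 16, Erl = 35 ms and x = 5 ms, the bound
is W > 16*(35 + 5) = 640 ms, computed as in the sketch below.

/* illustrative lower bound for the window timer */
double window_timer_bound( double credit, double Erl, double x )
{
    return credit*(Erl + x);    /* e.g., 16*(35.0 + 5.0) = 640 ms */
}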
When the probability that a TPDU will be lost is small, the value of
W can be quite large, on the order of several minutes. However, this
increases the delay the peer entity will experience in detecting the
deactivation of the local transport entity. Thus, the value of W
should be given some consideration in terms of how soon the peer
entity needs to detect inactivity. This could be done by placing
such information into a quality of service record associated with the
transport connection.
When the expected network error rate is high, it may be necessary to
reduce the value of W to ensure that AK TPDUs are being received by
the remote entity, especially when both entities are quiescent for
some period of time.
8.1.4 Reference timer.
The reference timer measures the time period during which a
source reference must not be reassigned to another transport
connection, in order that spurious duplicate messages not
interfere with a new connection. The value for this timer
given in IS 8073 is
L = Mlr + Mrl + R + Ar
where R = T1*N + z,
in which z is a small tolerance quantity to allow for factors
internal to the entity. The use of L as a bound, however, must be
considered carefully. In some cases, L may be very large, and not
realistic as an upper or a lower bound. Such cases may be
encountered on routes over several catenated networks where R is set
high to provide adequate recovery from TPDU loss. In other cases L
may be very small, as when transmission is carried out over a LAN and
R is set small due to low probability of TPDU loss. When L is
computed to be very small, the reference need not be timed out at
all, since the probability of interference is zero. On the other
hand, if L is computed to be very large a smaller value can be used.
One choice for the value might be
L = min( R,(Mrl + Mlr)/2 )
If the reference number assigned to a new connection by an
entity is monotonically incremented for each new connection through
the entire available reference space (maximum 2**16 - 1), the timer
is not critical: the sequence space is large enough that it is likely
that there will be no spurious messages in the network by the time
reference numbers are reused.
8.2 Flow control.
The peer-to-peer flow control mechanism in the transport protocol
determines the upper bound on the pace of data exchange that occurs
on transport connections. The transport entity at each end of
a connection offers a credit to its peer representing the number of
data messages it is currently willing to accept. All received
data messages are acknowledged, with the acknowledgement message
containing the current receive credit information. The three
credit allocation schemes discussed below present a diverse set
of examples of how one might derive receive credit values.
8.2.1 Pessimistic credit allocation.
Pessimistic credit allocation is perhaps the simplest form of flow
control. It is similar in concept to X-on/X-off control. In this
method, the receiver always offers a credit of one TPDU. When the DT
TPDU is received, the receiver responds with an AK TPDU carrying a
credit of zero. When the DT TPDU has been processed by the receiving
entity, an additional AK TPDU carrying a credit of one will be sent.
The advantage to this approach is that the data exchange is very
tightly controlled by the receiving entity. The disadvantages are:
1) the exchange is slow, every data message requiring at least
the time of two round trips to complete the transfer, and 2)
the ratio of acknowledgement to data messages sent is 2:1. While not
recommended, this scheme illustrates one extreme method of credit
allocation.
8.2.2 Optimistic credit allocation.
At the other extreme from pessimistic credit allocation is optimistic
credit allocation, in which the receiver offers more credit than
it has buffers. This scheme has two dangers. First, if the
receiving user is not accepting data at a fast enough rate, the
receiving transport's buffers will become filled. Since the
credit offered was optimistic, the sending entity will continue to
transmit data, which must be dropped by the receiving entity for
lack of buffers. Eventually, the sender may reach the maximum
number of retransmissions and terminate the connection.
The second danger in using optimistic flow control is that the
sending entity may transmit faster than the receiving entity can
consume. This could result from the sender being implemented on
a faster machine or being a more efficient implementation. The
resultant behavior is essentially the same as described above:
receive buffer saturation, dropped data messages, and connection
termination.
The two dangers cited above can be ameliorated by implementing
the credit reduction scheme as specified in the protocol. However,
optimistic credit allocation works well only in limited
circumstances. In most situations it is inappropriate and
inefficient even when using credit reduction. Rather than seeking
to avoid congestion, optimistic allocation causes it, in most cases,
and credit reduction simply allows one to recover from congestion
once it has happened. Note that optimistic credit allocation
combined with caching out-of-sequence messages requires a
sophisticated buffer management scheme to avoid reassembly deadlock
and subsequent loss of the transport connection.
8.2.3 Buffer-based credit allocation.
Basing the receive credit offered on the actual availability of
receive buffers is a better method for achieving flow control.
Indeed, with few exceptions, the implementations that have been
studied used this method. It permits a continuous flow of data and
eliminates the need for the credit-restoring acknowledgements of the
pessimistic scheme.
Since only available buffer space is offered, the dangers of
optimistic credit allocation are also avoided.
The amount of buffer space needed to maintain a continuous bulk
data transfer, which represents the maximum buffer requirement, is
dependent on round trip delay and network speed. Generally, one
would want the buffer space, and hence the credit, large enough to
allow the sender to send continuously, so that incremental credit
updates arrive just prior to the sending entity exhausting the
available credit. One example is a single-hop satellite link
operating at 1.544 Mbits/sec. One report [COL85] indicates that
the buffer requirement necessary for continuous flow is approximately
120 Kbytes. For 10 Mbits/sec. IEEE 802.3 and 802.4 LANs, the figure
is on the order of 10K to 15K bytes [BRI85, INT85, MIL85].
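The 120 Kbyte figure is consistent with a simple bandwidth-delay
product estimate; a minimal check in C, assuming a round trip of
roughly 600 ms for the single-hop satellite case:

#include <stdio.h>

int main( void )
{
    double bps = 1544000.0;   /* link speed, bits/sec */
    double rtt = 0.600;       /* assumed round trip, sec: two 270 ms
                                 traversals plus processing time */

    /* continuous flow needs about one bandwidth-delay product */
    printf( "%.0f octets\n", bps*rtt/8.0 );   /* about 116K octets */
    return 0;
}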
An interesting modification to the buffer-based credit allocation
scheme is suggested by R.K. Jain [JAI85]. Whereas the approach
described above is based strictly on the available buffer space, Jain
suggests a scheme in which credit is reduced voluntarily by the
sending entity when network congestion is detected. Congestion
is implied by the occurrence of retransmissions. The sending
entity, recognizing retransmissions, reduces the local value of
credit to one, slowly raising it to the actual receive credit
allocation as error-free transmissions continue to occur. This
technique can overcome various types of network congestion occurring
when a fast sender overruns a slow receiver when no link level flow
control is available.
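A sketch of this idea follows; the structure and update rules are
illustrative only (the cited report [JAI85] defines the precise
policy):

/* voluntary credit reduction at the sending entity (illustrative) */
struct send_ctl {
    int offered;    /* credit last advertised by the receiver */
    int working;    /* credit the sender actually permits itself */
};

void on_retransmission( struct send_ctl *s )
{
    s->working = 1;             /* congestion implied: back off to one */
}

void on_good_ack( struct send_ctl *s )
{
    if ( s->working < s->offered )
        s->working++;           /* raise slowly toward the offered credit */
}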
8.2.4 Acknowledgement policies.
It is useful first to review the four uses of the acknowledgement
message in Class 4 transport. An acknowledgement message:
1) confirms correct receipt of data messages,
2) contains a credit allocation, indicating how many
data messages the entity is willing to receive
from the correspondent entity,
3) may optionally contain fields which confirm
receipt of critical acknowledgement messages,
known as flow control confirmation (FCC), and
4) is sent upon expiration of the window timer to
maintain a minimum level of traffic on an
otherwise quiescent connection.
In choosing an acknowledgement strategy, the first and third uses
mentioned above, data confirmation and FCC, are the most relevant;
the second, credit allocation, is determined according to the
flow control strategy chosen, and the fourth, the window
acknowledgement, is only mentioned briefly in the discussion on
flow control confirmation.
8.2.4.1 Acknowledgement of data.
The primary purpose of the acknowledgement message is to confirm
correct receipt of data messages. There are several choices that
the implementor must make when designing a specific
implementation. Which choice to make is based largely on the
operating environment (e.g., network error characteristics).
The issues to be decided upon are discussed in the sections below.
8.2.4.1.1 Misordered data messages.
Data messages received out of order due to network misordering
or loss can be cached or discarded. There is no single determinant
that guides the implementor to one or the other choice. Rather,
there are a number of issues to be considered.
One issue is the importance of maintaining a low delay as perceived
by the user. If transport data messages are lost or damaged in
transit, the absence of a positive acknowledgement will trigger a
retransmission at the sending entity. When the retransmitted data
message arrives at the receiving transport, it can be delivered
to the user. If subsequent data messages had been cached, they
could be delivered to the user at the same time. The delay
between the sending and receiving users would, on average, be
shorter than if messages subsequent to a lost message were
dependent on retransmission for recovery.
A second factor that influences the caching choice is the cost of
transmission. If transmission costs are high, it is more economical
to cache misordered data, in conjunction with the use of
selective acknowledgement (described below), to avoid unnecessary
retransmissions.
There are two resources that are conserved by not caching misordered
data: design and implementation time for the transport entity and CPU
processing time during execution. Savings in both categories
accrue because a non-caching implementation is simpler in its buffer
management. Data TPDUs are discarded rather than being reordered.
This avoids the overhead of managing the gaps in the received
data sequence space, searching of sequenced message lists, and
inserting retransmitted data messages into the lists.
8.2.4.1.2 Nth acknowledgement.
In general, an acknowledgement message is sent after receipt of
every N data messages on a connection. If N is small compared to the
credit offered, then a finer granularity of buffer control is
afforded to the data sender's buffer management function. Data
messages are confirmed in small groups, allowing buffers to be
reused sooner than if N were larger. The cost of having N small is
twofold. First, more acknowledgement messages must be generated by
one transport entity and processed by another, consuming some of the
CPU resource at both ends of a connection. Second, the
acknowledgement messages consume transmission bandwidth, which may
be expensive or limited.
For larger N, buffer management is less efficient because the
granularity with which buffers are controlled is N times the maximum
TPDU size. For example, when data messages are transmitted to a
receiving entity employing this strategy with large N, N data
messages must be sent before an acknowledgement is returned
(although the window timer causes the acknowledgement to be sent
eventually regardless of N). If the minimum credit allocation for
continuous operation is actually a fraction of N, a credit of N
must still be offered, and N receive buffers reserved, to achieve a
continuous flow of data messages. Thus, more receive buffers
are used than are actually needed. (Alternatively, if one relies on
the timer, which must be adjusted to the receipt time for N and
will not expire until some time after the fraction of N has been
sent, there may be idle time.)
The choice of values for N depends on several factors. First, if the
rate at which DT TPDUs are arriving is relatively low, then there is
not much justification for using a value for N that exceeds 2. On
the other hand, if the DT TPDU arrival rate is high or the TPDUs
arrive in large groups (e.g., in a frame from a satellite link), then
it may be reasonable to use a larger value for N, simply to avoid the
overhead of generating and sending the acknowledgements while
processing the DT TPDUs. Second, the value of N should be related to
the maximum credit to be offered. Letting C be the maximum credit to
be offered, one should choose N < C/2, since the receipt of C TPDUs
without acknowledging will provoke sending one in any case. However,
since the extended formats option for transport provides max C =
2**16 - 1, a choice of N = 2**15 - 2 is likely to cause some of the
sender's retransmission timers to expire. Since the retransmitted
TPDUs will arrive out of sequence, they will provoke the sending of
AK TPDUs. Thus, not much is gained by using a large N. A better
choice is N = log C (base 2). Third, the value of N should be related
to the maximum TPDU size used on the connection and the overall
buffer management. For example, the buffer management may be tied to
the largest TPDU that any connection will use, with each connection
managing the actual way in which the negotiated TPDU size relates to
this buffer size. In such case, if a connection has negotiated a
maximum TPDU size of 128 octets and the buffers are 2048 octets, it
may provide better management to partially fill a buffer before
acknowledging. If the example connection has two buffers and has
based offered credit on this, then one choice for N could be 2*log(
2048/128 ) = 8. This would mean that an AK TPDU would be sent when a
buffer is half filled ( 2048/128 = 16 ), and a double buffering
scheme used to manage the use of the two buffers.
There are two studies which indicate that, in many cases, 2 is a good
choice for N [COL85, BRI85]. The increased granularity in buffer
management is reasonably small when compared to the credit
allocation, which ranges from 8K to 120K octets in the studies cited.
The benefit is that the number of acknowledgements generated (and
consumed) is cut approximately in half.
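Gathering these rules of thumb into one place, a selection function
for N might look like the following sketch (which assumes a maximum
credit of at least eight):

#include <math.h>

/* pick the Nth-acknowledgement parameter from the maximum credit C */
int choose_n( int max_credit )
{
    int n = (int)( log( (double)max_credit )/log( 2.0 ) );  /* log C, base 2 */

    if ( n >= max_credit/2 )
        n = max_credit/2 - 1;   /* keep N < C/2 */
    return ( n < 2 ) ? 2 : n;   /* the cited studies favor N = 2 as a floor */
}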
8.2.4.1.3 Selective acknowledgement.
Selective acknowledgement is an option that allows misordered data
messages to be confirmed even in the presence of gaps in the received
message sequence. (Note that selective acknowledgement is only
meaningful when caching out-of-order data messages.) The advantage to
using this mechanism is that it greatly reduces the number of
unnecessary retransmissions, thus saving both computing time and
transmission bandwidth [COL85] (see the discussion in Part 8.2.4.1.1
for more details).
8.2.4.2 Flow control confirmation and fast retransmission.
Flow control confirmation (FCC) is a mechanism of the transport
protocol whereby acknowledgement messages containing critical flow
control information are confirmed. The critical acknowledgement
messages are those that open a closed flow control window and
certain ones that occur subsequent to a credit reduction. In
principle, if these critical messages are lost, proper
resynchronization of the flow control relies on the window timer,
which is generally of relatively long duration. In order to reduce
delay in resynchronizing the flow control, the receiving entity can
repeatedly send, within short intervals, AK TPDUs carrying a request
for confirmation of the flow control state, a procedure known as
"fast" retransmission (of the acknowledgement). If the sender
responds with an AK TPDU carrying an FCC parameter, fast
retransmission is halted. If no AK TPDU carrying the FCC parameter
is received, the fast retransmission halts after having reached a
maximum number of retransmissions, and the window timer resumes
control of AK TPDU transmission. It should be noted that FCC is an
optional mechanism of transport and the data sender is not required
to respond to a request for confirmation of the flow control state
with an AK TPDU carrying the FCC parameter.
Some considerations for deciding whether or not to use FCC and fast
retransmission procedures are as follows:
1) likelihood of credit reduction on a given transport connection;
2) probability of TPDU loss;
3) expected window timer period;
4) window size; and
5) acknowledgement strategy.
At this time, there is no reported experience with using FCC and fast
retransmission. Thus, it is not known whether or not the procedures
produce sufficient reduction of resynchronization delay to warrant
their use.
When implementing fast retransmission, it is suggested that the timer
used for the window timer be employed as the fast timer, since the
window is disabled during fast retransmission in any case. This will
avoid having to manage another timer. The formal description
expressed the fast retransmission timer as a separate timer for
clarity.
8.2.4.3 Concatenation of acknowledgement and data.
When full duplex communication is being operated by two transport
entities, data and acknowledgement TPDUs from each one of the
entities travel in the same direction. The transport protocol
permits concatenating AK TPDUs in the same NSDU as a DT TPDU. The
advantage of using this feature in an implementation is that fewer
NSDUs will be transmitted, and, consequently, fewer total octets will
be sent, due to the reduced number of network headers transmitted.
However, when operating over the IP, this advantage may not
necessarily be recognized, due to the possible fragmentation of the
NSDU by the IP. A careful analysis of the treatment of the NSDU in
internetwork environments should be done to determine whether or not
concatenation of TPDUs is of sufficient benefit to justify its use in
an implementation.
8.2.5 Retransmission policies.
There are primarily two retransmission policies that can be
employed in a transport implementation. In the first of these, a
separate retransmission timer is initiated for each data message
sent by the transport entity. At first glance, this approach appears
to be simple and straightforward to implement. The deficiency of
this scheme is that it is inefficient. This derives from two
sources. First, for each data message transmitted, a timer must be
initiated and cancelled, which consumes a significant amount of CPU
processing time [BRI85]. Second, as the list of outstanding
timers grows, management of the list also becomes increasingly
expensive. There are techniques which make list management more
efficient, such as a list per connection and hashing, but
implementing a policy of one retransmission timer per transport
connection is a superior choice.
The second retransmission policy, implementing one retransmission
timer for each transport connection, avoids some of the
inefficiencies cited above: the list of outstanding timers is
shorter by approximately an order of magnitude. However, if the
entity receiving the data is generating an acknowledgement for
every data message, the timer must still be cancelled and restarted
for each data/acknowledgement message pair (this is an additional
impetus for implementing an Nth acknowledgement policy with N=2).
The rules governing the single timer per connection scheme are as
follows (a sketch in C appears after the list):
1) If a data message is transmitted and the
retransmission timer for the connection is not
already running, the timer is started.
2) If an acknowledgement for previously unacknowledged
data is received, the retransmission timer is restarted.
3) If an acknowledgement message is received for the
last outstanding data message on the connection
then the timer is cancelled.
4) If the retransmission timer expires, one or more
unacknowledged data messages are retransmitted,
beginning with the one sent earliest. (Two
reports [HEA85, BRI85] suggest that the number
to retransmit is one.)
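The sketch below renders these four rules in C; the queue handling,
timer services, and names are hypothetical scaffolding rather than
part of any standard:

#include <stddef.h>

struct tpdu;                        /* an unacknowledged DT TPDU */

struct tconn {
    struct tpdu *oldest_unacked;    /* head of sent-but-unacknowledged list */
    int          timer_running;
};

extern void start_timer( struct tconn *c );   /* timer service stubs */
extern void stop_timer( struct tconn *c );
extern void retransmit( struct tconn *c, struct tpdu *t );

void on_data_sent( struct tconn *c )          /* rule 1 */
{
    if ( !c->timer_running ) {
        start_timer( c );
        c->timer_running = 1;
    }
}

void on_ack_received( struct tconn *c, int acked_new_data )  /* rules 2, 3 */
{
    if ( c->oldest_unacked == NULL ) {
        stop_timer( c );
        c->timer_running = 0;
    }
    else if ( acked_new_data )
        start_timer( c );           /* restart for remaining data */
}

void on_timer_expiry( struct tconn *c )       /* rule 4 */
{
    retransmit( c, c->oldest_unacked );  /* the cited reports suggest one */
    start_timer( c );
}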
8.3 Protocol control.
8.3.1 Retransmission timer values.
8.3.1.1 Data retransmission timer.
The value for the retransmission timer may have a significant impact on
the performance of the transport protocol [COL85]. However,
determining the proper value to use is sometimes difficult.
According to IS 8073, the value for the timer is computed using the
transit delays, Erl and Elr, the acknowledgement delay, Ar, and the
local TPDU processing time, x:
T1 = Erl + Elr + Ar + x
The difficulty in arriving at a good retransmission timer value is
directly related to the variability of these factors. Of these,
Erl and Elr are the most susceptible to variation, and therefore have
the most impact on determining a good timer value. The
following paragraphs discuss methods for choosing retransmission
timer values that are appropriate in several network environments.
In a single-hop satellite environment, network delay (Erl or Elr) has
small variance because of the constant propagation delay of about 270
ms., which overshadows the other components of network delay.
Consequently, a fixed retransmission timer provides good performance.
For example, for a 64K bit/sec. link speed and network queue size
of four, 650 ms. provides good performance [COL85].
Local area networks also have constant propagation delay.
However, propagation delay is a relatively unimportant factor in
total network delay for a local area network. Medium access delay
and queuing delay are the significant components of network delay,
and (Ar + x) also plays a significant role in determining an
appropriate retransmission timer. From the discussion presented
earlier, typical numbers for (Ar + x) are on the order of 5 - 6.5
ms and for Erl or Elr, 5 - 35 ms. Consequently, a reasonable value
for the retransmission timer is 100 ms. This value works well for
local area networks, according to one cited report [INT85] and
simulation work performed at the NBS.
For better performance in an environment with long propagation
delays and significant variance, such as an internetwork, an
adaptive algorithm is preferred, such as the one suggested for TCP/IP
[ISI81]. As analyzed by Jain [JAI85], the algorithm uses an
exponential averaging scheme to derive a round trip delay estimate:
D(i) = b * D(i-1) + (1-b) * S(i)
where D(i) is the update of the delay estimate, S(i) is the sample
round trip time measured between transmission of a given packet and
receipt of its acknowledgement, and b is a weighting factor
between 0 and 1, usually 0.5. The retransmission timer is
expressed as some multiplier, k, of D. Small values of k cause
quick detection of lost packets, but result in a higher number of
false timeouts and, therefore, unnecessary retransmissions. In
addition, the retransmission timer should be increased
arbitrarily for each case of multiple transmissions; an exponential
increase is suggested, such that
D(i) = c * D(i-1)
where c is a dimensionless parameter greater than one.
The remaining parameter for the adaptive algorithm is the initial
delay estimate, D(0). It is preferable to choose a slightly
larger value than needed, so that unnecessary retransmissions do
not occur at the beginning. One possibility is to measure the round
trip delay during connection establishment. In any case, the
timer converges except under conditions of sustained congestion.
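The averaging and backoff rules above translate directly into code;
the following sketch keeps the estimate in a small per-connection
structure (the parameter values shown are the text's suggestions,
not requirements):

struct rto {
    double d;   /* smoothed round trip delay estimate D(i) */
    double b;   /* weighting factor, e.g. 0.5 */
    double k;   /* timer multiplier */
    double c;   /* backoff factor, greater than one */
};

void rto_sample( struct rto *r, double s )   /* new round trip sample S(i) */
{
    r->d = r->b*r->d + (1.0 - r->b)*s;
}

void rto_backoff( struct rto *r )            /* after retransmitting a TPDU */
{
    r->d = r->c*r->d;
}

double rto_timer( const struct rto *r )      /* retransmission timer value */
{
    return r->k*r->d;
}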
8.3.1.2 Expedited data retransmission timer.
The timer which governs retransmission of expedited data should
be set using the normal data retransmission timer value.
8.3.1.3 Connect-request/confirm retransmission timer.
Connect request and confirm messages are subject to Erl + Elr,
total network delay, plus processing time at the receiving
transport entity, if these values are known. If an accurate estimate
of the round trip time is not known, two views can be espoused in
choosing the value for this timer. First, since this timer
governs connection establishment, it is desirable to minimize delay
and so a small value can be chosen, possibly resulting in unnecessary
retransmissions. Alternatively, a larger value can be used, reducing
the possibility of unnecessary retransmissions, but resulting in
longer delay in connection establishment should the connect request
or confirm message be lost. The choice between these two views is
dictated largely by local requirements.
8.3.1.4 Disconnect-request retransmission timer.
The timer which governs retransmission of the disconnect request
message should be set from the normal data retransmission timer
value.
8.3.1.5 Fast retransmission timer.
The fast retransmission timer causes critical acknowledgement
messages to be retransmitted, avoiding delay in resynchronizing
credit. This timer should be set to approximately Erl + Elr.
8.3.2 Maximum number of retransmissions.
This transport parameter determines the maximum number of times a
data message will be retransmitted. A typical value is eight. If
monitoring of network service is performed then this value can be
adjusted according to observed error rates. As a high error rate
implies a high probability of TPDU loss, when it is desirable to
continue sending despite the decline in quality of service, the
number of TPDU retransmissions (N) should be increased and the
retransmission interval (T1) reduced.
8.4 Selection of maximum Transport Protocol data unit size.
The choice of maximum size for TPDUs in negotiation proposals depends
on the application to be served and the service quality of the
supporting network. In general, an application which produces large
TSDUs should use as large TPDUs as can be negotiated, to reduce the
overhead due to a large number of small TPDUs. An application which
produces small TSDUs should not be affected by the choice of a large
maximum TPDU size, since a TPDU need not be filled to the maximum
size to be sent. Consequently, applications such as file transfers
would need larger TPDUs while terminals would not. On a high
bandwidth network service, large TPDUs give better channel
utilization than do smaller ones. However, when error rates are
high, the likelihood for a given TPDU to be damaged is correlated to
the size and the frequency of the TPDUs. Thus, smaller TPDU size in
the condition of high error rates will yield a smaller probability
that any particular TPDU will be lost.
The implementor must choose whether or not to apply a uniform maximum
TPDU size to all connections. If the network service is uniform in
service quality, then the selection of a uniform maximum can simplify
the implementation. However, if the network quality is not uniform
and it is desirable to optimize the service provided to the transport
user as much as possible, then it may be better to determine the
maximum size on an individual connection basis. This can be done at
the time of the network service access if the characteristics of the
subnetwork are known.
NOTE: The maximum TPDU size is important in the calculation of the
flow control credit, which is in numbers of TPDUs offered. If buffer
space is granted on an octet basis, then credit must be granted as
buffer space divided by maximum TPDU size. Use of a smaller TPDU
size can be equivalent to optimistic credit allocation and can lead
to the expected problems, if proper analysis of the management is not
done.
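In code, the note's prescription is integer division of the granted
octets by the negotiated maximum TPDU size (a sketch):

/* credit to offer when buffer space is granted in octets */
int credit_from_buffers( long buffer_octets, int max_tpdu_size )
{
    return (int)( buffer_octets/max_tpdu_size );  /* round down; never
                                                     promise a partial
                                                     TPDU's worth */
}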
9 Special options.
Special options may be obtained by taking advantage of the manner in
which IS 8073 and N3756 have been written. It must be emphasized
that these options in no way violate the intentions of the standards
bodies that produced the standards. Flexibility was deliberately
written into the standards to ensure that they do not constrain
applicability to a wide variety of situations.
9.1 Negotiations.
The negotiation procedures in IS 8073 have deliberate ambiguities in
them to permit flexibility of usage within closed groups of
communicants (the standard defines explicitly only the behavior among
open communicants). A closed group of communicants in an open system
is one which, by reason of organization, security or other special
needs, carries on certain communication among its members which is
not of interest or not accessible to other open system members.
Examples of some closed groups within DOD might be: an Air Force
Command, such as the SAC; a Navy base or an Army post; a ship;
Defense Intelligence; Joint Chiefs of Staff. Use of this
characteristic does not constitute standard behavior, but it does not
violate conformance to the standard, since the effects of such usage
are not visible to non-members of the closed group. Using the
procedures in this way permits options not provided by the standard.
Such options might permit, for example, carrying special protection
codes on protocol data units or for identifying DT TPDUs as carrying
a particular kind of message.
Standard negotiation procedures state that any parameter in a
received CR TPDU that is not defined by the standard shall be
ignored. This defines only the behavior that is to be exhibited
between two open systems. It does not say that an implementation
which recognizes such non-standard parameters shall not be operated
in networks supporting open systems interconnection. Further, any
other type TPDU containing non-standard parameters is to be treated
as a protocol error when received. The presumption here is that the
non-standard parameter is not recognized, since it has not been
defined. Now consider the following example:
Entity A sends Entity B a CR TPDU containing a non-standard
parameter.
Entity B has been implemented to recognize the non-standard parameter
and to interpret its presence to mean that Entity A will be sending
DT TPDUs to Entity B with a special protection identifier parameter.
Entity B sends a CC TPDU containing the non-standard parameter to
indicate to Entity A that it has received and understood the
parameter, and is prepared to receive the specially marked DT TPDUs
from Entity A. Since Entity A originally sent the non-standard
parameter, it recognizes the parameter in the CC TPDU and does not
treat it as a protocol error.
Entity A may now send the specially marked DT TPDUs to Entity B and
Entity B will not reject them as protocol errors.
Note that Entity B sends a CC TPDU with the non-standard parameter
only if it receives a CR TPDU containing the parameter, so that it
does not create a protocol error for an initiating entity that does
not use the parameter. Note also that if Entity B had not recognized
the parameter in the CR TPDU, it would have ignored it and not
returned a CC TPDU containing the parameter. This non-standard
behavior is clearly invisible and inaccessible to Transport entities
outside the closed group that has chosen to implement it, since they
are incapable of distinguishing it from errors in protocol.
9.2 Recovery from peer deactivation.
Transport does not directly support the recovery of the transport
connection from a crashed remote transport entity. A partial
recovery is possible, given proper interpretation of the state tables
in Annex A to IS 8073 and implementation design. The interpretation
of the Class 4 state tables necessary to effect this operation is as
follows:
a. Whenever a CR TPDU is received in the state OPEN, the entity is
required only to record the new network connection and to reset the
inactivity timer. Thus, if the initiator of the original connection
is the peer which crashed, it may send a new CR TPDU to the surviving
peer, somehow communicating to it the original reference numbers
(there are several ways that this can be done).
b. Whenever a CC TPDU is received in the
state OPEN, the receiver is required only to record the new network
connection, reset the inactivity timer and send either an AK, DT or
ED TPDU. Thus, if the responder for the original connection is the
peer which crashed, it may send a new CC TPDU to the surviving peer,
communicating to it the original reference numbers.
In order for this procedure to operate properly, the situation in a.,
above, requires a CC TPDU to be sent in response. This could be the
original CC TPDU that was sent, except for new reference numbers.
The original initiator will have sent a new reference number in the
new CR TPDU, so this would go directly into the CC TPDU to be
returned. The new reference number for the responder could just be a
new assignment, with the old reference number frozen. In the
situation in b., the originator could retain its reference number (or
assign a new one if necessary), since the CC TPDU should carry both
old reference numbers and a new one for the responder (see below).
In either situation, only the new reference numbers need be extracted
from the CR/CC TPDUs, since the options and parameters will have been
previously negotiated. This procedure evidently requires that the CR
and CC TPDUs of each connection be stored by the peers in nonvolatile
memory, plus particulars of the negotiations.
To transfer the new reference numbers, it is suggested that a new
parameter in the CR and CC TPDU be defined, as in Part 9.1, above.
This parameter could also carry the state of data transfer, to aid in
resynchronizing, in the following form (one possible layout is
sketched after the list):
1) the last DT sequence number received by the peer that crashed;
2) the last DT sequence number sent by the peer that crashed;
3) the credit last extended by the peer that crashed;
4) the last credit perceived as offered by the surviving peer;
5) the next DT sequence number the peer that crashed expects to
send (this may not be the same as the last one sent, if the last
one sent was never acknowledged);
6) the sequence number of an unacknowledged ED TPDU, if any;
7) the normal data sequence number corresponding to the
transmission of an unacknowledged ED TPDU, if any (this is to
ensure the proper ordering of the ED TPDU in the normal data
flow).
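One possible layout for such a parameter is sketched below; the
fields mirror items 1) through 7), and the names and field widths are
hypothetical rather than drawn from IS 8073:

struct recovery_param {
    unsigned short dt_received;  /* 1) last DT seq. received by crashed peer */
    unsigned short dt_sent;      /* 2) last DT seq. sent by crashed peer */
    unsigned short credit_given; /* 3) credit last extended by crashed peer */
    unsigned short credit_seen;  /* 4) credit last seen from surviving peer */
    unsigned short dt_next;      /* 5) next DT seq. the crashed peer expects
                                       to send */
    unsigned short ed_seq;       /* 6) seq. of unacknowledged ED TPDU, if any */
    unsigned short ed_position;  /* 7) DT seq. that orders the ED TPDU in
                                       the normal data flow */
};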
A number of other considerations must be taken into account when
attempting data transfer resynchronization. First, the recovery will
be greatly complicated if subsequencing or flow control confirmation
is in effect when the crash occurs. Careful analysis should be done
to determine whether or not these features provide sufficient benefit
to warrant their inclusion in a survivable system. Second,
non-volatile storage of TPDUs which are unacknowledged must be used
in order that data loss at the time of recovery can be minimized.
Third, the values for the retransmission timers for the communicating
peers must allow sufficient time for the recovery to be attempted.
This may result in longer delays in retransmitting when TPDUs are
lost under normal conditions. One way that this might be achieved is
for the peers to exchange in the original CR/CC TPDU exchange, their
expected lower bounds for the retransmission timers, following the
procedure in Part 9.1. In this manner, the peer that crashed may
determine whether or not a new connection should be attempted. Fourth,
while the recovery involves directly only the transport peers when
operating over a connectionless network service, recovery when
operating over a connection-oriented network service requires some
sort of agreement as to when a new network connection is to be
established (if necessary) and which peer is responsible for doing
it. This is required to ensure that unnecessary network
connections are not opened as a result of the recovery. Splitting
network connections may help to ameliorate this problem.
9.3 Selection of transport connection reference numbers.
In N3756, when the reference wait period for a connection begins, the
resources associated with the connection are released and the
reference number is placed in a set of frozen references. A timer
associated with this number is started, and when it expires, the
number is removed from the set. A function which chooses reference
numbers checks this set before assigning the next reference number.
If it is desired to provide a much longer period by the use of a
large reference number space, this need can be met by replacing the
implementation dependent function "select_local_ref" (page TPE-17 of
N3756) by the following code:
function select_local_ref : reference_type;
begin
    last_ref := (last_ref mod N) + 1;    { cycle through 1..N }
    while last_ref in frozen_ref[class_4] do
        last_ref := (last_ref mod N) + 1;
    select_local_ref := last_ref
end;
where "last_ref" is a new variable to be defined in declarations
(pages TPE-10 - TPE-11), used to keep track of the last reference
value assigned, and N is the length of the reference number cycle,
which cannot exceed 2**16 - 1 since the reference number fields in
TPDUs are restricted to 16 bits in length.
9.4 Obtaining Class 2 operation from a Class 4 implementation.
The operation of Class 4 as described in IS 8073 logically contains
that of the Class 2 protocol. The formal description, however, is
written assuming Class 4 and Class 2 to be distinct. This was done
because the description must reflect the conformance statement of IS
8073, which provides that Class 2 alone may be implemented.
However, Class 2 operation can be obtained from a Class 4
implementation, which would yield the advantages of lower complexity,
smaller memory requirements, and lower implementation costs as
compared to implementing the classes separately. The implementor
will have to make the following provisions in the transport entity
and the Class 4 transport machine to realize Class 2 operation.
1) Disable all timers. In the formal description, all Class 4
timers except the reference timer are in the Class 4 TPM.
These timers can be designed at the outset to be enabled or
not at the instantiation of the TPM. The reference timer is
in the Transport Entity module (TPE) and is activated by the
TPE recognizing that the TPM has set its "please_kill_me"
variable to "freeze". If the TPM sets this variable instead
to "now", the reference timer for that transport connection is
never started. However, IS 8073 provides that the reference
timer can be used, as a local entity management decision, for
Class 2 as well.
The above procedure should be used when negotiating from Class
4 to Class 2. If Class 2 is proposed as the preferred class,
then it is advisable to not disable the inactivity timer, to
avoid the possibility of deadlock during connection
establishment if the peer entity never responds to the CR
TPDU. The inactivity timer should be set when the CR TPDU is
sent and deactivated when the CC TPDU is received.
2) Disable checksums. This can be done simply by ensuring that
the boolean variable "use_checksums" is always set to "false"
whenever Class 2 is to be proposed or negotiated.
3) Never permit flow control credit reduction. The formal
description makes flow control credit management a function of
the TPE operations and such management is not reflected in the
operation of the TPM. Thus, this provision may be handled by
always making the "credit-granting" mechanism aware of the
class of the TPM being served.
4) Include Class 2 reaction to network service events. The Class
4 handling of network service events is more flexible than
that of Class 2 to provide the recovery behavior
characteristic of Class 4. Thus, an option should be provided
on the handling of N_DISCONNECT_indication and
N_RESET_indication for Class 2 operation. This consists of
sending a T_DISCONNECT_indication to the Transport User,
setting "please_kill_me" to "now" (optionally to "freeze"),
and transitioning to the CLOSED state, for both events. (The
Class 4 action in the case of the N_DISCONNECT is to remove
the network connection from the set of those associated with
the transport connection and to attempt to obtain a new
network connection if the set becomes empty. The action on
receipt of the N_RESET is to do nothing, since the TPE has
already issued the N_RESET_response.)
5) Ensure that TPDU parameters conform to Class 2. This implies
that subsequence numbers should not be used on AK TPDUs, and
no flow control confirmation parameters should ever appear in
an AK TPDU. The checksum parameter is prevented from
appearing by the "false" value of the "use_checksums"
variable. (The acknowledgement time parameter in the CR and
CC TPDUs will not be used, by virtue of the negotiation
procedure. No special assurance for its non-use is needed.)
The TPE management of network connections should see to it
that splitting is never attempted with Class 4 TPMs running as
Class 2. The handling of multiplexing is the same for both
classes, but it is not good practice to multiplex Class 4 and
Class 2 together on the same network connection.
[BRI85] Bricker, A., L. Landweber, T. Lebeck, M. Vernon,
"ISO Transport Protocol Experiments," Draft Report
prepared by DLS Associates for the Mitre Corporation,
[COL85] Colella, Richard, Marnie Wheatley, Kevin Mills,
"COMSAT/NBS Experiment Plan for Transport Protocol,"
NBS, Report No. NBSIR 85-3141, May 1985.
[CHK85] Chernik, C. Michael, "An NBS Host to Front End
Protocol," NBSIR 85-3236, August 1985.
[CHO85] Chong, H.Y., "Software Development and Implementation
of NBS Class 4 Transport Protocol," October 1985
(available from the author).
[HEA85] Heatley, Sharon, Richard Colella, "Experiment Plan:
ISO Transport Over IEEE 802.3 Local Area Network,"
NBS, Draft Report (available from the authors).
[INT85] "Performance Comparison Between 186/51 and 552,"
The Intel Corporation, Reference No. COM,08, January 1985.
[ISO84a] IS 8073 Information Processing - Open Systems
Interconnection - Transport Protocol Specification,
available from ISO TC97/SC6 Secretariat, ANSI,
1430 Broadway, New York, NY 10018.
[ISO84b] IS 7498 Information Processing - Open Systems
Interconnection - Basic Reference Model, available
from ANSI, address above.
[ISO85a] DP 9074 Estelle - A Formal Description Technique
Based on an Extended State Transition Model,
available from ISO TC97/SC21 Secretariat, ANSI,
address above.
[ISO85b] N3756 Information Processing - Open Systems
Interconnection - Formal Description of IS 8073
in Estelle. (Working Draft, ISO TC97/SC6)
[ISO85c] N3279 Information Processing - Open Systems
Interconnection - DAD1, Draft Addendum to IS 8073
to Provide a Network Connection Management
Service, ISO TC97/SC6 N3279, available from
SC6 Secretariat, ANSI, address above.
[JAI85] Jain, Rajendra K., "CUTE: A Timeout Based Congestion
Control Scheme for Digital Network Architecture,"
Digital Equipment Corporation (available from the
author), March 1985.
[LIN85] Linn, R.J., "The Features and Facilities of Estelle,"
Proceedings of the IFIP WG 6.1 Fifth International
Workshop on Protocol Specification, Testing and
Verification, North Holland Publishing, Amsterdam, 1985.
[MIL85a] Mills, Kevin L., Marnie Wheatley, Sharon Heatley,
"Predicting Transport Protocol Performance",
[MIL85b] Mills, Kevin L., Jeff Gura, C. Michael Chernik,
"Performance Measurement of OSI Class 4 Transport
Implementations," NBSIR 85-3104, January 1985.
[NAK85] Nakassis, Anastase, "Fletcher's Error Detection
Algorithm: How to Implement It Efficiently and
How to Avoid the Most Common Pitfalls," NBS,
[NBS83] "Specification of a Transport Protocol for
Computer Communications, Volume 3: Class 4
Protocol," February 1983 (available from
the National Technical Information Service).
[NTA84] Hvinden, Oyvind, "NBS Class 4 Transport Protocol,
UNIX 4.2 BSD Implementation and User Interface
Description," Norwegian Telecommunications
Administration Establishment, Technical Report
No. 84-4053, December 1984.
[NTI82] "User-Oriented Performance Measurements on the
ARPANET: The Testing of a Proposed Federal
Standard," NTIA Report 82-112 (available from
NTIA, Boulder CO).
[NTI85] "The OSI Network Layer Addressing Scheme, Its
Implications, and Considerations for Implementation",
NTIA Report 85-186 (available from NTIA, Boulder CO).
[RFC85] Mills, David, "Internet Delay Experiments," RFC889,
December 1983 (available from the Network Information
Center).
[SPI82] Spirn, Jeffery R., "Network Modeling with Bursty
Traffic and Finite Buffer Space," Performance
Evaluation Review, vol. 2, no. 1, April 1982.
[SPI84] Spirn, Jeffery R., Jade Chien, William Hawe,
"Bursty Traffic Local Area Network Modeling,"
IEEE Journal on Selected Areas in Communications,
vol. SAC-2, no. 1, January 1984.