15-6-2020
Computer Networks
The Medium Access Control Sublayer
Miguel Ángel Vecina García
COMMUNICATIONS AND INTERNET ARCHITECTURES
1. Introduction
Network links can be divided into two categories: those using point-to-point connections
and those using broadcast channels. This project deals with broadcast links and their
protocols. In the literature, broadcast channels are sometimes referred to as multiaccess
channels or random-access channels. The protocols used to determine who goes next on a
multiaccess channel belong to a sublayer of the data link layer called the MAC (Medium Access
Control) sublayer. The MAC sublayer is especially important in LANs, particularly wireless ones,
because wireless is naturally a broadcast channel.
2. The Medium Access Control Sublayer
2.1. The Channel Allocation Problem
The central theme of this chapter is how to allocate a single broadcast channel among
competing users. The channel might be a portion of the wireless spectrum in a geographic
region, or a single wire or optical fiber to which multiple nodes are connected. In both cases, the
channel connects each user to all other users and any user who makes full use of the channel
interferes with other users who also wish to use the channel.
2.1.1. Static Channel Allocation
A traditional way of solving the problem is frequency division multiplexing (FDM). If
there are N users, the bandwidth is divided into N parts of the same size and each user is
assigned one of those parts. Since each user has its own channel, the problem of simultaneous
access to the medium no longer exists. This mechanism is simple and efficient when the number
of users is small and fixed and each of them has a steady, heavy traffic load.
In other circumstances, FDM presents some problems. Dividing a channel into N static
sub-channels is inherently inefficient: whenever some users are inactive, part of the total
capacity of the channel is wasted. It is also a very rigid scheme in the face of variations in the
number of users on the network. In computer networks, traffic usually comes in bursts, and as a
result, most of the channels are idle most of the time.
The same argument can be made for time division multiplexing. Each user is
statically assigned the i-th time slot, and if it is not used, it is simply lost. Clearly,
dynamic channel assignment mechanisms are necessary.
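To see why static allocation hurts, a standard queueing argument (not developed further in this project, and based on an assumed M/M/1 model) shows that splitting one channel of capacity C into N fixed subchannels multiplies the mean frame delay by N. The following Python sketch, with purely illustrative parameter values, reproduces that comparison.

# A minimal sketch (assumed M/M/1 model, illustrative numbers only) of why static
# FDM performs poorly: each subchannel gets 1/N of the capacity and 1/N of the load,
# so the mean delay grows by a factor of N compared with dynamic sharing.

def mean_delay_single_channel(lam, mu, C):
    """Mean delay T = 1 / (mu*C - lam) for one shared channel of capacity C bps,
    arrival rate lam frames/sec and mean frame length 1/mu bits."""
    return 1.0 / (mu * C - lam)

def mean_delay_fdm(lam, mu, C, N):
    """Each of the N subchannels gets capacity C/N and arrival rate lam/N."""
    return 1.0 / (mu * (C / N) - (lam / N))   # = N / (mu*C - lam)

if __name__ == "__main__":
    C, mu, lam, N = 100e6, 1 / 10000, 5000.0, 10   # illustrative values only
    print(mean_delay_single_channel(lam, mu, C))   # dynamic sharing of the full channel
    print(mean_delay_fdm(lam, mu, C, N))           # N times larger with static FDM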
2.1.2. Assumptions for Dynamic Channel Allocation
Underlying all the work done in this area are the following five key assumptions:
1. Independent Traffic . The model consists of N independent stations (e.g., computers,
telephones), each with a program or user that generates frames for transmission. Once
a frame has been generated, the station is blocked and does nothing until the frame has
been successfully transmitted.
2. Single Channel . A single channel is available for all communication. All stations can
transmit on it and all can receive from it.
3. Observable Collisions . If two frames are transmitted simultaneously, they overlap in
time and the resulting signal is garbled. This event is called a collision . A collided frame
must be transmitted again later.
4. Continuous or Slotted Time. Time may be assumed continuous, in which case frame
transmission can begin at any instant. Alternatively, time may be slotted or divided into
discrete intervals (called slots). Frame transmissions must then begin at the start of a
slot. A slot may contain 0, 1, or more frames, corresponding to an idle slot, a successful
transmission, or a collision, respectively.
5. Carrier Sense or No Carrier Sense . With the carrier sense assumption, stations can tell
if the channel is in use before trying to use it. No station will attempt to use the channel
while it is sensed as busy. If there is no carrier sense, stations cannot sense the channel
before trying to use it. They just go ahead and transmit. Only later can they determine
whether the transmission was successful.
2.2. Multiple Access Protocols
2.2.1. ALOHA
ALOHA was a pioneering computer networking system developed at the University of Hawaii
and first deployed in 1970 by Norman Abramson and his colleagues. Although the network
itself is no longer in use, it was built to allow people in different locations to access the main
computer systems using packet radio: there was a main node and a series of secondary
nodes located on the different islands of the archipelago. ALOHA is therefore based on
using a shared medium for transmission, where the same frequency is used by all the nodes.
The ALOHA scheme was very simple: since data was sent via teletype, the transmission rate
normally did not exceed 80 characters per second. When two stations tried to transmit at the
same time, both transmissions were garbled and the data had to be resent, which implied
the need for some kind of system to control who could transmit and at what time.
• Pure ALOHA
Given the distribution of the nodes across the islands and the main node, it was decided
that the different stations would share the same channel without
worrying about whether it was free or not. When a station wished to transmit, it would simply
broadcast a frame. Once the transmission was completed, it waited for confirmation that the
information had been correctly received by the recipient. If no confirmation was received after
a certain time, the transmitter assumed that a collision had occurred (remember that a collision
occurs when two or more stations put information onto the channel at the same time, resulting
in the invalidation of all the frames that were affected), so it waited a random time and then
resent the frame.
The main problem of the protocol is that the nodes send frames without any coordination,
and it is enough for two frames to collide or overlap in a single bit for both to be useless and
need retransmission, since the nodes only notice the problem after the transmission has
finished. On top of that, the second frame could collide with a third one and so on; the number
of collisions increases in a non-linear way and the performance degrades quickly. ALOHA's
maximum throughput is 18.4%, achieved at an offered load of 50%, which means that
81.6% of the total available bandwidth is wasted, basically because stations try to
transmit at the same time.
• Slotted ALOHA
To improve the performance of ALOHA, slotted ALOHA was proposed by Roberts in 1972,
with the only difference that the nodes can transmit only at certain moments in time, or
slots. This synchronism means that when a terminal wants to transmit, it must wait until the
beginning of the next slot to do so, and as a result the number of collisions is lower than in
pure ALOHA. However, this does not mean that collisions do not occur: two stations may want
to transmit, wait until the next slot, and produce a collision; they try again, and there is another
collision; only afterwards do both stations manage to transmit successfully. Even so, the number
of collisions produced is smaller than with pure ALOHA, where transmissions that succeed here
would have collided. This small change doubles the maximum throughput, raising it to 36.8%.
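The throughput figures quoted above (18.4% and 36.8%) come from the classic formulas S = G·e^(-2G) for pure ALOHA and S = G·e^(-G) for slotted ALOHA, where G is the offered load; the formulas themselves are standard results rather than something derived in this text. A short Python sketch evaluating them at their maxima:

import math

def pure_aloha_throughput(G):
    """Pure ALOHA: S = G * e^(-2G); the vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """Slotted ALOHA: S = G * e^(-G); the vulnerable period shrinks to one slot."""
    return G * math.exp(-G)

if __name__ == "__main__":
    # Maxima quoted in the text: 18.4% at G = 0.5 and 36.8% at G = 1.0.
    print(f"Pure ALOHA    max S = {pure_aloha_throughput(0.5):.3f}")   # ~0.184
    print(f"Slotted ALOHA max S = {slotted_aloha_throughput(1.0):.3f}") # ~0.368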
2.2.2. Carrier Sense Multiple Access Protocols
Protocols in which stations listen for a carrier (i.e., a transmission) and act accordingly are
called carrier sense protocols. Several of them have been proposed.
• Persistent and Nonpersistent CSMA
The first carrier sense protocol that we will study here is called 1-persistent CSMA (Carrier
Sense Multiple Access). The 1-persistent CSMA protocol works as follows: when a station has to
transmit a frame, it first listens to the channel; if it is free, it sends the frame, otherwise it waits
until the channel is free and then sends it. It is called 1-persistent because there is probability 1,
i.e. certainty, that the frame will be transmitted as soon as the channel is free. In a real situation
with high traffic it is very possible that when a node finishes transmitting there are several
waiting to send their data, and with 1-persistent CSMA all those frames will be emitted at the
same time and will collide; the process may repeat several times with the consequent
degradation of performance. A collision will occur even if the stations do not start transmitting
exactly at the same time: it is enough that two nodes start transmitting with a time difference
smaller than the propagation delay between them, since in that case both will detect a free
channel when they start their transmissions. It follows that in this type of network the signal
propagation delay can have a significant effect on performance. The throughput obtained with
this protocol can reach about 55% at an offered load of 100%.
A second carrier sense protocol is nonpersistent CSMA. Before sending, the channel is
listened to, and if it is free the frame is transmitted. If the channel is busy, instead of continuously
listening, the station waits for a random time given by an algorithm called backoff, after which
the process is repeated. The protocol has a lower efficiency than 1-persistent CSMA under
moderate traffic, because it introduces a higher latency; however, it behaves better in situations
of intense traffic, because it avoids the collisions produced by the stations that are all waiting for
the end of the transmission of a frame at a given moment.
The last protocol is p-persistent CSMA. It uses time slots and works as follows: when a
node has something to send, it listens to the channel first, and if it is busy it waits for a random
time. When the channel is free, a random number is drawn with uniform distribution between 0
and 1; if the number is lower than p, the frame is transmitted. Otherwise, the node waits for the
next time slot and repeats the algorithm until the frame is transmitted or another node uses the
channel, in which case it waits for a random time and starts the process again from the
beginning. The efficiency of the protocol is generally higher than that of 1-persistent and
nonpersistent CSMA.
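As a rough illustration of the p-persistent rule just described, the following Python sketch encodes the per-slot decision a ready station makes; the function name and the returned action labels are illustrative choices, not part of any standard.

import random

def p_persistent_step(channel_idle: bool, p: float) -> str:
    """One decision step of p-persistent CSMA, as described above.

    Returns the action a ready station takes in the current slot. The labels are
    illustrative, not from any real API.
    """
    if not channel_idle:
        return "backoff"                 # busy channel: wait a random time, retry
    if random.random() < p:              # idle channel: transmit with probability p
        return "transmit"
    return "defer_to_next_slot"          # with probability 1 - p, wait one slot

if __name__ == "__main__":
    random.seed(1)
    for slot in range(5):
        print(slot, p_persistent_step(channel_idle=True, p=0.3))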
• CSMA with Collision Detection
A problem with the previous protocols is that once a frame has started to be transmitted it
continues to be transmitted even when a collision is detected. Since it is more efficient to stop
transmitting and wait a random time before trying again, the Carrier Sense Multiple Access
with Collision Detection protocols, or CSMA/CD, implement this improvement.
In a CSMA/CD network, the only circumstance in which a collision can occur is when two
hosts start transmitting at the same time, or with a time difference small enough that the signal
from one has not been able to reach the other before it starts transmitting. In simple words, the
node did not manage to "hear" that another node had already started transmitting, because of
the delay in signal propagation.
This period of time is called the contention period, and it corresponds to one of the three
possible states of a CSMA/CD network; the other two states are transmission and idle.
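The contention period lasts one round-trip propagation time, 2τ, because that is how long it can take a station to learn that its transmission has collided; in classic Ethernet this window is also what fixes the minimum frame length, since a sender must still be transmitting when the collision comes back. The sketch below computes these quantities for illustrative values (2500 m of cable, a propagation speed of about 2x10^8 m/s, 10 Mbps), none of which are stated in this section.

def csma_cd_slot(max_cable_m: float, prop_speed_m_s: float, bit_rate_bps: float):
    """Worst-case collision-detection window 2*tau and the minimum frame size
    needed so a sender is still transmitting when a collision returns."""
    tau = max_cable_m / prop_speed_m_s          # one-way propagation delay
    slot_time = 2 * tau                         # contention slot = round trip
    min_frame_bits = slot_time * bit_rate_bps   # must keep sending for 2*tau
    return tau, slot_time, min_frame_bits

if __name__ == "__main__":
    # Illustrative classic-Ethernet-like numbers: 2500 m, ~2e8 m/s, 10 Mbps.
    tau, slot, bits = csma_cd_slot(2500, 2e8, 10e6)
    print(f"tau = {tau*1e6:.1f} us, 2*tau = {slot*1e6:.1f} us, "
          f"min frame = {bits:.0f} bits")
    # About 25 us and 250 bits here; the real 802.3 figure (51.2 us / 512 bits)
    # also budgets for repeaters and other delays.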
2.2.3. Collision -Free Protocols
In the protocols to be described, we assume that there are exactly N stations, each
programmed with a unique address from 0 to N − 1. It does not matter that some stations may
be inactive part of the time. We also assume that propagation delay is negligible.
• A Bit-Map Protocol
The simplest collision-free protocol is the so-called bit-map method. In this protocol, each
contention period consists of exactly N slots: a reservation round of N intervals is established in
which each host, starting from 0, has the possibility of sending a bit with the value 0 or 1 to
indicate whether it has a frame to transmit. After the N intervals, all hosts have stated their
status, and all know who has data to transmit. Then, in an orderly fashion, each host that had
data to transmit does so. Once this is done, a new reservation round begins. If a host needs to
transmit right after having let its turn pass, it has to wait for the next round.
In terms of overhead, this protocol adds N reservation bits per contention period. If the
network has no traffic, the N-bit reservation frame is simply repeated over and over on the
network.
The efficiency of this protocol increases as network traffic increases. One problem
with this protocol is that the quality of service it offers each node is not the same: at low load
it depends on a station's position in the bit map. In saturation this effect disappears,
since if all the hosts have frames to send, each one will be able to transmit one frame per round.
In short, the bitmap protocol is more efficient and more homogeneous in its behaviour as
the network load increases.
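A toy model of one bit-map cycle, as described above, can make the two phases explicit; the function below is an illustration only, not a real implementation.

def bitmap_round(wants_to_send):
    """One cycle of the bit-map protocol described above.

    wants_to_send[i] is True if station i has a frame queued when its reservation
    slot comes around. The N reservation bits are broadcast in order 0..N-1, then
    the stations that announced a frame transmit in the same order.
    """
    reservation_bits = [1 if w else 0 for w in wants_to_send]   # contention period
    transmission_order = [i for i, bit in enumerate(reservation_bits) if bit == 1]
    return reservation_bits, transmission_order

if __name__ == "__main__":
    bits, order = bitmap_round([True, False, False, True, True])
    print("reservation slots:", bits)            # [1, 0, 0, 1, 1]
    print("stations transmit in order:", order)  # [0, 3, 4]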
• Token Passing
In token passing, only one station can transmit at a given time: the one that holds the token,
which grants the permission to transmit data.
The information travels in one direction only along the network. It does not require routing,
since each packet is passed to the next neighbour and so on consecutively. For example, with
workstations A, B, C, and so on, if station A transmits a message it is passed to B, regardless of
whether it is addressed to B or to another station, then to C, etc.
The token keeps circulating constantly around the ring while no station needs to transmit.
When a machine wants to send data or request data from the network, it must wait for the
empty token to arrive. When it arrives, the machine attaches its message to the token and
activates a signal indicating that the medium is busy. The message continues its journey, in
order, until it reaches the destination station. The sending station can check whether the token
found the destination station and whether it delivered the corresponding information
(acknowledgement): when the destination computer receives the information, the token returns
to the source station with an indication that the information was received. The token is then
released for use by any other computer. A device must wait until the token arrives at its location
before it can attach the message it wants to send to another workstation.
• Binary Countdown
In binary countdown, a station that wants to use the channel broadcasts its address as a
binary bit string, starting with the high-order bit, and the bits transmitted in each position by
the different stations are ORed together on the channel. The protocol implicitly assumes that
the transmission delays are negligible, so that all stations see asserted bits essentially
instantaneously.
To avoid conflicts, an arbitration rule is applied: as soon as a station sees that a high-order
bit position that is 0 in its address has been overwritten with a 1, it gives up.
Binary countdown has the property that higher-numbered stations have a higher priority than
lower-numbered stations, which may be either good or bad, depending on the context.
The channel efficiency of this method is d/(d + log2 N), where d is the number of data bits
per frame. If, however, the frame format has been cleverly chosen so that the sender's address
is the first field in the frame, even these log2 N bits are not wasted, and the efficiency is 100%.
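The arbitration rule can be captured in a few lines. The sketch below simulates the bitwise OR seen on the channel and the stations dropping out; the example addresses are used only for illustration.

def binary_countdown(contenders, addr_bits):
    """Binary countdown arbitration as described above (toy model only).

    All contending stations broadcast their addresses bit by bit, high-order bit
    first; the channel ORs the bits together. A station drops out as soon as it
    sent a 0 in a position where the OR came back 1, so the highest-numbered
    contender survives.
    """
    alive = set(contenders)
    for pos in range(addr_bits - 1, -1, -1):          # high-order bit first
        channel_or = max((s >> pos) & 1 for s in alive)
        if channel_or == 1:
            alive = {s for s in alive if (s >> pos) & 1 == 1}   # 0-senders give up
    assert len(alive) == 1
    return alive.pop()

if __name__ == "__main__":
    # Stations 0010, 0100, 1001 and 1010 contend; 1010 wins the channel.
    print(bin(binary_countdown([0b0010, 0b0100, 0b1001, 0b1010], addr_bits=4)))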
2.2.4. Limited -Contention Protocols
We have seen that contention protocols (i.e., those with collisions) are ideal when traffic
levels are low, since they have small delays and introduce no overhead: all data transmitted
are frames of useful information. On the other hand, when traffic increases, it is preferable to
give up part of the channel's capacity to mechanisms that enforce turn-taking; otherwise
it is not possible to use the channel to its full potential.
One could imagine an ideal protocol combining the best of both worlds. It should be astute
enough to operate 'chaotically' (i.e., with collisions) at low levels of traffic and put in place
rigorous arbitration mechanisms when traffic rises above levels considered dangerous; in other
words, it should be self-adaptive. These protocols are called limited-contention protocols.
When the network has little traffic, these protocols behave like one of the
contention protocols we have seen. But when certain occupancy thresholds are exceeded, the
protocol divides the channel into intervals, assigning one to each computer in strict rotation.
This behaviour is equivalent to performing time division multiplexing on the channel. In practice,
it is usually a few computers that generate most of the traffic (remember that traffic is
self-similar), so the ideal is to identify the 'culprits' and isolate them in their own intervals,
independent of the rest of the computers; in this way, the computers with high traffic achieve
good performance without harming the 'silent' majority. It is precisely the early identification of
these 'culprits' that is the key to the functioning of these protocols. Computers do not
necessarily have to be identified individually; it is sufficient to detect a group with high traffic
(which will presumably contain some 'suspect') and isolate it from the rest. One of the protocols
that works on this principle is the so-called adaptive tree walk protocol.
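The adaptive tree walk protocol itself is not described further here, but its core idea can be sketched: stations are the leaves of a binary tree, and after a collision the search descends the tree, probing progressively smaller groups until each probe contains at most one ready station. The following Python sketch is an illustration of that idea, not an implementation taken from this text.

def tree_walk(ready, lo, hi, results):
    """Adaptive tree-walk probing over stations lo..hi-1 (illustrative sketch).

    ready is the set of stations with a frame. A probe of a group succeeds if 0
    or 1 members are ready; with 2 or more ready members it "collides" and the
    group is split in half and probed recursively.
    """
    members = [s for s in range(lo, hi) if s in ready]
    if len(members) <= 1:
        results.append((range(lo, hi), members))       # idle slot or one success
        return
    mid = (lo + hi) // 2                               # collision: split the group
    tree_walk(ready, lo, mid, results)
    tree_walk(ready, mid, hi, results)

if __name__ == "__main__":
    probes = []
    tree_walk(ready={2, 3, 6}, lo=0, hi=8, results=probes)
    for group, senders in probes:
        print(list(group), "->", senders)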
2.2.5. Wireless LAN Protocols
Unguided electromagnetic waves are an ideal medium for building broadcast
networks; we have already seen how some of the first experiments (ALOHA) were made with
this type of transmission medium. Nowadays, with the rise of mobile systems, local networks
based on radio waves and infrared have appeared. Infrared systems, because of their
characteristics, have a reduced range and require strict line of sight between transmitter and
receiver. Radio can only be transmitted at very low power (0.1 W) due to legal restrictions, so its
range is also limited, although not as much as infrared. Normally the band known as
Industrial/Scientific/Medical (2.4–2.484 GHz) is used. Typically, a wireless LAN consists of a set of
base stations, linked together by some sort of cable, and a series of mobile stations that
communicate with the nearest base station. The set of base stations forms a miniature cellular
system.
Given that the transmission is carried out by means of electromagnetic waves, we could
think that we are faced with a case like that of the Aloha networks. However, there are more
efficient alternatives than the Aloha for this type of environment, such as the one described
below.
Let's assume four computers A, B, C and D placed in line and separated 10 meters each from
the next:
Let us also assume that the maximum range of each of them is 12 meters.
Now imagine that we implement a CSMA protocol for their communication; the sequence
of events to transmit a frame could be as follows:
– A wishes to transmit data to B; it senses the medium, finds it free, and starts
transmitting.
– While A is transmitting, C wants to transmit data to B; it senses the medium and finds it
free (C cannot hear A because it is 20 m away), so C starts transmitting.
The result is a collision at the receiver (B) that is not detected by either A or C. This is known
as the hidden station problem.
Now let us imagine the same distribution of stations and another sequence of events:
– B wants to transmit data to A; it detects the medium free and starts transmitting.
– C then wishes to transmit data to D; since it detects that B is transmitting, it waits until
B has finished, to avoid a collision.
The result is that a transmission that in principle could have been made without interference
(since A cannot hear C and D cannot hear B) is not carried out, thus reducing the efficiency of
the system. This is known as the exposed station problem.
An early and influential protocol that tackles these problems for wireless LANs is MACA
(Multiple Access with Collision Avoidance) (Karn, 1990). The basic idea behind it is for the
sender to stimulate the receiver into outputting a short frame, so stations nearby can detect this
transmission and avoid transmitting for the duration of the upcoming (large) data frame. This
technique is used instead of carrier sense.
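A minimal sketch of the MACA exchange described above is given below. The frame layout, the function names and the silence durations are simplified illustrations of the RTS/CTS idea, not the protocol's actual encoding.

from dataclasses import dataclass

@dataclass
class ControlFrame:
    kind: str          # "RTS" or "CTS"
    data_length: int   # length of the upcoming data frame, in bytes

def on_overheard(frame: ControlFrame, byte_time: float) -> float:
    """Return how long (in seconds) an overhearing station should stay silent."""
    if frame.kind == "CTS":
        # Station is near the receiver: keep quiet for the whole data frame.
        return frame.data_length * byte_time
    if frame.kind == "RTS":
        # Station is near the sender: stay quiet long enough for the CTS to come
        # back (roughly one short control frame; 30 bytes is an arbitrary size).
        return 30 * byte_time
    return 0.0

if __name__ == "__main__":
    cts = ControlFrame("CTS", data_length=1500)
    print(f"silence after CTS: {on_overheard(cts, byte_time=8/1e6)*1e3:.2f} ms")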
2.3. Ethernet
We will begin our study of real systems with Ethernet, probably the most ubiquitous kind
of computer network in the world.
2.3.1. Classic Ethernet Physical Layer
They called the system Ethernet after the luminiferous ether, through which
electromagnetic radiation was once thought to propagate.
The Xerox Ethernet was so successful that DEC, Intel, and Xerox drew up a standard in 1978
for a 10 -Mbps Ethernet, called the DIX standard . With a minor change, the DIX standard became
the IEEE 802.3 standard in 1983.
Classic Ethernet snaked around the building as a single long cable to which all the computers
were attached.
Over each of these cables, information was sent using the Manchester encoding. An
Ethernet could contain multiple cable segments and multiple repeaters, but no two transceivers
could be more than 2.5 km apart and no path between any two transceivers could traverse more
than four repeaters.
2.3.2. Classic Ethernet MAC Sublayer Protocol
The format used to send frames is shown in Fig. 4 -14. First comes a Preamble of 8 bytes,
each containing the bit pattern 10101010 (except for the last byte, in which the last 2 bits are
set to 11). This last byte is called the Start of Frame delimiter for 802.3. The Manchester
encoding of this pattern produces a 10-MHz square wave for 6.4 μsec to allow the receiver’s
clock to synchronize with the sender’s. The last two 1 bits tell the receiver that the rest of the
frame is about to start.
Next come two addresses, one for the destination and one for the source. They are each
6 bytes long. The first transmitted bit of the destination address is a 0 for ordinary addresses
and a 1 for group addresses. Group addresses allow multiple stations to listen to a single address.
When a frame is sent to a group address, all the stations in the group receive it. Sending to a
group of stations is called multicasting. The special address consisting of all 1 bits is reserved for
broadcasting.
Next comes the Type or Length field, depending on whether the frame is Ethernet or
IEEE 802.3. Ethernet uses a Type field to tell the receiver what to do with the frame.
Next come the data, up to 1500 bytes. This limit was chosen somewhat arbitrarily at the time
the Ethernet standard was cast in stone, mostly based on the fact that a transceiver needs
enough RAM to hold an entire frame, and RAM was expensive in 1978.
If the data portion of a frame is less than 46 bytes, the Pad field is used to fill out the frame to
the minimum size.
The final field is the Checksum. It is a 32-bit CRC defined by the same generator polynomial
used for PPP, ADSL, and other links. This CRC is an error-detecting code used to determine
whether the bits of the frame have been received correctly. It does error detection only, with
the frame dropped if an error is detected.
• CSMA/CD with Binary Exponential Backoff
In a variety of computer networks, binary exponential backoff, or truncated binary
exponential backoff, refers to an algorithm used to space out repeated retransmissions of the
same block of data, often to avoid network congestion.
Examples include the retransmission of frames in carrier sense multiple access with collision
avoidance (CSMA/CA) and carrier sense multiple access with collision detection (CSMA/CD)
networks, where this algorithm is part of the channel access method used to send data over
these networks. In Ethernet networks, the algorithm is commonly used to schedule
retransmissions after collisions. The retransmission is delayed by an amount of time derived
from the slot time and the number of retransmission attempts.
After c collisions, a random number of slot times between 0 and 2^c − 1 is chosen. After the
first collision, each sender will wait 0 or 1 slot times. After the second collision, senders will wait
between 0 and 3 slot times (inclusive). After the third collision, senders will wait anywhere from
0 to 7 slot times (inclusive), and so on. As the number of retransmission attempts increases, the
number of possible delays increases exponentially.
The 'truncated' simply means that after a certain number of increases the exponentiation
stops; that is, the backoff range reaches a ceiling and after that it does not increase any further.
For example, if the ceiling is set at i = 10 (as it is in the IEEE 802.3 CSMA/CD standard), then the
maximum delay is 1023 slot times. This ceiling is useful because it bounds the delay that stations
can experience; in a busy network, hundreds of stations may be caught up in a single collision
set.
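The backoff rule can be written down directly. In the sketch below, the ceiling of 10 and the abort limit of 16 attempts are the 802.3 figures mentioned above; everything else is an illustrative choice.

import random

def backoff_slots(collisions: int, ceiling: int = 10, max_attempts: int = 16) -> int:
    """Truncated binary exponential backoff as described above (sketch).

    After the c-th collision, wait a random number of slot times drawn from
    0 .. 2^min(c, ceiling) - 1. In 802.3 the exponent is frozen at 10 and the
    frame is abandoned after 16 attempts.
    """
    if collisions >= max_attempts:
        raise RuntimeError("too many collisions: give up and report failure")
    k = min(collisions, ceiling)
    return random.randint(0, 2 ** k - 1)

if __name__ == "__main__":
    random.seed(42)
    for c in (1, 2, 3, 10, 15):
        print(f"after collision {c:2d}: wait {backoff_slots(c)} slot times")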
2.3.3. Ethernet Performance
Ethernet is a set of technologies and protocols that are used primarily in LANs. The
performance of Ethernet is analysed by computing the efficiency of the channel under different
load conditions.
Let us assume an Ethernet network has k stations and each station transmits with a
probability p during a contention slot. Let A be the probability that some station acquires the
channel. A is calculated as:
A = kp(1 − p)^(k−1)
The value of A is maximized at p = 1/k. If there can be innumerable stations connected
to the Ethernet network, i.e. k → ∞, the maximum value of A will be 1/e.
Let Q be the probability that the contention period has exactly j slots. Q is calculated as:
Q = A(1 − A)^(j−1)
Let M be the mean number of slots per contention. The value of M will be:
M = Σ (from j = 1 to ∞) j·A(1 − A)^(j−1) = 1/A
Given that τ is the propagation time, each slot has a duration of 2τ. Hence the mean
contention interval, w, will be 2τ/A.
Let P be the time in seconds needed to transmit a frame. The channel efficiency, when a
number of stations want to send frames, can be calculated as:
Channel Efficiency = P / (P + 2τ/A)
Let F be the frame length in bits, B the network bandwidth, L the cable length, c the speed
of signal propagation, and e the number of contention slots per frame. In terms of these
parameters, the channel efficiency is:
Channel Efficiency = 1 / (1 + 2BLe/cF)
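Plugging illustrative numbers into the last formula shows how strongly the frame length matters. The values below (10 Mbps, 2500 m, c ≈ 2x10^8 m/s) are typical classic-Ethernet figures assumed for the example, and e is taken as its many-station limit of about 2.718 from the analysis above.

import math

def ethernet_efficiency(frame_bits, bandwidth_bps, cable_m, prop_speed_m_s, e=math.e):
    """Channel efficiency 1 / (1 + 2*B*L*e / (c*F)) from the formula above.

    e is the mean number of contention slots per frame; the analysis above gives
    at most e ~ 2.718 slots per contention with many stations, used as default.
    """
    return 1.0 / (1.0 + 2 * bandwidth_bps * cable_m * e / (prop_speed_m_s * frame_bits))

if __name__ == "__main__":
    # Illustrative classic-Ethernet-like numbers: 10 Mbps, 2500 m, c ~ 2e8 m/s.
    for frame_bits in (512, 12144):   # 64-byte and 1518-byte frames
        eff = ethernet_efficiency(frame_bits, 10e6, 2500, 2e8)
        print(f"{frame_bits:5d}-bit frame: efficiency = {eff:.2f}")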
2.3.4. Switched Ethernet
Ethernet's CSMA/CD medium access control scheme works well on most networks: it does
not pose a problem when network use consists of opening and saving files or searching
databases. But newer applications, such as video conferencing or transferring large media files,
generate a constant flow of data, and if there is more than one of them on a segment, the result
is a very high traffic load and many CSMA collisions. The consequence of the multiple collisions
is the degradation of the overall performance of the network.
A technique called switched Ethernet improves the performance of the network at the
individual nodes without requiring an upgrade of the LAN cards installed in the PCs or of the
cabling system. Switching hubs do not increase the speed of transmissions, but they do
eliminate cable contention at each node, which translates into higher performance and results
similar to those that would be obtained if faster transmissions were available.
Installing a switch does not require changing LAN adapters or Ethernet hubs, but the cards
will behave as if they were the only ones on the network. Unlike other network alternatives,
switching hubs do not split the bandwidth between active nodes; instead, a high-speed
processor located within the hub transfers the packets across a backplane operating at
hundreds of megabits per second. This is called a collapsed backbone architecture because it
acts as a series of individual cable segments connected via a high-speed backbone link.
Older computers cannot take full advantage of a dedicated 10-megabit Ethernet channel, so
most companies offer Ethernet switching products in versions that allow the 10-megabit
bandwidth to be shared among one to eight nodes.
2.3.5. Fast Ethernet
Fast Ethernet or High Speed Ethernet is the name of a series of IEEE standards for 100 Mbps
(megabits per second) Ethernet networks. The name Ethernet comes from the physical concept
of ether. The fast prefix was added at the time to differentiate it from the original 10 Mbps
Ethernet version.
Due to increased storage capacity and processing power, today's PCs have the ability to
handle high quality graphics and complex multimedia applications. When these files are stored
and shared on a network, transfers from one client to another result in a high use of network
resources.
Traditional networks operated between 4 and 16 Mbps. More than 40% of all PCs are
connected to Ethernet. Traditionally, Ethernet worked at 10 Mbps. At these speeds, because
companies produce large files, they can have long delays when sending files over the network.
These delays result in the need for greater speed in the networks.
Fast Ethernet is not the fastest version of Ethernet today, with Gigabit Ethernet and 10
Gigabit Ethernet being the fastest.
2.3.6. Gigabit Ethernet
Gigabit Ethernet, also known as GigaE, is an extension of the Ethernet standard (specifically
IEEE 802.3ab and 802.3z) that achieves a transmission capacity of 1 gigabit per second,
corresponding to about 1000 megabits per second of throughput against about 100 for Fast
Ethernet (also called 100BASE-TX).
2.3.7. 10 Gigabit Ethernet
10 Gigabit Ethernet (XGbE or 10GbE) is the latest (year 2003) and fastest Ethernet standard.
IEEE 802.3ae defines a version of Ethernet with a nominal speed of 10 Gbit/s, ten times faster
than gigabit Ethernet.
The 10 Gigabit Ethernet standard contains seven media types for LAN, MAN and WAN. It has
been specified in the IEEE 802.3ae supplementary standard and will be included in a future
revision of the IEEE 802.3 standard.
There are different standards for the physical layer (PHY). The letter X stands for 8B/10B
coding and is used for copper interfaces. The most common optical variety is called LAN PHY,
used to connect routers and switches to each other. Even though it is called a LAN PHY, it can be
used with 10GBase-LR and 10GBase-ER up to 80 km. LAN PHY uses a line rate of 10.3 Gbit/s and
64B/66B encoding (at least one transition every 66 bits). WAN PHY (marked with a W)
encapsulates the Ethernet frames for transmission on an SDH/SONET STS-192c channel.
2.4. Wireless LANs
2.4.1. The 802.11 Architecture and Protocol Stack
An 802.11 LAN is based on a cellular architecture, i.e., the system is divided into cells, where
each cell (called a Basic Service Set, BSS) is controlled by a base station called an Access Point
(AP), although it can also work without one when the machines communicate directly with each
other. The Access Points of the different cells are connected through some kind of backbone
(called a Distribution System).
The fully interconnected wireless LAN, including the various cells, their respective Access
Points and the Distribution System, is referred to in the standard as an Extended Service Set
(ESS).
2.4.2. The 802.11 Physical Layer
The init ial 802.11 standard defines two forms of spread spectrum modulation for the
physical layer: frequency hopping (802.11 FHSS) and direct sequence (802.11 DSSS). These two
standards specify a 2.4GHz operating frequency with data rates of 1 and 2Mbps. Another initial
physical layer utilizes infrared passive reflection techniques for transmission of data at 1 and
2Mbps; however, this standard has not been implemented in products.
In late 1999, the IEEE published two supplements to this 802.11 standard: 802.11a and
802.11b. The 802.11a standard defines operation at up to 54Mbps using orthogonal frequency
division multiplexing (OFDM) modulation in the 5 GHz frequency band. The IEEE 802.11b
version of the standard is a data rate extension of the initial 802.11 DSSS, providing operation in
the 2.4GHz band with additional data rates of 5.5 and 11Mbps.
Most companies implementing wireless LANs today are installing 802.11b-based systems.
The 802.11 DSSS radios interoperate with 802.11b access points; however, the 802.11 FHSS
radios do not.
2.4.3. The 802.11 MAC Sublayer Protocol
The multiple access method in IEEE 802.11 is the so-called Distributed Coordination
Function (DCF), which uses the well-known Carrier Sense Multiple Access with Collision
Avoidance (CSMA/CA) method. This method requires each wireless node to listen to the shared
medium to find out whether other nodes are transmitting. If the channel is unoccupied, the
node can transmit; otherwise the node listens until the transmission ends and then enters a
random waiting period before running the procedure. This prevents some stations from
monopolizing the channel by starting to transmit immediately after another station has finished.
Reception of frames under DCF requires an acknowledgement from the destination. There is
a short period of time before the recipient sends the ACK, called the Short Inter Frame Space
(SIFS). In 802.11, the confirmation ACK has priority over any other traffic, which is why
acknowledgements are returned so quickly. Any transmission other than an ACK must wait at
least a DIFS (DCF Inter Frame Space) before transmitting any data. If the transmitter detects a
busy medium again, it backs off again, but with a reduced remaining waiting time. This is
repeated until the waiting time reaches zero, at which point the node is allowed to transmit.
This method is similar to that used in the 802.3 Ethernet protocol and assumes that all nodes
can hear the channel simultaneously. This is not always true on a wireless channel, where the
hidden node problem appears. Consider the following case: nodes A and B are within range of
the Access Point, but node B does not know that node A exists because A is outside B's range,
and therefore B cannot know whether A is transmitting or not.
This is solved using a second method of carrier sensing, called virtual carrier sensing, which
enables a node to reserve the channel for a certain period of time using RTS/CTS frames.
In the example above, node A sends an RTS (Request To Send) to the Access Point. This RTS has
a field that specifies how long the reservation is requested for, and it is not heard by node B
because B is out of range. The reservation information is stored by the remaining nodes within
range of A in a table called the Network Allocation Vector (NAV). The AP responds with a CTS
that contains the time allocated for the reservation. Node B, which receives the CTS from the AP,
updates its NAV according to the information provided, thus resolving the hidden node problem.
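A toy model of this virtual carrier sensing is sketched below; the Station class, method names and timing values are illustrative and do not reproduce the actual 802.11 state machine.

class Station:
    def __init__(self, name):
        self.name = name
        self.nav_until = 0.0          # medium considered reserved up to this time

    def overhear(self, now, frame_type, duration):
        """Update the NAV from the Duration field of an overheard RTS or CTS."""
        if frame_type in ("RTS", "CTS"):
            self.nav_until = max(self.nav_until, now + duration)

    def medium_free(self, now):
        """Virtual carrier sense: the medium is free only once the NAV expires."""
        return now >= self.nav_until

if __name__ == "__main__":
    b = Station("B")                      # hidden from A, but hears the AP's CTS
    b.overhear(now=0.0, frame_type="CTS", duration=0.003)
    print(b.medium_free(0.001))           # False: B defers during A's transmission
    print(b.medium_free(0.004))           # True: the reservation has expired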
2.4.4. The 802.11 Frame Structure
All Layer 2 frames consist of a header, a content, and an FCS section. The 802.11 frame
format is similar to the Ethernet frame format, except that it contains more fields. As shown in
Figure 4-29, all 802.11 wireless frames contain the following fields:
– Frame Control: identifies the type of wireless frame and contains subfields for protocol
version, frame type, address type, power management and security settings.
– Duration: in general, it is used to indicate the remaining duration needed to receive the
next frame transmission.
– Address 1: usually contains the MAC address of the receiving wireless device or AP.
– Address 2: usually contains the MAC address of the transmitting wireless device or AP.
– Address 3: sometimes contains the MAC address of the destination, such as the router
interface (default gateway) to which the AP is connected.
– Sequence Control: contains the Sequence Number and Fragment Number subfields. The
Sequence Number indicates the sequence number of each frame. The Fragment
Number indicates the number of each fragment of a fragmented frame.
– Address 4: usually missing, as it is used only in ad hoc mode.
– Content: contains the data for transmission.
– FCS: is the Frame Check Sequence, used for layer 2 error control.
2.5. Broadband Wireless
To stimulate the market, IEEE formed a group to standardize a broadband wireless
metropolitan area network. The next number available in the 802 numbering space was 802.16,
so the standard got this number. Informally the technology is called WiMAX (Worldwide
Interoperability for Microwave Access). We will use the terms 802.16 and WiMAX
interchangeably.
Like the other 802 standards, 802.16 was heavily influenced by the OSI model, including the
(sub)layers, terminology, service primitives, and more. Unfortunately, also like OSI, it is fairly
complicated. In fact, the WiMAX Forum was created to define interoperable subsets of the
standard for commercial offerings.
2.5.1. Comparison of 802.16 with 802.11 and 3G
Like 802.11, WiMAX is all about wirelessly connecting devices to the Internet at megabit/sec
speeds, instead of using cable or DSL. The devices may be mobile, or at least portable. WiMAX
did not start by adding low -rate data on the side of voice -like cellular networks; 802.16 was
designed to carry IP packets over the air and to connect to an IP -based wired network with a
minimum of fus s. The packets may carry peer -to-peer traffic, VoIP calls, or streaming media to
support a range of applications. Also, like 802.11, it is based on OFDM technology to ensure
good performance in spite of wireless signal degradations such as multipath fading , and on
MIMO technology to achieve high levels of throughput.
With all of these features, 802.16 most closely resembles the 4G cellular networks that are
now being standardized under the name LTE (Long Term Evolution ). While 3G cellular networks
are based on CDMA and support voice and data, 4G cellular networks will be based on OFDM
with MIMO, and they will target data, with voice as just one application. It looks like WiMAX and
4G are on a collision course in terms of technology and applications. Per haps this convergence
is unsurprising, given that the Internet is the killer application and OFDM and MIMO are the
best -known technologies for efficiently using the spectrum.
2.5.2. The 802.16 Architecture and Protocol Stack
The general structure of the IEEE 802.16 protocol stack is shown below.
As shown in the diagram, IEEE 802.16 lays down the standards for the physical layer and the
data link layer.
• Physical Layer − The two popular services of the physical layer are fixed WiMAX and
mobile WiMAX. They operate in the licensed spectrum below 11 GHz. Fixed WiMAX was
released in 2003 and uses OFDM, while mobile WiMAX was released in 2005 and uses
scalable OFDM.
• Data Link Layer − The data link layer is subdivided into three sublayers −
o Security sublayer − This is the bottommost layer and is concerned with security
and privacy of the wireless network. It deals with encryption, decryption, and
key management.
o MAC common sublayer − The MAC sublayer is concerned with channel
management. The channel management is connection oriented, a feature
thanks to which quality of service (QoS) guarantees can be given to the
subscriber. The base station controls the system. It schedules the channels from
the base station to the subscriber (downlink channels) and also manages the
channels from the subscriber to the base station (uplink channels).
o Service specific convergence sublayer − This is equivalent to the logical link
control layer of other systems. It provides the required services and interface to
the network layer.
2.5.3. The 802.16 Physical Layer
The physical layer of WiMAX is based on OFDM. This is a transmission scheme that allows
the transfer of data, video and multimedia at high speed and is used in various broadband
systems such as DSL, Wi -Fi, etc. It is an efficient scheme for the transmission of very high data
rates in a non-line-of-sight (NLOS) or multipath radio environment.
2.5.4. The 802.16 MAC Sublayer Protocol
The IEEE 802.16 MAC has been designed for connection-oriented channel management for
point-to-multipoint (PMP) broadband services. This implies that one base station is connected
to multiple subscriber stations. The base station controls the system. It schedules the channels
from the base station to the subscriber (downlink channels) and also manages the channels
from the subscriber to the base station (uplink channels).
The MAC layer accepts data packets called MAC service data units (MSDUs) from the upper
layer. It then organizes them into MAC protocol data units (MPDUs) for transmission over the
air interface. The reverse procedure is followed in case of receiving the transmissions.
A convergence sublayer is included in the IEEE 802.16-2004 and IEEE 802.16e-2005 versions
of the MAC sublayer, providing interfaces with a number of higher-layer protocols, like ATM,
TDM voice, Ethernet, IP, etc.
2.5.5. The 802.16 Frame Structure
The IEEE 802.16 set of standards lays down the specifications for wireless broadband
technology. It has been commercialized as Worldwide Interoperability for Microwave Access
(WiMAX), which is responsible for the delivery of last-mile wireless broadband access.
The IEEE 802.16 MAC sublayer is the most important sublayer and is concerned with channel
management. It has been designed for connection-oriented channel management for
point-to-multipoint (PMP) broadband services.
2.6. Bluetooth
Bluetooth 1.0 was released in July 1999, and since then the SIG has never looked back. All
manner of consumer electronic devices now use Bluetooth, from mobile phones and laptops
to headsets, printers, keyboards, mice, gameboxes, watches, music players, navigation units,
and more. The Bluetooth protocols let these devices find and connect to each other, an act
called pairing, and securely transfer data.
2.6.1. Bluetooth Architecture
2.6.2. Bluetooth Applications
– Wireless connection via OBEX.
– Transfer of contact sheets, appointments, and reminders between devices via OBEX
– Replacement of the traditional cable communication between GPS equipment and
medical equipment .
– Remote controls (traditionally dominated by infrared) .
– Sending small advertisements from advertisers to Bluetooth devices. A business could
send advertising to nearby mobile phones that had Bluetooth activated as they passed
by.
– The Sony PlayStation 3, PlayStation 4, Microsoft Xbox 360, Xbox One, Wii U and Nintendo
Switch consoles all feature Bluetooth, allowing them to use wireless controllers,
although the original Wii U GamePad connects to the console via Wi-Fi and the Wii
controllers use infrared technology for the pointer function.
2.6.3. The Bluetooth Protocol Stack
The Bluetooth standard has many protocols grouped loosely into the layers shown in Fig. 4-35.
The first observation to make is that the structure does not follow the OSI model, the TCP/IP
model, the 802 model, or any other model.
2.6.4. The Bluetooth Radio Layer
Three forms of modulation are used to send bits on a channel. The basic scheme is to use
frequency shift keying to send a 1-bit symbol every microsecond, giving a gross data rate of 1
Mbps. Enhanced rates were introduced with the 2.0 version of Bluetooth. These rates use phase
shift keying to send either 2 or 3 bits per symbol, for gross data rates of 2 or 3 Mbps. The
enhanced rates are only used in the data portion of frames.
2.6.5. The Bluetooth Link Layers
The link control (or baseband) layer is the closest thing Bluetooth has to a MAC sublayer. It
turns the raw bit stream into frames and defines some key formats. In the simplest form, the
master in each piconet defines a series of 625 μsec time slots, with the master's transmissions
starting in the even slots and the slaves' transmissions starting in the odd ones. This scheme is
traditional time division multiplexing, with the master getting half the slots and the slaves
sharing the other half. Frames can be 1, 3, or 5 slots long. Each frame has an overhead of 126
bits for an access code and header, plus a settling time of 250–260 μsec per hop to allow the
inexpensive radio circuits to become stable. The payload of the frame can be encrypted for
confidentiality with a key that is chosen when the master and slave connect. Hops only happen
between frames, not during a frame. The result is that a 5-slot frame is much more efficient than
a 1-slot frame because the overhead is constant, but more data is sent.
2.6.6. The Bluetooth Frame Structure
Bluetooth defines several frame formats, the most important of which is shown in two forms
in Fig. 4-36. It begins with an access code that usually identifies the master so that slaves within
radio range of two masters can tell which traffic is for them. Next comes a 54-bit header
containing typical MAC sublayer fields. If the frame is sent at the basic rate, the data field comes
next. It has up to 2744 bits for a five-slot transmission. For a single time slot, the format is the
same except that the data field is 240 bits.
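Using the figures quoted in the last two subsections (625 μsec slots, a 240-bit data field for a 1-slot frame, up to 2744 bits for a 5-slot frame, and the 1 Mbps basic rate), a short calculation confirms why multi-slot frames are more efficient; the helper below is only an illustration of that arithmetic.

def bluetooth_slot_efficiency(payload_bits: int, slots: int,
                              slot_us: float = 625.0, rate_mbps: float = 1.0) -> float:
    """Fraction of the slot time actually carrying payload at the basic rate.

    The rest of the allotted time goes to the access code, the header and the
    per-hop settling time mentioned above.
    """
    airtime_us = payload_bits / rate_mbps        # 1 Mbps => 1 bit per microsecond
    return airtime_us / (slots * slot_us)

if __name__ == "__main__":
    print(f"1-slot frame: {bluetooth_slot_efficiency(240, 1):.1%}")    # ~38%
    print(f"5-slot frame: {bluetooth_slot_efficiency(2744, 5):.1%}")   # ~88%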
2.7. RFID
We have looked at MAC designs from LANs up to MANs and down to PANs. As a last
example, we will study a category of low-end wireless devices that people may not recognize as
forming a computer network: the RFID (Radio Frequency IDentification) tags and readers that
we described in Sec. 1.5.4.
2.7.1. EPC Gen 2 Architecture
The architecture of an EPC Gen 2 RFID network is shown in Fig. 4-37. It has two key
components: tags and readers. RFID tags are small, inexpensive devices that have a unique
96-bit EPC identifier and a small amount of memory that can be read and written by the RFID reader.
The memory might be used to record the location history of an item, for example, as it moves
through the supply chain.
2.7.2. EPC Gen 2 Physical Layer
The physical layer defines how bits are sent between the RFID reader and tags. Much of it
uses methods for sending wireless signals that we have seen previously. In the U.S.,
transmissions are sent in the unlicensed 902 –928 MHz ISM band. This band falls in the UHF (Ultra
High Frequency) range, so the tags are referred to as UHF RFID tags. The reader performs
frequency hopping at least every 400 msec to spread its signal across the channel, to limit
interference and satisfy regulatory requirements.
2.7.3. EPC Gen 2 Tag Identification Layer
We have seen many ways of tackling the multiple access problem in this chapter. The closest
protocol for the current situation, in which the tags cannot hear each other’s transmissions, is
slotted ALOHA, one of the earliest protocols we studied. This protocol is adapted for use in Gen
2 RFID.
The sequence of messages used to identify a tag is shown in Fig. 4-39. In the first slot (slot
0), the reader sends a Query message to start the process. Each QRepeat message advances to
the next slot. The reader also tells the tags the range of slots over which to randomize their
transmissions. Using a range is necessary because the reader synchronizes the tags when it
starts the process; unlike stations on an Ethernet, tags do not wake up with a message to send
at a time of their choosing.
2.8. Data Link Layer Switching
Data Link Switching (DLSw) is a tunnelling protocol designed to tunnel unrouteable, non-IP
based protocols such as IBM Systems Network Architecture (SNA) and NBF over an IP network.
DLSw was initially documented in IETF RFC 1434 in 1993. In 1995 it was further documented in
the IETF RFC 1795. DLSw version 2 was presented in 1997 in IETF RFC 2166 as an improvement
to RFC 1795. Cisco Systems has its own proprietary extensions to DLSw in DLSw+. According to
Cisco, DLSw+ is 100% IETF RFC 1795 compliant but includes some proprietary extensions that
can be used when both devices are Cisco.
Some organisations are starting to replace DLSw tunnelling with the more modern
Enterprise Extender (EE) protocol, which is a feature of IBM APPN on z/OS systems. Microsoft
refers to EE as IPDLC. Enterprise Extender uses UDP traffic at the transport layer rather than the
network layer. Cisco deploys Enterprise Extender on their hardware via the IOS feature known
as SNASw (SNA Switch).
2.8.1. Bridges
There are several ways of interconnecting multiple networks. When two or more networks
are interconnected at the physical layer, the device is usually called a repeater or hub. When
they are interconnected at the data link layer, it is called a bridge; at the network layer, a router;
and at higher layers, a gateway. When range or extension is the only obstacle to
interconnection, repeaters may solve the problem as long as a maximum distance between two
stations is not exceeded. The hub is the simplest repeater in Ethernet local area networks.
Because the collision domain of LANs connected by repeaters is the entire network, all traffic
appears in both networks. But if LANs are connected by a bridge with MAC address filtering,
local traffic stays in its local LAN.

In principle, a bridge operating at the data link layer can work with multiple network types
and can even connect an Ethernet on one side to a token ring on the other. Such a bridge,
however, has to deal with differences in MAC formats, in maximum frame length, in buffering,
timers and security requirements. In the common case a bridge interconnects LANs of the same
type in order to provide frame filtering, and to do so it has to monitor the MAC address of each
frame. Two types of bridges are widely used: transparent bridges and source routing bridges.
Transparent bridges are typically used in Ethernet, while source routing bridges are typically
used in token ring and FDDI networks. We focus on transparent bridges.

The basic process works as follows: the bridge builds a lookup table based on backward
learning. The table associates each station with a port number. The bridge observes the source
address of every arriving frame. It discards a frame if source and destination are in the same
LAN, forwards it if source and destination are in different LANs, and floods it if the destination is
unknown.

Let's look at an example in which three LANs are interconnected by two bridges whose
lookup tables are initially empty. Suppose station S1 sends a frame to station S5; the frame
carries the MAC address of S5 as the destination address and the MAC address of S1 as the
source address. When bridge B1 receives the frame, it finds its table empty and adds the source
address S1 together with the port number on which the frame arrived. As the destination
address is not found in the table, the frame is forwarded to port 2 and transmitted on LAN2.
Bridge B2 performs the same process, adding the source address to its lookup table and
forwarding the frame onto LAN3, where S5 eventually receives it. Both bridges have learned the
location of S1 by the backward learning process.

Next, station S3 sends a frame to station S2. Both bridges, B1 and B2, receive the frame,
since they are connected to the same LAN as S3. B1 cannot find the address S3 in its lookup
table, so it adds S3 and port 2 to its table. It then forwards the frame to port 1, where S2 finally
receives it. Bridge B2 also does not find the source address; it adds the new information to its
table and forwards the frame onto LAN3, so this forwarding is wasted.

Now assume that S4 sends a frame to S3. Bridge B2 records the address of S4 and the port
number on which the frame arrived, since no record of S4 is found in the table. B2 then checks
the destination address of the frame in the forwarding table and finds a matching entry, so the
bridge forwards the frame to the port indicated in the entry, which is port 1. When bridge B1
receives the frame, it adds the source address and the port number on which the frame arrived
to its table. The bridge, however, finds the destination address in an entry whose port number is
the same as the one on which the frame arrived, so the frame is discarded and not transmitted
to LAN1. Therefore, the traffic is confined to LAN2 and LAN3 only.

Now assume S2 sends a frame to S1. Bridge B1 first adds the address of S2 to its forwarding
table. Since it has already learned the address of S1, it discards the frame after finding out that
S1 is connected to the same port. Therefore, the traffic is completely isolated in LAN1. Note that
in this case bridge B2 cannot learn the address of S2, because the frame is never transmitted to
LAN2.

In a static network, the tables eventually store all station addresses and learning stops. In
practice, stations are added and moved all the time, so adaptive learning introduces a timer to
age each entry and force it to be relearned periodically. The learning process works fine as long
as there are no loops in the interconnected network. To remove loops in a network, the IEEE
committee specified an approach called the spanning tree algorithm.
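The backward-learning and filtering logic walked through above fits in a few lines of code. The sketch below is a simplified illustration (class and port names are invented for the example); real bridges also age their table entries and run the spanning tree algorithm.

class LearningBridge:
    def __init__(self, name, ports):
        self.name = name
        self.ports = ports            # e.g. [1, 2]
        self.table = {}               # MAC address -> port it was last seen on

    def receive(self, frame_src, frame_dst, in_port):
        """Return the list of ports on which to forward the frame."""
        self.table[frame_src] = in_port            # backward learning
        out_port = self.table.get(frame_dst)
        if out_port is None:                       # unknown destination: flood
            return [p for p in self.ports if p != in_port]
        if out_port == in_port:                    # same LAN: filter (discard)
            return []
        return [out_port]                          # known, different LAN: forward

if __name__ == "__main__":
    b1 = LearningBridge("B1", ports=[1, 2])        # port 1 = LAN1, port 2 = LAN2
    print(b1.receive("S1", "S5", in_port=1))       # [2]  flood towards LAN2
    print(b1.receive("S3", "S2", in_port=2))       # [1]  flood towards LAN1
    print(b1.receive("S2", "S1", in_port=1))       # []   S1 known on port 1: filtered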
2.8.2. Virtual LANs
Today, the cables have changed and hubs have become switches, but the wiring pattern
is still the same. This pattern makes it possible to configure LANs logically rather than physically.
For example, if a company wants k LANs, it could buy k switches. By carefully choosing which
connectors to plug into which switches, the occupants of a LAN can be chosen in a way that
makes organizational sense, without too much regard to geography.
To make the VLANs function correctly, configuration tables have to be set up in the
bridges. These tables tell which VLANs are accessible via which ports. When a frame comes in
from, say, the Gray VLAN, it must be forwarded on all the ports marked with a G. This holds for
ordinary (i.e., unicast) traffic for which the bridges have not learned the location of the
destination, as well as for multicast and broadcast traffic. Note that a port may be labelled with
multiple VLAN colours.
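A minimal sketch of such a per-port VLAN table is shown below; the table layout and the second 'White' VLAN are invented for the example, only the Gray/G labelling comes from the text.

VLAN_PORTS = {
    "G": {1, 2, 4},      # ports carrying the Gray VLAN (illustrative)
    "W": {2, 3},         # ports carrying a hypothetical White VLAN
}

def forward_ports(vlan: str, in_port: int) -> set[int]:
    """Ports on which to forward an unknown-destination, multicast or broadcast
    frame belonging to vlan, excluding the port it came from."""
    return VLAN_PORTS.get(vlan, set()) - {in_port}

if __name__ == "__main__":
    print(forward_ports("G", in_port=1))   # {2, 4}: only Gray-labelled ports
    print(forward_ports("W", in_port=2))   # {3}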
Before leaving the subject of VLAN routing, it is worth making one last observation.
Many people in the Internet and Ethernet worlds are fanatically in favour of connectionless
networking and violently opposed to anything smacking of connections in the data link or
network layers. Yet VLANs introduce something that is surprisingly similar to a connection. To
use VLANs properly, each frame carries a new special identifier that is used as an index into a
table inside the switch to look up where the frame is supposed to be sent. That is precisely what
happens in connection -oriented networks. In connectionless networks, it is the destination
address that is used for routing, not some kind of connection identifier.