having now looked at some of the principles involved with the link layer we're ready to take a look at link layer protocols which themselves are going to introduce issues of both theory and practice there's a lot to cover here at the network layer we encountered a relatively small number of protocols primarily ip bgp and ospf and at the link layer we're going to see a much larger number of protocols and generally this is in keeping with the internet hourglass architecture that we talked about earlier with ip and the network layer being the thin waist and there being more diversity of protocols at the link layer here's how we're going to structure our study of multiple access links and protocols we're going to start off by looking at the multiple access link itself the broadcast channel then we're going to take a look at three classes three families of multiple access protocols we're going to look at channel partitioning protocols random access protocols and taking-turns protocols then we're going to wrap up our study by taking a look at the cable access network where we'll find a number of these approaches in
use so there's a lot to cover so let's get started by taking a look at the broadcast multiple access link well let's begin our discussion of multiple access links and protocols by talking about links in the link layer there are basically two types of links there are point-to-point links that operate between a single sender and a single receiver and then there are broadcast links where there are multiple senders and multiple receivers attached to the broadcast link and we're going to encounter broadcast links in a lot of different scenarios a shared wire for
instance cabled ethernet the original ethernet shared radio 4g 5g lte systems that we'll take a look at shared radio like 802.11 wi-fi networks shared radio and satellite networks and in a human sense we humans talk to each other all the time over a shared medium at a cocktail party in a class wherever conversations take place in the case that we have a single shared broadcast channel we're going to need a multiple access protocol to coordinate access to this shared broadcast channel and one very important property of this broadcast channel is that when there are two
or more simultaneous transmissions by nodes they're going to interfere with each other we'll refer to that as a collision and if a node receives two or more signals at the same time that is two senders colliding with each other neither of those transmissions is going to be successful so we're going to need a multiple access protocol to coordinate the transmissions by nodes in the network a multiple access protocol is a distributed algorithm that determines how nodes are going to share the channel that's to say it determines when an individual node can transmit and when you think
about a multiple access protocol the real challenge is well i want to share a communication channel but any communication about how to actually share that channel is going to have to use the channel itself there's no sort of out of band secondary channel that can be used for coordination this is going to be at the heart of the multiple access problem before we start taking a look at multiple access protocols let's take a step back and think well ideally what would we like a multiple access protocol to do well first when a node wants to
transmit hopefully it'll be able to transmit at rate r the maximum rate of the channel and when m nodes want to transmit we want some kind of fairness we want each node to be able to send at an average rate of r over m we might like it to be fully decentralized so there's no special node no central node that's going to coordinate transmissions and maybe no synchronization of clocks or slots and we'd like to keep it simple well multiple access protocols are one of my favorite topics to teach in a computer networks course and that's
because we as humans have protocols that are going to govern how it is we actually share a communication medium like the air between us when there's a group of people who want to communicate so think about it you're in class you're in a meeting you're in a restaurant you're talking with a friend how is it that we as humans organize and determine how we're going to coordinate who's speaking at a given point in time and i can almost guarantee you that anything you can think of in terms of how humans execute protocols to determine
whose chance it is to get to talk there's going to be an analog in a multiple access protocol so think about it for a second okay enough thinking what did you come up with well when i teach in class with face-to-face students a lot of times the first thing students will say is well look in this classroom you're always saying any questions any comments we raise our hands we're recognized and then we speak so that's an example of a multiple access protocol absolutely are there any questions before we proceed are there any questions before we proceed
well that's an example of what we might call a taking-turns protocol the technique being used there is known as polling where we have a centralized device that polls client devices to see if those client devices have data that they want to send in other taking-turns protocols like token passing for example the right to transmit is explicitly passed from one node to another now it's your turn now it's your turn let's now move on to a second class of multiple access protocols but again with human analogs and think about conversation it's not explicitly coordinated
people talk when they have something to say i think that's a pretty common form of human multiple access hey hey jerry how are you doing how you doing i'm good i'm good thank you thanks for asking um yeah well this approach of speaking when you have something to say without explicit coordination sometimes works but sometimes as we just saw it really doesn't this kind of approach though is known as a random access protocol and it's actually widely used in practice as we'll see if you're polite like a human being and you listen before you speak
that's known as carrier sensing and if you're super polite if somebody else starts talking while you're talking you stop that's called collision detection you stop transmitting when someone else is speaking well the last class of protocols that we'll take a look at are what are known as channel partitioning protocols we take the channel and divide it into chunks of time or frequency and we assign those time slots or frequency bands to individual nodes at the beginning of each section each candidate will have two minutes uninterrupted to answer my first question the debate commission will then turn on their
microphone only when it is their turn to answer and the commission will turn it off exactly when the two minutes have expired after that both microphones will remain on but on behalf of the voters i'm going to ask you to please speak one at a time the goal is for you to hear each other and for the american people to hear every word of what you both have to say well that's a great human example of a centralized controller allocating tdma slots very explicitly followed by a period of random access with collisions i might add
as well so now we've seen the three broad classes of multiple access protocols that we want to study channel partitioning protocols random access protocols and taking-turns protocols so now let's dive down a little bit more into the details of these protocols and see their use in real link-layer networks we're going to look at two channel partitioning mac protocols time division multiple access and frequency division multiple access and you may recall we encountered these earlier in chapter one when we were looking at access networks let's look at tdma first in tdma access to the channel is divided up into rounds and each round is further divided into slots and each node is allocated one or more slots within a round if slots are unused or unassigned they're going to go idle so in this example here we see a six station local area network stations one three and four have packets to send and so they do so in their assigned slots and note that slots two five and six are going idle frequency division multiple access or fdma is similar to tdma except now the entire frequency band is going to be divided into sub-bands and each node is going to be allocated one or more sub-bands within this larger frequency band so just like slots are allocated in tdma frequency sub-bands are allocated in fdma and if a frequency sub-band is unassigned or a node has nothing to send on its assigned sub-band then that sub-band is going to go idle so in this example here nodes 1 3 and 4 have packets to send and send them on their frequency bands while frequency bands 2 5 and 6 are going idle
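to make the slot assignment idea concrete here's a minimal sketch in python of one tdma round the six-station setup the station numbering and the queued frames are hypothetical chosen just to mirror the example above

```python
# a minimal TDMA sketch (hypothetical six-station setup mirroring the example
# above, one statically assigned slot per station, not a real MAC implementation)

NUM_STATIONS = 6
# frames queued at each station: stations 1, 3 and 4 have something to send
queues = {1: ["frame-A"], 2: [], 3: ["frame-B"], 4: ["frame-C"], 5: [], 6: []}

def run_tdma_round(queues):
    """Walk through one round; a slot whose owner has nothing to send goes idle."""
    for slot in range(1, NUM_STATIONS + 1):
        station = slot                      # slot i is statically assigned to station i
        if queues[station]:
            print(f"slot {slot}: station {station} transmits {queues[station].pop(0)}")
        else:
            print(f"slot {slot}: idle")

run_tdma_round(queues)
```

an fdma version would look structurally the same except that each station would be assigned a frequency sub-band it can use continuously rather than a time slot it must wait for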
now that we've covered channel partitioning protocols tdma and fdma let's next turn our attention to random access protocols now remember with random access protocols there's no a priori coordination among nodes there's no sense of turn and as a result two or more nodes may transmit simultaneously resulting in collisions so at the heart of the random access problem are going to be effective ways to either avoid or recover from such collisions the first random access protocol invented the aloha protocol was really also one of the simplest but it pioneered two key ideas that we'll see in all later random access protocols the first idea was to allow collisions to happen in the first place remember tdma and fdma that we just looked at avoid collisions by carefully allocating time slots and frequency bands the second key innovation in aloha was to use randomization to recover from collisions so here's the setting for slotted aloha let's assume that all frames are the same size that time is divided into time slots and that the length of a time slot is the time needed to transmit a single frame nodes are synchronized they've got a common clock and nodes begin a frame transmission only at the start of slot times and of course if two or more nodes transmit in a slot there's going to be a collision and these collisions are detected by the senders and here's how slotted aloha operates it's really simple when a node has a frame to send it simply transmits that frame in the next slot couldn't be simpler if there are no collisions we've got a success however if there is a collision a node's going to retransmit that frame in each subsequent slot with probability p until it's successful and so here we see the use of randomization after a collision the colliding nodes are going to randomize when they attempt a retransmission of that message and you can see why that has to happen if two nodes were to always retransmit their messages in the following slot they'd collide forever with randomization that's to say transmitting with probability p hopefully only one of these nodes is going to transmit and be successful and the other will transmit in some later slot we'll see that randomization is used in many real world link layer random access protocols including ethernet and wi-fi even though the randomization there is done in a slightly different way it was aloha that first put the word random in random access protocols
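here's a minimal sketch in python of that slotted aloha behavior a fresh frame is sent in the very next slot and after a collision each involved node retransmits in every subsequent slot with probability p the node count the retransmission probability and the slot count are assumed values just for illustration

```python
import random

# a minimal slotted ALOHA sketch (assumed parameters and node setup, not a
# real MAC implementation): a fresh frame is sent in the very next slot, and
# after a collision each involved node retransmits in every subsequent slot
# with probability p until it succeeds
P = 0.5           # retransmission probability after a collision
NUM_SLOTS = 10

random.seed(7)
fresh = {1, 2, 3}          # nodes with a brand-new frame to send
backlogged = set()         # nodes whose frame has already collided

for slot in range(1, NUM_SLOTS + 1):
    transmitters = set(fresh)                                   # fresh frames always go
    transmitters |= {n for n in backlogged if random.random() < P}
    if len(transmitters) == 1:
        node = transmitters.pop()
        print(f"slot {slot}: node {node} transmits alone -> success")
        fresh.discard(node)
        backlogged.discard(node)
    elif transmitters:
        print(f"slot {slot}: collision among {sorted(transmitters)}")
        backlogged |= transmitters
        fresh -= transmitters
    else:
        print(f"slot {slot}: idle slot")
```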
here's an example of slotted aloha in action in the first time slot nodes 1 2 and 3 all transmit so there's a collision these three nodes then randomize their retransmission times and it so happens that in the next time slot no one transmits then nodes 1 and 2 both transmit and collide again in the third time slot nodes 1 and 2 randomize again and finally in slot 4 here node 2 transmits all by itself and is successful node 1 later transmits successfully here as does node 3 a slot later well we can identify a number of pros and cons about slotted aloha as a random access protocol a single active node can continuously transmit at the full rate of the channel so that's good it's highly decentralized the only synchronization among the nodes is really the slot timing and finally it's simple you've got a frame you send it that's really easy on the con side there's some synchronization overhead and the major issues here are the wasting of transmission slots
due to collisions and due to idle slots and it's not too hard to quantify the cost of these wasted slots so let's do so let's define the efficiency of slotted aloha as the long run fraction of slots that are successful when n nodes always have a frame to send and transmit in a slot with probability p it's a little bit of combinatorics and calculus don't worry if you're rusty on this or haven't seen it we can proceed as follows for a given node to be successful it needs to transmit and this is going to happen with probability p and the other n minus 1 nodes need to not transmit and this happens with probability (1 - p) to the (n - 1) power so the probability that a given node is successful in a slot is p(1 - p)^(n-1) and the probability that any one node among the n is successful is just n times this quantity n p (1 - p)^(n-1) finally if we want the value of p that maximizes efficiency we maximize over all values of p and let n grow large and we find that the maximum efficiency of slotted aloha is just 1/e which is about 0.37
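for reference here's the same calculation written out compactly in latex under the stated assumptions n nodes that always have a frame to send each transmitting independently with probability p in every slot

```latex
% Slotted ALOHA efficiency: standard derivation, assuming N backlogged nodes
% each transmitting independently with probability p in every slot.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  P(\text{a given node succeeds in a slot}) &= p\,(1-p)^{N-1} \\
  P(\text{exactly one of the $N$ nodes succeeds}) &= N\,p\,(1-p)^{N-1}
\end{align*}
Setting the derivative with respect to $p$ to zero gives $p^{*} = 1/N$, so the
maximum efficiency is
\begin{align*}
  N \cdot \frac{1}{N}\left(1-\frac{1}{N}\right)^{N-1}
    = \left(1-\frac{1}{N}\right)^{N-1}
    \;\longrightarrow\; \frac{1}{e} \approx 0.37
  \quad\text{as } N \to \infty.
\end{align*}
\end{document}
```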
wow that's to say in the long run and in the best case only 37 percent of the slots are going to successfully carry a frame so if you have a 1 megabit per second channel for example the maximum throughput you could achieve is just 370 kilobits per second this then is the price the cost of using the slotted aloha protocol and i'll note that a simpler version of aloha without slot boundaries known as pure aloha where you just transmit when you have a frame and again randomize after a collision has an efficiency that's even lower just half of this already pretty small value of 0.37 another class of random access protocols known as carrier sense multiple access csma can overcome these inefficiencies in aloha you can think of an aloha sender as sort of an impolite conversationalist impolite in the sense that well you've got something to say you just say it csma protocols are more like polite conversationalists more polite in the sense that they listen before they speak in simple csma you listen before you transmit that is you sense the channel and if the
channel is sensed idle you transmit the entire frame just as in aloha if the channel is sensed busy you don't transmit you defer your transmission randomly until a later point in time sometimes this is called random backoff the human analogy here is clear your parents and your teachers hopefully taught you to listen before you speak and not to interrupt others an even more polite version of csma is known as csma with collision detection csma/cd in csma/cd again you never speak when others are speaking but if you do start to speak and someone else starts speaking
at the same time a collision can still occur these transmissions can be detected within a relatively short period of time and the colliding transmissions aborted thereby reducing channel wastage rather than transmitting the full length of a colliding frame you stop as soon as a collision is detected collision detection is pretty easy in a wired setting but it's more difficult over wireless links as we'll see in chapter 7 now you might be wondering with csma being a polite conversationalist listening before speaking how there can still be collisions after all you listen before you speak but you've all undoubtedly had the experience of you and a friend in conversation both listening before speaking and yet both starting to speak at the same time as we saw in this video earlier i'm good i'm good thank you thanks and the issue here has to do with propagation delay and the distance between nodes it takes time for a transmission sent by one node to reach another node in particular even though a node senses the channel idle it doesn't necessarily mean that another node hasn't already begun transmitting but that transmission just hasn't had time to propagate
yet to the node performing the carrier sensing here's an example illustrating this idea four nodes here are laid out across the x-axis separated in space the y-axis going down represents increasing time at t 0 node 2 senses the channel idle and begins transmitting you can see its transmission in yellow here propagating to the left and to the right as time increases so far so good at t1 node 4 senses the channel idle since the beginning of node two's transmission hasn't reached it yet now we know that node two's already started transmitting but because of the
signal propagation delays node four doesn't know that yet so node four senses the channel idle and begins transmitting and its signal propagates left and right shown in red here eventually node 2 and node 4's transmissions collide and interfere with each other in the checkered red and yellow areas here so even if nodes perform carrier sensing collisions can happen due to the physical distances between nodes and signal propagation delays in order to minimize the amount of wasted transmission time a transmitting node can perform what's known as collision detection to stop transmitting if it hears that its
own transmissions are being interfered with in this case using collision detection rather than transmitting the full frame that has suffered a collision a node will only transmit part of that frame thus freeing up the channel sooner that's a win well we've said that ethernet uses carrier sense multiple access with collision detection so let's go take a closer look at how that operates just like any link layer protocol ethernet receives the datagram from the network layer encapsulates that datagram into a link layer ethernet frame and then gets ready to transmit that frame ethernet is going to use csma/cd to determine when exactly to transmit that frame first ethernet performs carrier sensing if the channel is sensed idle it begins transmitting the frame immediately on the other hand if the channel is sensed busy ethernet continues to sense the channel waits until the channel goes idle and then transmits the frame immediately if the entire frame is transmitted without a collision great we're done but if another interfering transmission is detected while ethernet is sending its frame the frame transmission is going to be stopped and a short jam signal will be sent after aborting the frame transmission ethernet then enters the binary backoff phase it's also called the exponential backoff phase and this is how ethernet performs randomization following a collision after a given frame has experienced a collision for the nth time ethernet is going to choose a value k at random from the set 0 1 up to 2 to the n minus 1 wait k times 512 bit times and then return to step 2 the carrier sensing step so there you have it that's how ethernet performs multiple access
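here's a minimal sketch in python of the transmit loop just described carrier sense transmit detect a collision jam and back off the channel object and its methods busy transmit collision jam and wait are hypothetical stand-ins used only to show the control flow this is not a real nic implementation

```python
import random

BIT_TIME = 1 / 10_000_000        # assumed 10 Mbps classic Ethernet bit time, in seconds
MAX_BACKOFF_EXPONENT = 10        # classic Ethernet caps the backoff exponent at 10
MAX_ATTEMPTS = 16                # give up after 16 collisions

def send_frame(frame, channel):
    """Ethernet-style CSMA/CD transmit loop (sketch; `channel` is a stand-in
    object assumed to expose busy(), transmit(), collision(), jam(), wait())."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        # step 1/2: carrier sense -- wait until the channel goes idle
        while channel.busy():
            channel.wait(BIT_TIME)
        channel.transmit(frame)              # start sending immediately
        if not channel.collision():          # no interfering transmission detected
            return True                      # entire frame sent: success
        channel.jam()                        # abort and send a short jam signal
        # binary (exponential) backoff: after the nth collision, pick
        # K uniformly from {0, 1, ..., 2**n - 1} and wait K * 512 bit times
        n = min(attempt, MAX_BACKOFF_EXPONENT)
        k = random.randrange(2 ** n)
        channel.wait(k * 512 * BIT_TIME)
    return False                             # too many collisions: give up
```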
and in closing it's worth saying just a few more words about this exponential backoff phase note that as a frame experiences more and more collisions it's more and more likely that there are a larger number of nodes with frames to send and in this case the randomization interval is going to grow longer and longer as well so we don't have a large number of nodes trying to randomize over just a short period of time and that's exactly what you want randomizing over a short period of time when there may be just a few senders and randomizing over a longer period of time when there are likely to be a larger number of senders i've always thought that that's a really nice really well-designed property of ethernet's binary backoff algorithm well we've now covered two of our three broad classes of multiple access protocols channel partitioning protocols and random access protocols the third and final broad class of protocols are what we might call taking-turns protocols used in bluetooth for example and relatively simple let's take a look but before diving into taking-turns protocols let's first motivate the need for yet a third class of multiple access protocols well we've seen with channel partitioning mac protocols that there are both pros and cons channel partitioning protocols are
able to share the channel efficiently and fairly at high loads the channel can be run at 100 percent utilization but at low loads they're inefficient there's delay in accessing the channel and if only one node has data to send it's still only going to be able to access 1/n of the bandwidth over a period of time we've seen that random access mac protocols have pros and cons as well and these pros and cons are sort of the opposite the dual of what we saw with channel partitioning protocols they're very efficient at low loads there are few collisions and a single node can fully utilize the channel all by itself if it's the only one that has data to send but at high loads there are going to be collisions and that's overhead we can now turn to taking-turns protocols which are trying to take the best from both of these worlds the channel is going to be allocated explicitly so there'll be no collisions but on the other hand nodes won't hold the channel for long if they have nothing to send and as we'll see there are basically two approaches taken polling and token
passing a polling protocol has a centralized controller which coordinates access to the channel the controller sends polling messages to clients explicitly allocating the channel to a specific client if that client has something to send when a client is done transmitting its frame or if it has nothing to send the centralized controller will then poll the next node in the polling sequence and so on and you can see the polling protocol in operation here node 1 is first polled and sends its message to the controller so note here that frame transmissions go from the nodes to the central controller and then from the central controller back out to the nodes after node 1 is polled and sends its message node 2 is polled it's got nothing to send so it returns a short sort of i-have-nothing-to-send message to the controller node 3 is polled and so on so it's conceptually simple of course we're going to need a protocol for a client node to join and leave the polling sequence we'll see how that's done when we cover bluetooth in the next chapter
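here's a minimal sketch in python of that polling loop the node names the queued frames and the nothing-to-send reply are hypothetical just to show the turn-taking structure

```python
# minimal polling sketch: a central controller polls each node in a fixed
# sequence; a polled node sends one queued frame or reports "nothing to send"
# (hypothetical node setup and message format, just to show the structure)

polling_sequence = ["node1", "node2", "node3"]
queues = {"node1": ["frame-A"], "node2": [], "node3": ["frame-B", "frame-C"]}

def poll(node):
    """Controller polls `node`; the node replies with a frame or None."""
    if queues[node]:
        return queues[node].pop(0)
    return None                      # the "i have nothing to send" reply

def run_polling_round():
    for node in polling_sequence:
        reply = poll(node)
        if reply is None:
            print(f"{node}: nothing to send, controller moves on")
        else:
            print(f"{node}: transmits {reply} to the controller")

run_polling_round()
```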
so on the plus side there are no collisions and each node can get 1/n of the channel and if a node has nothing to send it won't hold the channel for long there's just a short polling delay until another node gets its chance on the minus side there's a bit of control overhead associated with polling and of course the centralized controller represents a single point of failure in token passing nodes are typically arranged into a ring topology and a control token message is explicitly passed from one node to the next in some order if you're a node holding the token that means it's your turn to transmit your message when you're done or when you have nothing to send you pass the token message on to the next node in the token passing sequence and so on so in this example here node 1 has nothing to send and so immediately passes the token on to node 2 node 2 holds the token transmits its message and then passes the token message on when it's done and so on
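and here's a correspondingly minimal sketch of token passing again with a hypothetical ring of nodes and queues just to show how the right to transmit circulates

```python
from itertools import cycle

# minimal token-passing sketch: the token circulates around a ring of nodes;
# only the token holder may transmit (hypothetical ring and queues)
ring = ["node1", "node2", "node3"]
queues = {"node1": [], "node2": ["frame-X"], "node3": ["frame-Y"]}

token_holder = cycle(ring)           # the token moves around the ring forever
for _ in range(6):                   # show a couple of trips around the ring
    node = next(token_holder)
    if queues[node]:
        print(f"{node} holds the token and transmits {queues[node].pop(0)}")
    else:
        print(f"{node} has nothing to send and passes the token on")
```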
well token passing has both the advantages and the disadvantages that we saw before with polling there's a join and leave protocol that's going to be needed some control overhead some access latency and in this case the token represents a single point of failure let's wrap up our study of multiple access protocols by now looking at how the principles we've learned are put into practice and we'll use the cable access network as an example since it uses tdma fdma and random access all in one system and you may remember that we briefly looked at cable access networks way back in chapter one there's a cable head end here where we find the cable modem termination system cmts the so called docsis specification that
stands for data over cable service interface specification specifies the cable access data network architecture and its protocols docsis specifies fdm to divide the downstream network that's to say from the head end to the homes and the upstream network from the homes up to the head end each into multiple frequency channels both the upstream and the downstream channels are broadcast channels the downstream channels are pretty straightforward since it's only the cmts that's transmitting on the downstream channel we've got one sender and many receivers it's the upstream channels that are more interesting since the cable modems in multiple homes
are going to need to share that upstream broadcast channel so let's take a look now at the upstream channel an upstream channel at a given frequency is divided into tdma slots and you might ask yourself well how are these slots assigned well some upstream slots are assigned to nodes that is assigned to homes by the cmts using what's called a map frame that's sent downstream from the cmts to the homes as shown here the map frame explicitly indicates which nodes can transmit in which slots in the next set of upstream slots other upstream slots however aren't pre-assigned and instead nodes transmit into these unassigned slots in a random access manner using binary backoff as shown here well so there you have it the cable network uses fdm to divide the frequency band into sub-channels divides a frequency sub-channel into tdma slots and the cmts explicitly assigns some of those slots to specific homes on the basis of requests that are being made for the slots but other slots are accessed in a random access manner and in fact it's those requests for upstream slots themselves requests transmitted on the upstream channel by the homes that are sent in a random access manner using binary backoff
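as a rough sketch of this idea here's how a cable modem's choice for the next set of upstream slots might look in python the map structure the field names and the slot counts are hypothetical simplifications and not the actual docsis message formats

```python
import random

# very rough sketch of how a cable modem might use a downstream MAP frame
# (field names, slot counts and backoff handling are hypothetical
#  simplifications, not the actual DOCSIS message formats)

map_frame = {
    "assigned_slots": {3: "home-A", 4: "home-B"},   # CMTS-granted TDMA slots
    "contention_slots": [0, 1, 2],                  # open for random access requests
}

def choose_action(home, want_to_request, backoff_window, map_frame):
    """Decide what this home does in the next set of upstream slots."""
    # use any slot the CMTS explicitly assigned to us
    granted = [s for s, h in map_frame["assigned_slots"].items() if h == home]
    if granted:
        return f"{home}: send data frame in assigned slot {granted[0]}"
    if want_to_request:
        # contend for a request slot: a random deferral models the backoff
        # (in real binary backoff the window would double after each collision)
        defer = random.randrange(backoff_window)
        slots = map_frame["contention_slots"]
        if defer < len(slots):
            return f"{home}: send slot request in contention slot {slots[defer]}"
        return f"{home}: back off, try again in a later MAP ({defer=})"
    return f"{home}: nothing to send, stay idle"

for home in ["home-A", "home-B", "home-C"]:
    print(choose_action(home, want_to_request=True, backoff_window=4, map_frame=map_frame))
```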
so the cable network's a great example for studying multiple access fdma tdma and random access all in one network well that wraps up our study of multiple access links and protocols and we've covered a lot of ground here we started by talking about the characteristics of the broadcast channel we then studied three broad classes of multiple access protocols channel partitioning protocols random access protocols and taking-turns protocols finally we looked at putting those protocols into practice by taking a look at the cable access standard known as docsis coming up next we're going to take a look at switched local area networks