

Lecture 10: Transport Layer – UDP & Application – DHCP

Goals


we’ve now spent three lectures on IP: one with intuition, one with technical details, and one with support protocols (ARP & ICMP) and wider implications (autonomous systems & routing protocols)

it’s now time to move up another layer

before doing so, however, I want to reinforce an idea that I’ve mentioned previously but will affect us a bit more in the coming weeks

the diagram I showed on the first day of class identifies several different protocols and gives an idea of how they relate to each other

we have already seen how IP packets can be encapsulated inside Ethernet frames

and ICMP messages inside IP packets

most of the remaining protocols we’re going to be looking at were developed together, and comprise the TCP/IP Protocol Suite

so while, yes, UDP and DHCP and DNS (which are this week’s topics) are "separate" protocols, they were imagined and designed and engineered alongside IP because they all work together

I don’t want to give the impression that these are unrelated technologies, because they are very much not


now that that’s out of the way, let’s review what IP does

the one sentence description is that it handles the delivery of packets across interconnected networks

one subtlety, apparent from the structure of the IP headers, is that IP handles delivery between machines (hosts)

it does not provide any way to identify a process running on a host

this is limiting: as established on the first day, our goal here is to allow processes to communicate

there may be many processes running on a single host, all of which want to send and receive data to processes running on other hosts

so the addressing provided by IP is insufficient: we need a way to identify a process in addition to the host addressing that IP already provides

(one could imagine assigning a different IP address to each process on a single machine, but that would exacerbate the fact that we’ve already run out of IP addresses)


there are some other things worth noting that IP does not do

while, as we saw, there is a mechanism for a router to tell the sender when the TTL reaches zero, there is not a mechanism for a router to send back a notification when a packet is dropped due to failed checksum

so if a cosmic ray hits the wire and scrambles some bits, the next router in the path will find the checksum no longer checks out and silently drop the packet

the original sender of the packet won’t know the packet didn’t arrive at its destination, and the destination won’t know there was even a packet in flight in the first place


nor does IP enforce any particular path of routers

one could imagine a network topology with enough redundant interconnections that there are multiple routes from A to B

IP provides no guarantee that two packets sent from A to B will take the same route

this is perhaps more of a feature than a bug, because it gives individual routers the autonomy to choose the best route at any given instant

this means a router could choose to avoid links that are especially crowded or that have been severed, and thus increase the likelihood the packet will arrive at its destination


relatedly, however, IP provides no guarantees about the ordering of packets

that is, if host A sends packets 1, 2, and 3 to host B, in that order

there is no guarantee that the packets will arrive at host B in the same order

this may or may not be a bad thing: it depends entirely on what data the packets carry

if they’re carrying, say, the successive parts of an image, the order definitely matters because otherwise the bottom of the image might be shown first, which would just look silly

if they’re carrying sound in a voice conversation, it might be acceptable that they arrive out of order, because it’ll just mean some slightly garbled audio for a moment


of the shortcomings of IP described above, only one really needs to be solved for every usage: the ability to identify individual processes running on a host

the other problems might need to be solved depending on the needs of the particular application

to that end, then, there are two different transport-layer protocols built on top of IP

UDP, which we will cover this week, only fixes the process-identification problem

TCP, which we will spend a few subsequent weeks on, aims to fix the others

application writers can then choose which transport protocol to use based on their individual needs


note also that we are now talking about applications

while UDP and TCP are (usually) implemented in the kernel, they are directly used by processes

so everything "above" UDP and TCP in the protocol diagram is code in user-written programs (as opposed to kernel code)

put another way, a process desiring to send/receive data using UDP or TCP will hand that data to the kernel (using system calls) and that data will become the payload of a UDP or TCP data unit


so: UDP

the User Datagram Protocol, defined in RFC 768

the RFC is quite brief, but it’s difficult to gain a full understanding from reading it on its own

the one thing to note first, though, is that the unit of communication in UDP is called a datagram

so we’ll have a UDP datagram encapsulated in an IP packet which is itself encapsulated in an Ethernet frame


but the big thing: identifying processes

UDP uses the notion of ports

a port is a 16-bit number: 0 through 65535

a process can claim a port number by binding (a socket) to it

then, when a UDP datagram arrives with a destination port number that matches the bound port number, the kernel knows to deliver the datagram to that process

(recall that a socket, represented by a file descriptor, is the process-level abstraction used to represent one end of a network communication)
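as a concrete sketch of binding, here is what claiming a UDP port looks like in Python, which wraps the same socket(2)/bind(2) system calls; the port number 12345 is just the hypothetical value from the example below:

```python
import socket

# a hypothetical port number chosen for illustration
PORT = 12345

# create a UDP socket (SOCK_DGRAM) and bind it to a port; from now on,
# the kernel enqueues datagrams addressed to this port for this process
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", PORT))

addr = sock.getsockname()   # the (address, port) pair the kernel recorded
print(addr)
sock.close()
```

the same pattern works in C with socket(2) and bind(2) directly; Python just saves us the struct-filling boilerplate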


example: Process A running on Host A wants to send a chunk of data to Process B running on Host B

Process B first picks a port (ex: 12345) and binds to it, and the kernel on Host B records the association between UDP port 12345 and (the socket within) Process B

Process A then takes the data it wants to send and hands it to the kernel on Host A, saying "please deliver this data using UDP to port 12345 on Host B"

the kernel on Host A composes the UDP headers (which we’ll see shortly) and the data, stuffs that inside an appropriately-addressed IP packet, stuffs that inside an appropriately-addressed link-layer frame, and passes it down to the physical layer

once all the physical layers and link layers and network layers (routers) between Host A and Host B do their thing, the datagram-within-packet-within-frame arrives at Host B

the kernel on Host B looks at the "type" field (or equivalent) of the link layer frame, which identifies the payload as an IP packet, extracts the IP packet, and delivers it to the IP code

the IP code in the kernel on Host B looks at the "protocol" field of the IP headers, which identifies the payload as a UDP datagram, extracts the UDP datagram, and delivers it to the UDP code

the UDP code in the kernel on Host B looks at the "destination port number" of the UDP headers and consults its table that records which port number corresponds to which socket in which process and enqueues it thusly

then, when Process B on Host B next calls recvmsg(2) on that socket, the payload of the UDP datagram is copied into the process’ memory

(recvmsg(2) is new: it’s the equivalent of read(2) but for UDP)

(you may recall using recv(2) in Systems Programming: that served the same purpose for TCP)
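the whole exchange above can be sketched in a few lines of Python; for illustration both "processes" live in one program and talk over the loopback interface, and we let the kernel pick a free port rather than hard-coding 12345:

```python
import socket

# Process B's role: bind a UDP socket so the kernel knows where to
# deliver arriving datagrams
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))      # port 0: ask the kernel for any free port
port = server.getsockname()[1]

# Process A's role: hand data to the kernel, addressed to B's port;
# the kernel composes the UDP headers and sends the datagram
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello from A", ("127.0.0.1", port))

# back in B: recvfrom() blocks until a datagram arrives, then copies
# its payload into this process's memory (Python wraps the underlying
# recvfrom/recvmsg system calls)
payload, sender = server.recvfrom(4096)
print(payload)

client.close()
server.close()
```

note that the client never binds explicitly: the kernel assigns it an ephemeral source port automatically when it first sends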


if a particular port number is already bound to a socket (process), the kernel will not permit a second socket (process) to bind to it

this prevents ambiguity: otherwise a datagram might arrive and the kernel wouldn’t know which process to deliver it to
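we can watch the kernel refuse a second binding; in this sketch the first socket grabs a port and a second bind to the same port fails with EADDRINUSE:

```python
import errno
import socket

first = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
first.bind(("127.0.0.1", 0))           # kernel picks a free port
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
err = None
try:
    second.bind(("127.0.0.1", port))   # same port: the kernel says no
except OSError as e:
    err = e.errno
print("bind failed:", errno.errorcode[err])

first.close()
second.close()
```

(there are socket options like SO_REUSEADDR that relax this in specific situations, but the default behavior is exactly the refusal described above)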

exception: a process binds to a port and then forks, creating a child with the same open file descriptors, which include sockets

then there are two processes which each have a socket bound to that one port

in that case, if an appropriately-addressed UDP datagram arrives, there are no rules governing which process should get it

this kind of non-determinism is generally frowned upon (it can make debugging a nightmare) and so writing programs that do this is discouraged

I believe the standards are intentionally and explicitly vague on what is "supposed" to happen

upshot: don’t do this


back to our example, how does Host A know to send to port 12345?

two answers to this

first, there exist many protocols, central to the operation of the Internet, to which port numbers have been assigned

so DHCP, which we’ll talk about later today, gets to use UDP ports 67 and 68

the intention is that a process that doesn’t want anything to do with DHCP will avoid those ports

this intention is only moderately enforced: only processes running as the root user are permitted to bind to port numbers below 1024

the full list of reserved ports is shipped in the file /etc/services, with the name on the left, and the port number and protocol on the right
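programs rarely parse /etc/services by hand: the socket library exposes the same database through getservbyname(3), which maps a service name and protocol to its assigned port; the try/except is there because a stripped-down system might ship without a services database:

```python
import socket

# look up well-known port assignments from the services database
# (the same data that lives in /etc/services)
try:
    ssh_port = socket.getservbyname("ssh", "tcp")
    dns_port = socket.getservbyname("domain", "udp")
    print("ssh:", ssh_port, " dns:", dns_port)
except OSError:
    # no services database available on this system
    ssh_port = dns_port = None
```

on a typical system this prints ssh on port 22 and DNS ("domain") on port 53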

second, for random protocols that don’t merit codification in Internet standards, it’s usually the case that both processes involved will be written by the same group

and they can just hard-code the port number in the sending process and in the receiving process


this pattern, of one process making itself available for communication (ie, binding to a port) and then, at a later time, another process choosing to communicate with the first (ie, sending a datagram to that port), is extremely common and we will see it over and over again throughout the rest of the semester

and so, unsurprisingly, proper names for these roles have been adopted

a server is a process that makes itself available for communication, usually by binding to a port

a client is a process that chooses to communicate with a server, usually by sending a datagram/packet to a particular port on a particular host

the word "server" suggests (accurately!) that this process is often providing a service

the word "client" suggests (again, accurately) that this process is often requesting a service to be performed

examples!

you’ve used ssh to connect to weathertop: there is a process running on weathertop that is the "ssh server", whose job is to bind to port 22, accept packets that arrive on that port, and provide an encrypted shell session to clients

on your computer, you’ve run an "ssh client", whose job is to send packets to the ssh server running on weathertop

likewise, a web browser like Firefox, Chrome, or Safari is a client that requests a service ("deliver this webpage") of a web server (ie, a process running on a computer owned/operated by Google/Facebook/whomever)

we will see many more instances of this pattern over the coming weeks


there is a somewhat annoying ambiguity with the word "server"

I hold to the claim that "server" refers to a process that provides a (network) service

that process runs on a computer

the somewhat annoying ambiguity is that the computer on which the server process runs is often also itself called a server

so "Google operates a bunch of servers" usually means "Google operates a bunch of computers that run processes that provide services"

but one could also say "my computer at home runs a bunch of servers", which means that there are several processes running on a single machine, each of which provides a different (network) service

for example, I have a single computer at home that simultaneously runs an ssh server (process), a web server (process), a file server (process), and maybe some other things that I can’t remember right now

when talking about that machine and its duties, I wouldn’t use the word "process", hence the parentheticals: the context would imply that I’m talking about individual processes rather than the machine as a whole

and, of course, I also refer to that single machine as "a server"

it’s also possible that a single machine runs some processes that behave as servers and other processes that behave as clients

