Diffstat (limited to '2001/networktour-birmingham2001')
-rw-r--r--  2001/networktour-birmingham2001/abstract                 |   9
-rw-r--r--  2001/networktour-birmingham2001/packet-journey-2.4.sgml  | 116
2 files changed, 125 insertions, 0 deletions
diff --git a/2001/networktour-birmingham2001/abstract b/2001/networktour-birmingham2001/abstract
new file mode 100644
index 0000000..3ab7f81
--- /dev/null
+++ b/2001/networktour-birmingham2001/abstract
@@ -0,0 +1,9 @@
+Technical Presentation: A tour through the Linux 2.4 network stack
+
+Linux-based systems are known for performance and reliability in the area of
+networking. This presentation will give a tour through the Linux 2.4 kernel
+network stack, its structure and implementation. Some of the topics covered
+are: Network hardware drivers, core network functions, IPv4 protocol stack,
+sockets implementation, zero-copy TCP.
+
+The author of this presentation is Harald Welte <laforge@gnumonks.org>
diff --git a/2001/networktour-birmingham2001/packet-journey-2.4.sgml b/2001/networktour-birmingham2001/packet-journey-2.4.sgml
new file mode 100644
index 0000000..94e09d9
--- /dev/null
+++ b/2001/networktour-birmingham2001/packet-journey-2.4.sgml
@@ -0,0 +1,116 @@
+<!doctype linuxdoc system>
+
+<article>
+
+<title>The journey of a packet through the Linux 2.4 network stack</title>
+<author>Harald Welte <tt>laforge@gnumonks.org</tt>
+<date>$Revision: 537 $, $Date: 2004-10-10 15:04:54 +0200 (Sun, 10 Oct 2004) $</date>
+
+<!-- $Id: packet-journey-2.4.sgml 537 2004-10-10 13:04:54Z laforge $ -->
+
+<abstract>
+This document describes the journey of a network packet inside the Linux 2.4.x kernel. This has changed drastically compared to 2.2, because the globally serialized bottom half was abandoned in favor of the new softirq system.
+
+<toc>
+
+<sect>Preface
+<p>
+I have to apologize in advance: this document has a strong focus on the "default case", i.e. the x86 architecture and IPv4 packets which get forwarded.
+
+<p>
+I am definitely no kernel guru, and the information provided by this document may be wrong. So don't expect too much; I will always appreciate your comments and bugfixes.
+
+<sect>Receiving the packet
+
+<sect1>The receive interrupt
+<p>
+If the network card receives an Ethernet frame which matches the local MAC address or is a link-layer broadcast, it issues an interrupt.
+The network driver for this particular card handles the interrupt and fetches the packet data into RAM via DMA, PIO or whatever mechanism the hardware provides. It then allocates an skb and calls one of the protocol-independent device support routines: <tt>net/core/dev.c:netif_rx(skb)</tt>.
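+<p>
+To make the calling sequence concrete, here is a minimal sketch of such a receive interrupt handler. All <tt>mydev_*</tt> names are hypothetical and only illustrate where a real driver would talk to its hardware:
+<tscreen><verb>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+
+/* hypothetical RX interrupt handler of a network driver */
+static void mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+        struct net_device *dev = (struct net_device *) dev_id;
+        struct sk_buff *skb;
+        int len = mydev_packet_length(dev);     /* hardware-specific */
+
+        skb = dev_alloc_skb(len + 2);           /* room for the frame */
+        if (skb == NULL)
+                return;                         /* out of memory: drop */
+
+        skb->dev = dev;
+        /* copy the frame from the card into the skb (DMA/PIO) */
+        mydev_read_packet(dev, skb_put(skb, len));
+        /* determine the layer 3 protocol from the ethernet header */
+        skb->protocol = eth_type_trans(skb, dev);
+
+        /* hand the packet to the protocol-independent layer */
+        netif_rx(skb);
+}
+</verb></tscreen>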
+<p>
+If the driver didn't already timestamp the skb, it is timestamped now. Afterwards the skb gets enqueued in the appropriate queue of the processor handling this packet. If the queue backlog is full, the packet is dropped at this point. After enqueuing the skb, the receive softirq is marked for execution via <tt>include/linux/interrupt.h:__cpu_raise_softirq()</tt>.
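+<p>
+In slightly simplified form (interrupt disabling, statistics and congestion feedback stripped), the core of <tt>netif_rx()</tt> looks about like this:
+<tscreen><verb>
+/* simplified logic of net/core/dev.c:netif_rx() (2.4) */
+int netif_rx(struct sk_buff *skb)
+{
+        int this_cpu = smp_processor_id();
+        struct softnet_data *queue = &softnet_data[this_cpu];
+
+        /* timestamp the packet if the driver did not do so */
+        if (skb->stamp.tv_sec == 0)
+                get_fast_time(&skb->stamp);
+
+        if (queue->input_pkt_queue.qlen <= netdev_max_backlog) {
+                /* enqueue on this CPU's backlog queue ... */
+                __skb_queue_tail(&queue->input_pkt_queue, skb);
+                /* ... and mark the RX softirq for execution */
+                __cpu_raise_softirq(this_cpu, NET_RX_SOFTIRQ);
+                return NET_RX_SUCCESS;
+        }
+
+        /* backlog full: drop the packet */
+        kfree_skb(skb);
+        return NET_RX_DROP;
+}
+</verb></tscreen>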
+<p>
+The interrupt handler exits and all interrupts are re-enabled.
+
+<sect1>The network RX softirq
+<p>
+Now we encounter one of the big changes between 2.2 and 2.4: the whole network stack is no longer a bottom half, but a softirq. Softirqs have the major advantage that they may run on more than one CPU simultaneously; bottom halves were guaranteed to run on only one CPU at a time.
+<p>
+Our network receive softirq is registered in <tt>net/core/dev.c:net_dev_init()</tt> using the function <tt>kernel/softirq.c:open_softirq()</tt> provided by the softirq subsystem.
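+<p>
+The registration itself is a one-liner per softirq; roughly (all unrelated initialization left out):
+<tscreen><verb>
+/* network softirq registration at boot time, along the lines
+ * of net/core/dev.c (2.4) */
+static int __init net_dev_init(void)
+{
+        /* ... per-CPU queue and device initialization ... */
+
+        open_softirq(NET_TX_SOFTIRQ, net_tx_action, NULL);
+        open_softirq(NET_RX_SOFTIRQ, net_rx_action, NULL);
+
+        return 0;
+}
+</verb></tscreen>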
+<p>
+Further handling of our packet is done in the network receive softirq (NET_RX_SOFTIRQ) which is called from <tt>kernel/softirq.c:do_softirq()</tt>. do_softirq() itself is called from three places within the kernel:
+<enum>
+<item>from <tt>arch/i386/kernel/irq.c:do_IRQ()</tt>, which is the generic IRQ handler
+<item>from <tt>arch/i386/kernel/entry.S</tt> in case the kernel just returned from a syscall
+<item>inside the main process scheduler in <tt>kernel/sched.c:schedule()</tt>
+</enum>
+<p>
+So if execution passes one of these points, do_softirq() is called. It detects that NET_RX_SOFTIRQ is marked and calls <tt>net/core/dev.c:net_rx_action()</tt>. Here the skb is dequeued from this CPU's receive queue and afterwards handed to the appropriate packet handler. In case of IPv4 this is the IPv4 packet handler.
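+<p>
+Stripped of budget accounting, packet taps and refcounting, the dispatch loop in net_rx_action() looks roughly as follows:
+<tscreen><verb>
+/* heavily simplified sketch of net/core/dev.c:net_rx_action() */
+static void net_rx_action(struct softirq_action *h)
+{
+        struct softnet_data *queue = &softnet_data[smp_processor_id()];
+        struct sk_buff *skb;
+
+        while ((skb = __skb_dequeue(&queue->input_pkt_queue)) != NULL) {
+                struct packet_type *ptype;
+
+                /* find the protocol handler matching skb->protocol,
+                 * e.g. ip_rcv() for ETH_P_IP */
+                for (ptype = ptype_base[ntohs(skb->protocol) & 15];
+                     ptype != NULL; ptype = ptype->next) {
+                        if (ptype->type == skb->protocol) {
+                                ptype->func(skb, skb->dev, ptype);
+                                break;
+                        }
+                }
+        }
+}
+</verb></tscreen>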
+
+<sect1>The IPv4 packet handler
+<p>
+The IP packet handler is registered via <tt>net/core/dev.c:dev_add_pack()</tt>, called from <tt>net/ipv4/ip_output.c:ip_init()</tt>.
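+<p>
+The registration boils down to filling in a <tt>struct packet_type</tt> and handing it to dev_add_pack(); roughly:
+<tscreen><verb>
+/* IPv4 handler registration, along the lines of
+ * net/ipv4/ip_output.c (2.4) */
+static struct packet_type ip_packet_type = {
+        __constant_htons(ETH_P_IP),     /* handle IPv4 frames   */
+        NULL,                           /* on any device        */
+        ip_rcv,                         /* the handler function */
+        (void *) 1,                     /* can cope with shared skbs */
+        NULL
+};
+
+void __init ip_init(void)
+{
+        dev_add_pack(&ip_packet_type);
+        /* ... routing and peer cache initialization ... */
+}
+</verb></tscreen>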
+<p>
+The IPv4 packet handling function is <tt>net/ipv4/ip_input.c:ip_rcv()</tt>. After some initial checks (if the packet is for this host, ...) the IP checksum is verified. Additional checks are done on the length and on the IP protocol version.
+<p>
+Every packet failing one of the sanity checks is dropped at this point.
+<p>
+If the packet passes the tests, we determine the size of the IP packet and trim the skb in case the transport medium has appended some padding.
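+<p>
+Condensed, the sanity checks in ip_rcv() look about like this:
+<tscreen><verb>
+/* condensed version of the checks in net/ipv4/ip_input.c:ip_rcv() */
+struct iphdr *iph = skb->nh.iph;
+
+/* is the header complete and plausible? */
+if (skb->len < sizeof(struct iphdr) || skb->len < iph->ihl * 4)
+        goto inhdr_error;
+if (iph->ihl < 5 || iph->version != 4)
+        goto inhdr_error;
+/* verify the IP header checksum */
+if (ip_fast_csum((u8 *) iph, iph->ihl) != 0)
+        goto inhdr_error;
+
+/* trim padding the link layer may have appended */
+__skb_trim(skb, ntohs(iph->tot_len));
+</verb></tscreen>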
+<p>
+Now, for the first time, one of the netfilter hooks is called.
+<p>
+Netfilter provides a generic and abstract interface to the standard routing code. This is currently used for packet filtering, mangling, NAT and queuing packets to userspace. For further reference see my conference paper 'The netfilter subsystem in Linux 2.4' or one of Rusty's unreliable guides, e.g. the netfilter-hacking-guide.
+<p>
+After successful traversal of the netfilter hook, <tt>net/ipv4/ip_input.c:ip_rcv_finish()</tt> is called.
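+<p>
+The hook traversal is hidden behind the NF_HOOK() macro; the call at the end of ip_rcv() looks like this:
+<tscreen><verb>
+/* end of net/ipv4/ip_input.c:ip_rcv() (2.4): run the packet
+ * through the PRE_ROUTING hook; if the verdict is NF_ACCEPT,
+ * processing continues in ip_rcv_finish() */
+return NF_HOOK(PF_INET, NF_IP_PRE_ROUTING, skb, dev, NULL,
+               ip_rcv_finish);
+</verb></tscreen>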
+<p>
+Inside ip_rcv_finish(), the packet's destination is determined by calling the routing function <tt>net/ipv4/route.c:ip_route_input()</tt>. Furthermore, if our IP packet has IP options, they are processed now. Depending on the routing decision made by <tt>net/ipv4/route.c:ip_route_input_slow()</tt>, the journey of our packet continues in one of the following functions (the dispatch itself is shown after the list):
+
+<descrip>
+<tag>net/ipv4/ip_input.c:ip_local_deliver()</tag>
+The packet's destination is local; we have to process the layer 4 protocol and pass the packet to a userspace process.
+
+<tag>net/ipv4/ip_forward.c:ip_forward()</tag>
+The packet's destination is not local; we have to forward it to another network.
+
+<tag>net/ipv4/route.c:ip_error()</tag>
+An error occurred; we are unable to find an appropriate routing table entry for this packet.
+
+<tag>net/ipv4/ipmr.c:ip_mr_input()</tag>
+It is a multicast packet and we have to do some multicast routing.
+</descrip>
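+<p>
+The dispatch itself is data-driven: ip_route_input() stores the input function in the dst_entry attached to the skb, and ip_rcv_finish() simply calls it:
+<tscreen><verb>
+/* end of ip_rcv_finish() (2.4): skb->dst->input was set by the
+ * routing code to one of the four functions listed above */
+return skb->dst->input(skb);
+</verb></tscreen>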
+
+<sect>Packet forwarding to another device
+
+<p>
+If the routing code decided that this packet has to be forwarded to another device, the function <tt>net/ipv4/ip_forward.c:ip_forward()</tt> is called.
+
+<p>
+The first task of this function is to check the IP header's TTL. If it is &lt;= 1, we drop the packet and return an ICMP time exceeded message to the sender.
+<p>
+We check whether the skb has enough headroom for the destination device's link-layer header and expand the skb if necessary.
+<p>
+Next the TTL is decremented by one.
+<p>
+If our new packet is bigger than the MTU of the destination device and the Don't Fragment (DF) bit in the IP header is set, we drop the packet and send an ICMP fragmentation needed message to the sender.
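+<p>
+Put together, and with error handling reduced to its essence, these checks look roughly like this:
+<tscreen><verb>
+/* condensed sketch of the checks in net/ipv4/ip_forward.c:
+ * ip_forward() (2.4); locking and statistics omitted */
+struct iphdr *iph = skb->nh.iph;
+struct rtable *rt = (struct rtable *) skb->dst;
+
+if (iph->ttl <= 1) {
+        /* TTL expired: tell the sender and drop */
+        icmp_send(skb, ICMP_TIME_EXCEEDED, ICMP_EXC_TTL, 0);
+        goto drop;
+}
+
+if (skb->len > rt->u.dst.pmtu) {
+        if (ntohs(iph->frag_off) & IP_DF) {
+                /* too big and DF set: fragmentation needed */
+                icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+                          htonl(rt->u.dst.pmtu));
+                goto drop;
+        }
+}
+
+/* decrement the TTL; the header checksum is adjusted as well */
+ip_decrease_ttl(iph);
+</verb></tscreen>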
+
+<p>
+Finally it is time to call another one of the netfilter hooks - this time it is the NF_IP_FORWARD hook.
+
+<p>
+Assuming that the netfilter hook returns an NF_ACCEPT verdict, the function <tt>net/ipv4/ip_forward.c:ip_forward_finish()</tt> is the next step in our packet's journey.
+
+<p>
+ip_forward_finish() itself checks whether we need to set any additional options in the IP header and lets <tt>net/ipv4/ip_options.c:ip_forward_options()</tt> do this. Afterwards it calls <tt>include/net/ip.h:ip_send()</tt>.
+
+<p>
+If we need some fragmentation, <tt>net/ipv4/ip_output.c:ip_fragment()</tt> gets called; otherwise we continue in <tt>net/ipv4/ip_output.c:ip_finish_output()</tt>.
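+<p>
+ip_send() is a small inline function that makes exactly this decision; from <tt>include/net/ip.h</tt> (2.4), approximately:
+<tscreen><verb>
+/* fragment if the packet is larger than the path MTU,
+ * otherwise transmit it directly */
+static inline int ip_send(struct sk_buff *skb)
+{
+        if (skb->len > skb->dst->pmtu)
+                return ip_fragment(skb, ip_finish_output);
+        return ip_finish_output(skb);
+}
+</verb></tscreen>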
+
+<p>
+ip_finish_output() again does nothing other than call the netfilter postrouting hook NF_IP_POST_ROUTING and, on successful traversal of this hook, call ip_finish_output2().
+
+<p>
+ip_finish_output2() prepends the hardware (link-layer) header to our skb and calls <tt>dst->hh->hh_output()</tt>, which usually ends up being <tt>net/core/dev.c:dev_queue_xmit()</tt>.
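+<p>
+Stripped of locking, ip_finish_output2() amounts to roughly the following:
+<tscreen><verb>
+/* simplified sketch of ip_finish_output2() (2.4): copy the cached
+ * hardware header in front of the IP header and transmit */
+static inline int ip_finish_output2(struct sk_buff *skb)
+{
+        struct dst_entry *dst = skb->dst;
+        struct hh_cache *hh = dst->hh;
+
+        if (hh != NULL) {
+                /* prepend the cached link-layer header */
+                memcpy(skb->data - 16, hh->hh_data, 16);
+                skb_push(skb, hh->hh_len);
+                return hh->hh_output(skb);    /* usually dev_queue_xmit() */
+        }
+        if (dst->neighbour != NULL)
+                /* no cached header yet: let the neighbour code resolve it */
+                return dst->neighbour->output(skb);
+
+        kfree_skb(skb);
+        return -EINVAL;
+}
+</verb></tscreen>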
+<p>
+dev_queue_xmit() enqueues the packet for transmission by the network device.
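+<p>
+In essence, dev_queue_xmit() hands the skb to the device's queueing discipline, which in turn feeds the driver's hard_start_xmit() routine:
+<tscreen><verb>
+/* core of net/core/dev.c:dev_queue_xmit() (2.4); checksumming,
+ * fragment linearization and queueless devices omitted */
+struct net_device *dev = skb->dev;
+struct Qdisc *q = dev->qdisc;
+
+if (q->enqueue != NULL) {
+        /* hand the skb to the device's queueing discipline ... */
+        int ret = q->enqueue(skb, q);
+
+        /* ... and give the qdisc a chance to feed the driver,
+         * which finally calls dev->hard_start_xmit() */
+        qdisc_run(dev);
+        return ret;
+}
+</verb></tscreen>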
+
+</article>
+