Next: Chapter 18
Up: Notes on ``TCP/IP Illustrated''
Previous: Chapter 16
- p. 224
It should be noted that the acknowledgement described in
the third bullet point is an acknowledgement that the receiving TCP
has the segment of data, not that the application has it. When we look
at protocols such as SMTP, which are built on top of TCP, we will see
that there are application-level ``acknowledgements'' as well, to
indicate that the application has successfully received and
processed the data (which may be many segments in the case of a long
mail message). The TCP acknowledgement must not be used as a
substitute for these. The mistake is illustrated in the following
posting:
Hi, everyone
I have a question about possible duplicate message delivery over
TCP.
Here's the scene:
Time 1: 3-way handshake has been successfully finished. Client
sends a packet with data to server.
Time 2: Server receives the packet, sends the data to application
level.
Time 3: Before server sends out ACK, it crashes and reboots.
Time 4: Client times out and resends the packet with data.
Time 5: Server starts up, receives the resent packet and responds
with RST.
Time 6: Client tears down connection.
Note: the data has been accepted by the server (say, a withdrawal of
1000 USD from your account) and delivered to the application level,
but the client doesn't know it. If the client tries again (withdrawing
another 1000 USD from your account), what a mess!
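The posting above shows why the application needs its own duplicate-suppression discipline rather than relying on the TCP acknowledgement. One common remedy is to make each request idempotent via a unique request identifier; the following is a minimal sketch of that idea (all names and the dictionary-based store are invented for illustration, not taken from the book):

```python
# Hypothetical sketch: the client attaches a unique request id, and the
# server remembers ids it has already processed, so a TCP-level resend of
# "withdraw 1000 USD" is not executed twice.

processed = {}  # request_id -> result already delivered to the application

def handle_request(request_id, action, amount, balance):
    """Process a withdrawal at most once per request id."""
    if request_id in processed:          # duplicate delivery: replay the
        return processed[request_id]     # earlier result, do not act again
    if action == "withdraw" and amount <= balance["USD"]:
        balance["USD"] -= amount
        result = ("ok", balance["USD"])
    else:
        result = ("refused", balance["USD"])
    processed[request_id] = result       # the application-level acknowledgement
    return result

balance = {"USD": 5000}
first = handle_request("req-1", "withdraw", 1000, balance)
retry = handle_request("req-1", "withdraw", 1000, balance)  # client resend
assert first == retry and balance["USD"] == 4000  # debited only once
```

A real server would persist the processed-id table, since in the posting's scenario the server crashes and reboots between the two deliveries.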
- p. 224
As described here, TCP provides protection against
reordering of packets in the network. This was thought to be a very
rare occurrence, but currently [2] it is not. However, Vernon
Schryver (vjs@rhyolite.com) writes on 2 July 2001:
TCP does not work very well with out-of-order segments. Only a little
re-ordering can cause a 20% retransmission rate as well as a large
reduction of the congestion window. In practice, routes are fairly
stable during the lifetime of TCP connections or bursts of activity
on long-lived TCP connections. Brand-name routers that support load
sharing (using multiple next-hops to a single destination) go to
lengths to cause any single stream to use a single route.
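Schryver's point can be seen in a toy model of TCP's cumulative acknowledgements: a receiver always ACKs the next in-order sequence number it wants, so segments arriving out of order generate duplicate ACKs, and three duplicate ACKs trigger a "fast retransmit" of a segment that was never actually lost. This is a simplified sketch, not real TCP:

```python
# Toy model of cumulative ACKs: reordering alone (no loss) produces
# enough duplicate ACKs to provoke a spurious fast retransmit.

def acks_for(arrival_order):
    """Return the cumulative ACK emitted for each arriving segment."""
    expected, buffered, acks = 0, set(), []
    for seg in arrival_order:
        buffered.add(seg)
        while expected in buffered:      # advance over in-order data
            expected += 1
        acks.append(expected)            # ACK = next segment wanted
    return acks

in_order = acks_for([0, 1, 2, 3, 4])     # [1, 2, 3, 4, 5]
reordered = acks_for([0, 2, 3, 4, 1])    # [1, 1, 1, 1, 5]
dup_acks = sum(1 for a, b in zip(reordered, reordered[1:]) if a == b)
assert dup_acks >= 3  # three duplicate ACKs: sender retransmits needlessly
```

Nothing was lost in the second run, yet the sender would retransmit segment 1 and shrink its congestion window, which is why routers try hard to keep one stream on one path.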
Barry Margolin (barmar@genuity.net) writes on 25 September 2001:
Although TCP is able to handle packets that
arrive out of order, its performance is much better if this doesn't
happen. Sending packets along different paths is likely to result in out
of order delivery, so modern routers take steps to avoid this. The oldest
mechanism they used for this is destination-based route caching: the first
time a router needs to send to a particular destination address it selects
one of the possible paths and then remembers this for all future packets to
that address. Modern, high-performance routers do flow-based route
caching; they peek into the TCP layer's header to get the port numbers, and
cache a specific path based on the IP addresses and ports.
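Flow-based path selection of the kind Margolin describes can be sketched as a hash over the connection's addresses and ports: every packet of one TCP connection then takes the same next-hop, while different connections still spread across the available paths. The hash choice and names below are illustrative assumptions, not any particular router's algorithm:

```python
# Sketch of flow-based load sharing: a deterministic hash of the
# (source IP, dest IP, source port, dest port) tuple pins each TCP
# connection to one path, avoiding per-connection reordering.

import hashlib

def next_hop(src_ip, dst_ip, src_port, dst_port, paths):
    """Pick a path deterministically from the flow identifier."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return paths[digest % len(paths)]

paths = ["via-routerA", "via-routerB"]
# Every packet of this connection gets the same answer ...
p1 = next_hop("10.0.0.1", "192.0.2.7", 40001, 80, paths)
p2 = next_hop("10.0.0.1", "192.0.2.7", 40001, 80, paths)
assert p1 == p2 and p1 in paths
# ... while other connections may hash to either path.
```

Destination-based caching is the same idea with only the destination address as the key; hashing on the full tuple gives a finer spread of load without splitting any single stream.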
James Davenport
2004-03-09