A Paper Review for “A Protocol for Packet Network Intercommunication” by Vinton G. Cerf and Robert E. Kahn.
At the time the paper was written, packet-switching networks were being widely studied and researched for the purpose of sharing computer resources within a single network. To communicate, two endpoints would first agree on a common set of protocols. However, these existing protocols only addressed communication within the same network.
The paper describes and discusses a protocol design that would allow communication between computers in two different networks. Protocols exist in typical packet-switched networks but differ from network to network. Different networks may also have different addressing schemes, maximum accepted packet sizes, timings, and fault detection and restoration mechanisms. Thus, to allow internetwork communication, these differences should be addressed and standardized (i.e., a uniform addressing format, the ability to break data into smaller chunks, and end-to-end coordination and communication).
First, the term gateway is introduced as the interface between networks, the component responsible for routing data to its destination based on addresses. To achieve this, addresses, which are carried in internetwork packet headers, should come in a standard format and should uniquely identify a host across networks.
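The idea of a standard address split into a network part and a host part can be sketched as follows. This is an illustrative toy, not the paper's exact format: the field widths and the `route` helper are assumptions made for the example.

```python
# Hypothetical sketch of a uniform internetwork address: a network
# identifier plus a host identifier, so a gateway can route on the
# network part alone. The 16-bit host field is illustrative only.

def split_address(addr: int, host_bits: int = 16) -> tuple[int, int]:
    """Split a flat internetwork address into (network, host) parts."""
    network = addr >> host_bits
    host = addr & ((1 << host_bits) - 1)
    return network, host

def route(addr: int, gateway_table: dict[int, str], host_bits: int = 16) -> str:
    """A gateway inspects only the network part to pick the next hop."""
    network, _ = split_address(addr, host_bits)
    return gateway_table[network]
```

The point of the split is that a gateway never needs to understand another network's internal addressing; it forwards on the network identifier alone.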
Since data is not universally restricted to be small, various data sizes should be handled by breaking data into smaller fragments. This fragmentation should still allow the data to be reassembled at the destination.
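A minimal sketch of the fragmentation step, under the assumption that each fragment carries its offset so reassembly remains possible (the function name and representation are illustrative, not the paper's):

```python
# Illustrative fragmentation: split a message into chunks no larger than
# the smallest maximum packet size along the path, tagging each chunk
# with its byte offset so the receiver can put the message back together.

def fragment(data: bytes, max_size: int) -> list[tuple[int, bytes]]:
    """Return (offset, chunk) pairs covering all of `data`."""
    return [(i, data[i:i + max_size]) for i in range(0, len(data), max_size)]
```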
With these differences, it may be natural to think that gateways should be responsible for compensating for them, but the authors suggest that only minimal work should be done at the interface, because the complexity of these operations could cause timing delays and even deadlocks.
With the gateway relieved of this responsibility, it is also noted that interconnection should not affect the internal workings of a network, and that, as much as possible, the need for central external administration and control among networks should be avoided.
To achieve this, the paper proposes a transmission control program (TCP) that transmits and receives messages on behalf of a host's many processes. It is also responsible for breaking up outgoing messages into smaller segments and reassembling incoming packets for delivery to their destination processes (determined through their process headers).
To reassemble fragmented pieces of data, it is important to have a sense of their original sequence. This is done through sequence numbers, which are unique to each source-destination port pair and allow a fragment to be placed correctly even before the whole message is complete.
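The reassembly idea can be sketched as a buffer indexed by sequence number, here simplified to byte offsets. This is a hedged illustration of the mechanism described, not the paper's implementation:

```python
# Sequence-number-based reassembly: fragments may arrive out of order,
# but each carries a sequence number (here, a byte offset), so the
# receiver can place it correctly before the whole message is complete.

class Reassembler:
    def __init__(self, total_length: int):
        self.buffer = bytearray(total_length)
        self.received: set[int] = set()

    def add(self, seq: int, chunk: bytes) -> None:
        """Place a fragment at its position, even if earlier ones are missing."""
        self.buffer[seq:seq + len(chunk)] = chunk
        self.received.update(range(seq, seq + len(chunk)))

    def complete(self) -> bool:
        """True once every byte position has been filled."""
        return len(self.received) == len(self.buffer)
```

Note how a late-arriving earlier fragment slots in without disturbing fragments already placed after it.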
Acknowledging that no transmission can be 100% reliable, the authors also proposed retransmission for recovering lost packets and, as a consequence, the need for duplicate detection. For this purpose, both sender and receiver are equipped with a window system that keeps them synchronized: each side knows which packets to send and which to expect. Acknowledgments are sent by the receiver upon receipt of a packet, and in the absence of one after a certain timeout, the sender resends the packet.
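The sender's side of this acknowledgment-and-timeout scheme can be sketched as below. The class name, timeout value, and bookkeeping are assumptions for illustration; duplicate detection on the receiver falls out of the sequence numbers already discussed.

```python
import time

# Illustrative sender bookkeeping for retransmission: keep every packet
# until it is acknowledged, and flag any packet whose acknowledgment has
# not arrived within the timeout so it can be sent again.

class Sender:
    def __init__(self, timeout: float = 1.0):
        self.timeout = timeout
        self.unacked: dict[int, tuple[bytes, float]] = {}  # seq -> (packet, sent_at)

    def send(self, seq: int, packet: bytes) -> None:
        self.unacked[seq] = (packet, time.monotonic())

    def acknowledge(self, seq: int) -> None:
        self.unacked.pop(seq, None)  # drop it; no retransmission needed

    def due_for_retransmission(self) -> list[int]:
        now = time.monotonic()
        return [s for s, (_, t) in self.unacked.items() if now - t > self.timeout]
```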
Queueing, discarding, and fault detection were also discussed along with several solution mechanisms such as checksums and flags.
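Since the checksum mechanism is only mentioned in passing, the sketch below shows a 16-bit ones'-complement-style sum, a common choice in later internet protocols, as one plausible illustration rather than the exact scheme the paper proposes:

```python
# Illustrative 16-bit ones'-complement checksum: sum the data in 16-bit
# words, fold carry bits back in, and complement the result. A single
# corrupted byte changes the checksum, letting the receiver detect faults.

def checksum16(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # pad to an even number of bytes
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF
```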
With the concept being new at the time of writing and given the descriptive nature of the paper, the illustrations and graphs were very helpful for visualizing the protocol. For example, Figure 3 makes it easy to picture gateways as interfaces between networks, the points through which networks connect to reach one another. This is especially helpful for readers of that time, when computers were usually connected only within a single network and the idea of an interconnected network was still fairly new.
In describing the different aspects of the protocol (i.e., addressing, fragmentation, responsibility ownership, etc.), the scenario and problems are presented first, followed by the initial solution or natural inclination, which is then weighed with its pros and cons. A suggested solution, the authors' own inclination, is then presented and thoroughly discussed. This manner of writing helps the reader appreciate the solution, since an omniscient view of the situation is given, and it involves the reader in arriving at a shared conclusion. The explanations of the authors' positions are also suggestive rather than imposing, yet effectively persuasive.
The paper is a truly comprehensive description of the protocol, and multiple aspects were explored. It is also worth noting that the authors are aware of the limitations of the protocol and of points for future improvement. Edge cases, fault detection, and security possibilities were not overlooked and are actually discussed, even though they were not among the main priorities of the system. The paper also does not come off as intimidating, even though it described new technology at the time.