A Paper Review for “The Design Philosophy of the DARPA Internet Protocols” by David D. Clark.
The paper comprehensively examines the motivation, reasoning, and set of priorities upon which the Internet protocol suite was built. An understanding of these guiding principles provides context and direction to developers of future extensions.
The most fundamental goal of the Internet's design was to develop an effective technique for multiplexed utilization of existing interconnected networks. This goal itself grew out of the original aim of connecting the ARPANET with the ARPA packet radio network, so that packet radio users could gain access to larger machines.
An alternative to this fundamental goal would have been to design a unified network system, removing the need to incorporate a variety of networks. Even with that solution, however, incorporating the networks that already existed would still have been necessary. A unified system might also have erased the administrative boundaries of control that separate, interconnected networks preserve.
To elaborate on this fundamental goal, the paper discusses key sub-goals that define what the main goal expects, particularly what "effectiveness" means in this context. These sub-goals include survivability in the face of lost connectivity, the ability to support different kinds of services and a variety of networks, distributed management, cost-effectiveness, and accountability.
It is worth noting that survivability and support for multiple services and network types sit at the top of the list, while accountability and cost-effectiveness appear near the bottom. The paper makes a good point of reminding the reader that this ordering reflects the original military context: the priorities are listed in the order in which the initial purpose needed them most.
Survivability was of primary importance because military connectivity may be intermittent and network errors frequent. Communication must resume from the state at which it was interrupted once connectivity returns (i.e., handled by the lower layers so that it appears seamless from the application's perspective). To this end the paper introduces the fate-sharing concept, which places the responsibility of tracking connection state on the end hosts rather than in the network, so that this state can be lost only when the communicating entity itself is destroyed.
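The fate-sharing idea above can be illustrated with a minimal sketch (a hypothetical class, not from the paper): the end host alone keeps the state needed to resume a conversation, so routers in between can fail or be replaced without losing the connection.

```python
# Sketch (hypothetical, for illustration): under fate-sharing, each end host
# keeps its own connection state, so the network's gateways can stay stateless.
# The state is lost only if the host itself fails -- it shares the host's fate.
class EndHostConnection:
    def __init__(self):
        self.next_seq = 0      # next sequence number to send
        self.acked_up_to = 0   # highest byte acknowledged by the peer

    def record_send(self, nbytes: int) -> int:
        """Reserve sequence space for nbytes; return the starting sequence number."""
        seq = self.next_seq
        self.next_seq += nbytes
        return seq

    def record_ack(self, ack: int) -> None:
        # Acknowledgements are cumulative; this state survives path changes
        # or intermediate-gateway failures, since none of it lives in the network.
        self.acked_up_to = max(self.acked_up_to, ack)
```

Because no router holds any of this state, rerouting around a failed gateway is invisible to the connection: transmission simply continues from `acked_up_to`.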
Support for different types of service came next, and was achieved through the flexibility of the datagram as a building block upon which additional services can be layered (e.g., made reliable, or kept fast even if unreliable) depending on the need. The paper also acknowledges that most networks are built around a given set of assumptions and a given service, so their behaviour is uniform throughout and features cannot simply be switched off.
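This layering is visible in today's socket API: the same datagram substrate underlies both a reliable stream service and a raw datagram service, chosen by the endpoint according to its needs. A small sketch (the helper name and the two service labels are assumptions for illustration):

```python
import socket

# Sketch: the datagram is the common building block; different services are
# layered on top by the endpoints, not built into the network itself.
def open_transport(kind: str) -> socket.socket:
    if kind == "reliable":
        # e.g. file transfer: TCP adds ordering, retransmission, flow control
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if kind == "low-latency":
        # e.g. digitized speech: raw datagrams, no retransmission delay
        return socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    raise ValueError(f"unknown service kind: {kind}")
```

An application needing reliability pays for it; one needing timeliness skips it, which is exactly the flexibility the paper credits to the datagram.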
Next came flexibility across a variety of networks, provided by making only minimal assumptions: that a network can carry reasonably sized packets, deliver them in a reasonable amount of time, and offer some addressing scheme. Features beyond these assumptions (low delay, high reliability, etc.) are better implemented at the transport layer in the end hosts rather than inside each network, to avoid re-implementing the feature for every network a host attaches to.
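A minimal sketch of that principle (an assumed stop-and-wait loop, not code from the paper): reliability is added once, at the transport layer of the end host, on top of whatever unreliable datagram delivery each underlying network provides.

```python
import socket

# Sketch (illustrative only): stop-and-wait retransmission implemented at the
# transport layer of the end host, on top of an unreliable datagram service.
# The underlying networks need only deliver packets "reasonably often";
# reliability is not re-implemented inside each of them.
def send_reliably(sock: socket.socket, data: bytes, addr, retries: int = 5,
                  timeout: float = 1.0) -> bool:
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(data, addr)
        try:
            ack, _ = sock.recvfrom(16)
            if ack == b"ACK":
                return True  # peer confirmed receipt
        except socket.timeout:
            continue  # datagram or ACK lost somewhere: retransmit end-to-end
    return False
```

Note how nothing here depends on which networks sit between the two hosts; the end-to-end retransmission loop works the same over any of them.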
It was also noted that the top sub-goals have largely been achieved, while those at the bottom of the list have not been fully realized. Distributed management, for instance, is only partly in place, as different networks and gateways are already managed by different control centers.
Cost-effectiveness, on the other hand, has not yet been realized: 40-byte packet headers impose enormous overhead on 1-byte messages. End-to-end retransmission adds further cost, especially when retransmitted packets re-traverse the same links they had already crossed successfully before the loss occurred.
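The header-overhead point is easy to quantify: with the minimal 40-byte TCP/IP header (20 bytes IP plus 20 bytes TCP) in front of a 1-byte payload, over 97% of the bytes on the wire are overhead. A quick calculation:

```python
# Quantifying the header-overhead remark above.
HEADER = 40  # bytes: minimal IPv4 header (20) + minimal TCP header (20)

def overhead_fraction(payload_bytes: int) -> float:
    """Fraction of each packet consumed by headers rather than payload."""
    return HEADER / (HEADER + payload_bytes)

# overhead_fraction(1)  -> 40/41, about 0.976: ~97.6% overhead for a 1-byte message
# overhead_fraction(1460) -> 40/1500, under 3%: amortized well for full-size packets
```

The same arithmetic explains why the overhead concern centers on small, frequent messages (such as single keystrokes in remote login) rather than bulk transfers.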
One of the paper’s resounding opinions on what could be improved, and one reason the lower-priority goals have yet to be achieved, is the lack of tools at the time. Tools for distributed resource management and for network evaluation would help answer the guiding questions of building an effective network, such as the expected bandwidth and the redundancy needed to provide alternative paths in case one breaks.
Tools such as protocol verifiers exist to check the logical correctness of a given protocol but, as the paper argues, this is not enough: a network can be logically correct yet suffer from poor performance. The problem is aggravated by the fact that network performance cannot be formalized (due both to the lack of tools and to the architecture’s inclination toward variability and flexibility), yet it must appear in contracts so that contractors are required to meet it. Since the system itself cannot guarantee performance, it falls to the end hosts to ensure it and to the procurer to specify it in contracts.
Implementation details and design decisions (e.g., packet-based versus byte-based flow regulation) were also thoroughly discussed, together with their implications at the time and potential future directions.
The paper was effective in showing that there were definite priorities and motivations behind every design decision made in building the Internet architecture, and that these are the reasons it was moulded the way it was. Even if only the top priorities were fully achieved and the others remain works in progress, it is remarkable that these priorities were recognized and identified as important from the start. For each goal, the paper first gives the context of its importance, then how it was pursued, what was done, the problems encountered during implementation, the solutions applied, and possible extensions and improvements. The discussion of each solution also includes an expectation-versus-reality perspective as well as a proposal-versus-implementation comparison. This style draws the reader into the problem-solving and solution-discovery process.
It is also evident that, at the time of writing, even though the Internet was already an impressive working structure, some things were visibly lacking, such as tools for assessment and design guidance. It is worth noting, however, that the authors were aware of these needs; they were not entirely blind spots.
Common assumptions about the Internet are also challenged (e.g., datagrams were not provided because they are exactly what applications need, but rather as a building block on which additional services could be layered as needed), and this succeeds in drawing the reader into a mental debate.
It is also worth acknowledging that the paper maintains a neutral, unbiased tone. Advantages and disadvantages are discussed fairly, as are suggestions and recommendations. The paper is likewise aware of the scope and limitations within which its subject operates (e.g., TCP not being suitable for XNET or for digitized speech).
The paper could have been more comprehensive if it had also discussed the state of network security at the time of writing. Security and its implications for networking were briefly discussed in the earlier paper of Cerf and Kahn, and it is probably one of the aspects readers would also look for in this assessment.