A CS 255 Paper Review for A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case by Abhay K. Parekh and Robert G. Gallager
In our tech-savvy world today, we see and use a variety of network traffic types (real-time games, video, audio, chat, etc.) in our everyday lives. Because of this, network flexibility is becoming a primary concern for network providers and administrators. But flexibility must not come at the cost of other important network properties such as performance guarantees and fairness: fairness, in the sense that one user's experience should not be degraded just to serve more demanding traffic from another; flexibility, in the sense that the network can give different performance guarantees to different customers depending on their needs and demands.
These concepts, concerns, and solutions are widely tackled in the paper A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case by Abhay K. Parekh and Robert G. Gallager. The use of Generalized Processor Sharing (GPS) together with leaky buckets is given particular focus, especially its ability to provide flexibility as well as worst-case performance guarantees. The paper assumes that rate admission control is performed by the leaky buckets as they shape incoming traffic.
GPS, in its vanilla version, does not transmit packets as discrete entities but instead assumes that traffic is infinitely divisible, like a fluid. To overcome this limitation, the paper develops a variant that operates on whole packets: PGPS (Packet-by-packet Generalized Processor Sharing), which is identical to Weighted Fair Queueing.
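The packet-by-packet idea can be sketched in a few lines: stamp each packet with the (virtual) time at which GPS would finish serving it, then transmit packets in increasing order of that stamp. The sketch below is a simplification, not the paper's exact algorithm: it approximates GPS virtual time by real arrival time, which is only accurate while the server stays busy, and the function name `pgps_order` is my own invention for illustration.

```python
def pgps_order(packets, weights):
    """Order packets by an approximate GPS finish time.

    packets: list of (arrival_time, session, length) tuples
    weights: per-session GPS weight phi_i
    Simplified sketch: virtual time is approximated by real arrival
    time, so this is illustrative rather than an exact GPS emulation.
    """
    finish = {s: 0.0 for s in weights}  # last finish time per session
    tagged = []
    for arrival, session, length in sorted(packets):
        # A packet starts after the previous packet of its own session.
        start = max(finish[session], arrival)
        finish[session] = start + length / weights[session]
        tagged.append((finish[session], arrival, session))
    tagged.sort()  # PGPS serves in increasing finish-time order
    return [s for _, _, s in tagged]

# Two equal-weight sessions: their packets are interleaved by finish time.
pkts = [(0.0, "A", 100), (0.0, "B", 100), (0.0, "A", 100)]
print(pgps_order(pkts, {"A": 1.0, "B": 1.0}))  # ['A', 'B', 'A']
```

Note how session A's second packet is pushed behind session B's first: because both sessions are backlogged, neither can monopolize the link.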
Proofs and lemmas are presented where appropriate to strengthen the concepts being discussed. The paper is also effective at uniting and connecting two separate proofs to arrive at a further conclusion.
A comparison of PGPS with other schemes, such as Zhang's Virtual Clock multiplexing, is tackled next. Virtual Clock multiplexing also provides a guaranteed rate, but what makes it different from PGPS is that it is biased against bursty sessions even when network load is light. A helpful numerical example is given for this concept:
From time 0 to 900, only session 1 uses the network, but from t = 901 onwards a second session, session 2, also starts transmitting. Under Virtual Clock multiplexing, from 901 onwards only session 2 is served; session 1 is served again only after all of session 2's backlogged packets have been transmitted. Session 1 is effectively "punished" for having had sole use of the network during 0–900, even though this was not at the expense of session 2 or any other session. In contrast, PGPS emulates GPS's weighted, round-robin-like service among all backlogged sessions, ensuring that everyone gets their network share within a bounded amount of time.
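The source of the bias can be seen in the Virtual Clock tag rule itself, VC_i = max(now, VC_i) + 1/rate_i: a session that sends faster than its reserved rate drives its clock far ahead of real time, so a newcomer's fresh tags win until it catches up. The sketch below is my own illustration of this effect under assumed numbers (link speed 1 packet per time unit, reserved rate 0.5 for both sessions), not a reproduction of the paper's exact figures.

```python
def vc_tag(vc, now, rate):
    """Virtual Clock stamp: VC = max(now, VC) + 1/rate."""
    return max(now, vc) + 1.0 / rate

# Session 1 sends back-to-back at link speed (1 pkt/unit) from t = 0..899,
# twice its reserved rate of 0.5, so its virtual clock races ahead of real time.
vc1 = 0.0
for t in range(900):
    vc1 = vc_tag(vc1, t, 0.5)

# At t = 901 both sessions have a packet queued.
vc2 = vc_tag(0.0, 901, 0.5)   # session 2's first tag: 903
vc1 = vc_tag(vc1, 901, 0.5)   # session 1's next tag: 1802
print(vc1, vc2)  # session 2's far smaller tag wins until it catches up
```

Since packets are served in increasing tag order, session 1 is locked out until session 2's clock climbs from roughly 903 up to 1802, exactly the "punishment" described above.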
Leaky buckets are discussed at length next, including how they bound the length of system busy periods. Each session is provided with tokens that it spends in order to transmit; as long as tokens are available, the session is free to send traffic into the network.
Overall, the paper provides a very comprehensive and technical discussion of generalized processor sharing (and its packet-by-packet counterpart) as well as of leaky buckets. The proofs, lemmas, graphs, and illustrations are very helpful in proving and elaborating on the major points and concepts. The paper could have capped off its technical depth with a brief conclusion summarizing its main points, but this is only a minor issue, as the introduction already gives an overview of what the paper is all about. 🙂