Principles of Communications, Digital Communication II Past Paper
Readings,
- Textbook: An Engineering Approach to Computer Networking: ATM Networks, the Internet, and the Telephone Network, by S. Keshav, 1997, Addison-Wesley.
- Useful chapters: Ch 5 Layering, Ch 6 System Design, Ch 9 Scheduling, Ch 11 Routing, Ch 13 Flow Control, Ch 14 Traffic Management [one-to-one mapping with the lecture slides].
- Fibbing paper, Optimization paper.
- Data center, QJump paper.
Routing
Packet size
- y2010p7q9 (a)
- min: so that a collision can be detected, i.e. a frame sent from one extreme station on a maximum-length cable must still be in transmission when the colliding signal returns, so the two transmissions overlap by enough bits to trigger the CD part of CSMA/CD.
- max: to prevent capture (one station hogging the medium) and to keep the delay before other stations eventually gain access from becoming too high.
TCP
DV vs LS
General
Cell switching (ATM) vs. packet switching (IP)
Inter-domain
Intra-domain
Fibbing
Multi-protocol label switching
Segment
Multicast
Mobile and telephone
Mobile
Telephone circuits: dynamic alternative routing
Flow / congestion control
Scheduling
The scheduler allocates delay, bandwidth and loss rate among the packets in the output queues.
work-conserving: whenever there is a packet to send, the scheduler will send it.
- the sum of the mean queueing delays $q_i$ [s], weighted by the link utilizations $\rho_i$, is independent of the (work-conserving) scheduling algorithm (checked numerically in the sketch below),
$$\sum_{i=1}^{n} \rho_i q_i = C,$$
- where the mean link utilization $\rho_i = \lambda_i / p_i$ is the ratio between the mean packet arrival rate $\lambda_i$ and the mean packet service rate $p_i$ [packets/s].
- Alternatively, $\mu_i = 1/p_i$ is the mean per-packet service time [s/packet].
non-work-conserving: the scheduler may idle even if there are packets to send.
- allows smoothing of packet flows, i.e. less jitter; downstream traffic is more predictable since the output flow is shaped, i.e. less bursty; less buffer space required downstream.
- the sum $\sum_i \rho_i q_i > C$, causing higher end-to-end delay; complex time synchronization required.
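A minimal sketch of the conservation law for work-conserving schedulers (a single server with two Poisson traffic classes and exponential service; all parameters are made up): it replays the same arrival trace under global FIFO and under non-preemptive strict priority, and prints $\sum_i \rho_i q_i$ for each, which comes out roughly the same.

```python
import random

def simulate(policy, lam, srv, n_per_class=100_000, seed=1):
    """Single work-conserving server, two classes; returns per-class queueing delays."""
    rng = random.Random(seed)
    arrivals = []                                   # (arrival_time, class, service_time)
    for cls in (0, 1):
        t = 0.0
        for _ in range(n_per_class):
            t += rng.expovariate(lam[cls])          # Poisson arrivals, rate lambda_i
            arrivals.append((t, cls, rng.expovariate(srv[cls])))  # service rate p_i
    arrivals.sort()

    pending = [[], []]                              # per-class FIFO queues of (arrival, service)
    waits = [[], []]                                # queueing delay q per served packet
    t_free, i, n = 0.0, 0, len(arrivals)
    while i < n or pending[0] or pending[1]:
        if not pending[0] and not pending[1]:
            t_free = max(t_free, arrivals[i][0])    # nothing queued: idle until the next arrival
        while i < n and arrivals[i][0] <= t_free:   # enqueue everything that has arrived by now
            a_t, cls, s = arrivals[i]
            pending[cls].append((a_t, s))
            i += 1
        if policy == "priority":
            cls = 0 if pending[0] else 1            # class 0 gets strict, non-preemptive priority
        else:                                       # global FIFO across both classes
            cls = min((c for c in (0, 1) if pending[c]), key=lambda c: pending[c][0][0])
        a_t, s = pending[cls].pop(0)
        waits[cls].append(t_free - a_t)             # time spent waiting before service
        t_free += s
    return waits

lam, srv = (0.3, 0.3), (1.0, 1.0)                   # rho_i = lam_i / p_i = 0.3 per class
for policy in ("fifo", "priority"):
    w = simulate(policy, lam, srv)
    total = sum((lam[c] / srv[c]) * (sum(w[c]) / len(w[c])) for c in (0, 1))
    print(policy, round(total, 3))                  # roughly the same value for both disciplines
```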
Past papers
Fairness
max-min fairness
Given the list of demands sorted in increasing order, each flow i is assigned a share $m_i$ of the resource, which neither exceeds its demand $x_i$ nor exceeds an equal share of the remaining capacity (split among the flows not yet allocated),
$$m_0 = 0, \qquad m_i = \min\!\left(x_i,\ \frac{C - \sum_{j=0}^{i-1} m_j}{n - (i-1)}\right).$$
It gives the small-demand flows what they want, and then evenly distributes the remaining capacity among the larger-demand flows; the process is repeated until all flows have been considered (see the sketch after this list).
- after admission control, the minimum bandwidth may provide the inelastic traffic with a guaranteed service.
- it regulates the elastic traffic to a fair share of the remaining capacity.
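A minimal sketch of this progressive-filling computation for a single shared link (function name and demands are illustrative):

```python
def max_min_fair(demands, capacity):
    """Allocate a single resource max-min fairly: visit flows in order of increasing demand,
    giving each the smaller of its demand and an equal share of what is left."""
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    alloc = [0.0] * len(demands)
    remaining = capacity
    for pos, i in enumerate(order):
        share = remaining / (len(demands) - pos)    # equal share of the remaining capacity
        alloc[i] = min(demands[i], share)
        remaining -= alloc[i]
    return alloc

print(max_min_fair([2, 8, 10], 15))                 # -> [2, 6.5, 6.5]
```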
Past papers
vs. proportional fairness
fairness in multipath routing
4G
Protection/Isolation
misbehavior by one connection should not affect the performance received by other connections (or flows), where misbehavior includes sending packets at a rate faster than its fair share.
relationship: fair → protection, i.e., a fair scheduler automatically provides protection. However, the converse need not be true.
- because the fairness limits a misbehaving connection to its fair share. On the other hand, if connections are policed at the entrance to the network (conforming to a predeclared traffic pattern) they are protected from each other, but their resource shares may not be fair.
Scheduling algorithms
- FIFO O(1)
  - only per-packet handling, no per-flow state (flows identified by src/dst IP, ports, protocol).
  - ✘ no fairness and no protection.
  - ✓ easy to implement, and no extra state stored in RAM.
- Priority queueing
  - ✓ guaranteed service for the high-priority class.
  - ✘ starves low-priority flows, unless proper admission control and policing are used.
- Generalized Processor Sharing (GPS)
  - "bitwise round-robin" benchmark (Flow A: g(A) = 1 bit → Flow B: 1 bit → ... → Flow A: 1 bit → ...).
  - For N connections with positive real weights $\phi_i$, where $S(i, t_1, t_2)$ is the amount of service received by connection i in the interval $[t_1, t_2]$: for any connection i with data in its queue and for any connection j,
  $$\frac{S(i, t_1, t_2)}{S(j, t_1, t_2)} \ge \frac{\phi_i}{\phi_j}.$$
  - ✓ fair: each flow shares the remaining bandwidth in proportion to its weight.
  - ✘ not implementable, since packets are not infinitesimally small, but used as a reference.
  - Relative Fairness Bound: $\mathrm{RFB} = \left|\frac{S(i,t_1,t_2)}{g(i)} - \frac{S(j,t_1,t_2)}{g(j)}\right|$; Absolute Fairness Bound: $\mathrm{AFB} = \left|\frac{S(i,t_1,t_2)}{g(i)} - \frac{\mathrm{GPS}(i,t_1,t_2)}{g(i)}\right|$,
  - where $g(i) = \min_k g(i,k)$ and, with service rate $r(k)$ at switch k,
  $$g(i,k) = \frac{\omega(i,k)}{\sum_j \omega(j,k)}\, r(k).$$
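Since GPS is a fluid reference rather than an implementable scheduler, a small sketch (made-up weights, flows and link rate) is enough to show what it prescribes: each backlogged flow is served at rate $\phi_i / \sum_{j \in \text{active}} \phi_j$ times the link rate.

```python
def gps_rates(weights, active, link_rate):
    """Instantaneous fluid service rate of each flow under GPS; idle flows get 0."""
    total = sum(weights[i] for i in active)
    return {i: (weights[i] / total) * link_rate if i in active else 0.0 for i in weights}

# three flows with weights 1:2:1 on a 10 Mbps link; flow "C" is currently idle
print(gps_rates({"A": 1, "B": 2, "C": 1}, active={"A", "B"}, link_rate=10e6))
# -> A gets 1/3 and B gets 2/3 of the link while C is idle
```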
- Weighted Round Robin (WRR) O(1)
  - use case: high-speed ATM networks, with fixed-size packets (cells) and short round times.
  - normalized weight $\hat{\omega} = \omega / \mu$, where $\mu$ is the mean packet size of the flow.
  - ✘ must know the mean packet size in advance.
  - ✘ huge per-flow state stored in RAM; new/short-lived flows need a default average.
  - ☐ fair only over time scales longer than a round time.
    - At a shorter time scale, some connections may get more service than others (100%, ..., 0%, 0%).
    - If a connection has a small weight, or the number of connections is large, this may lead to long periods of unfairness.
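A small sketch of the weight-normalization step (the weights and mean packet sizes are illustrative): divide each flow's weight by its mean packet size, then scale to the smallest integers to get the number of packets each flow sends per round.

```python
from fractions import Fraction
from functools import reduce
from math import gcd, lcm

def wrr_packets_per_round(weights, mean_pkt_sizes):
    """Normalized weight = weight / mean packet size, scaled to the smallest whole numbers."""
    norm = [Fraction(w) / m for w, m in zip(weights, mean_pkt_sizes)]
    scale = lcm(*(f.denominator for f in norm))      # common factor that makes them integral
    counts = [int(f * scale) for f in norm]
    g = reduce(gcd, counts)
    return [c // g for c in counts]

# weights 0.5 : 0.75 : 1.0 and mean packet sizes 50 : 500 : 1500 bytes -> 60 : 9 : 4 packets/round
print(wrr_packets_per_round([Fraction(1, 2), Fraction(3, 4), Fraction(1)], [50, 500, 1500]))
```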
- Deficit Round Robin (DRR) O(1)
  - associate each flow with a deficit counter (dc), initialized to 0.
  - for a non-empty queue, the packet at the head is
    - served, with dc = dc + quantum - pkt_size, if pkt_size ≤ dc + quantum;
    - kept in the queue, with dc += quantum, otherwise [the flow builds up credit].
  - quantum choice: at least the largest packet size of the flow (to remain work-conserving).
  - reset dc to 0 when the queue is empty.
  - ✓ no need to know the mean packet size in advance, hence no problem for new/short-lived flows.
  - ✘ huge state stored in RAM: one dc per flow.
  - ☐ fair only over time scales longer than a frame time.
  - use case: the frame time, and hence the time scale of unfairness, is short when packet sizes and the number of flows are small.
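A minimal sketch of the DRR loop described above (queue contents and quantum are illustrative; the quantum is added at the start of each visit and packets are drained while they fit, which is equivalent to the pkt_size ≤ dc + quantum test):

```python
from collections import deque

def drr(queues, quantum, rounds):
    """Deficit round robin over per-flow queues of packet sizes (bytes)."""
    deficit = {f: 0 for f in queues}            # dc, initialized to 0
    sent = []
    for _ in range(rounds):
        for f, q in queues.items():
            if not q:
                deficit[f] = 0                  # reset dc when the queue is empty
                continue
            deficit[f] += quantum               # one quantum of credit per visit
            while q and q[0] <= deficit[f]:     # serve head packets that fit the credit
                pkt = q.popleft()
                deficit[f] -= pkt
                sent.append((f, pkt))
    return sent

queues = {"A": deque([800, 800]), "B": deque([1000]), "C": deque([600, 600])}
print(drr(queues, quantum=1000, rounds=3))
# A and C each carry deficit into round 2 to send their second packet
```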
- Weighted Fair Queuing (WFQ) O(log n)
  - the queued packet with the smallest finish number F(i, k, t), derived from the GPS reference, is served first,
  - where flow i has weight $\omega_i$ and its k-th packet, arriving at time t, has size $P(i, k, t)$,
  - the length of a round, that is, the time taken to serve one bit from each active connection, is proportional to the number of active connections.
  - define the round number R(t) to be the number of rounds of service that GPS has completed by time t,
    - it increases at a rate inversely proportional to the number of active connections.
  $$F(i,k,t) = \begin{cases} R(t) + \dfrac{P(i,k,t)}{\omega_i}, & \text{if connection } i \text{ is inactive,} \\[6pt] F(i,k-1,t) + \dfrac{P(i,k,t)}{\omega_i}, & \text{if connection } i \text{ is active,} \end{cases} \;=\; \max\{F(i,k-1,t),\, R(t)\} + \frac{P(i,k,t)}{\omega_i}.$$
  - ✓ no need to know the mean packet size in advance.
  - ✘ storage for the finish numbers F(i, k, t) of each flow.
  - ✘ processing time to find the smallest finish number.
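A small sketch of the finish-number bookkeeping (names are illustrative, and the round number R(t) is assumed to come from the GPS emulation, which is the hard part in practice): each arriving packet gets $F = \max(F_{\text{prev}}, R(t)) + P/\omega$, and the packet with the smallest finish number is transmitted next.

```python
import heapq

class WFQ:
    """Finish-number bookkeeping only; computing R(t) from the GPS emulation is not shown."""
    def __init__(self, weights):
        self.w = weights
        self.last_finish = {f: 0.0 for f in weights}    # F(i, k-1) per flow
        self.heap = []                                  # (finish_number, flow, size)

    def arrive(self, flow, size, round_number):
        f = max(self.last_finish[flow], round_number) + size / self.w[flow]
        self.last_finish[flow] = f
        heapq.heappush(self.heap, (f, flow, size))

    def transmit(self):
        return heapq.heappop(self.heap) if self.heap else None

wfq = WFQ({"A": 1.0, "B": 2.0})
wfq.arrive("A", 1000, round_number=0.0)   # F = 0 + 1000/1 = 1000
wfq.arrive("B", 1000, round_number=0.0)   # F = 0 + 1000/2 = 500, so B is served first
print(wfq.transmit(), wfq.transmit())
```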
| algo. | work-conserving? | max-min fair? | protection? | w/o mean pkt size? |
| --- | --- | --- | --- | --- |
| FIFO/FCFS | ✓ | ✘ | ✘ | ✓ |
| Priority | ✓ | ✘ | ✘ (starves low priority) | ✓ |
| GPS | ✓ | ✓ | ✓ | ✓ |
| WRR | ✓ | long time scale ✓ | long time scale ✓ | ✘ |
| Deficit RR | ✓ | medium and long time scales ✓ | ✓ | ✓ |
| WFQ | ✓ | ✓ | ✓ | ✓ |
Past papers
Queue management
drop from tail/head (T/H), smart/random drop, flush, Active Queue Management (e.g. Random Early Detection), ECN.
- the choice is about prioritizing newly arriving vs. existing packets in the queue.
- a packet drop is an implicit congestion signal to TCP, but not to UDP.
- ECN marks packets to signal the TCP sender to slow down, without losing packets.
- For UDP, extra hard dampening (policing) is needed to prevent congestion.
- global synchronization (an underutilization issue):
  - all flows send simultaneously and experience packet loss,
  - which is bad for TCP: all flows back off together and the delay curve is no longer smooth,
  - and when all flows ramp up again at the same time, the traffic becomes bursty.
| policy | fair for? | pros | cons | global sync.? |
| --- | --- | --- | --- | --- |
| flush | ✘ | aggressively clears the queue | severely penalizes all flows | ✓ |
| tail-drop | steady & small | easy to implement | drops burst arrivals | ✓ |
| head-drop | burst, real-time | prevents hogging | drops long-latency packets | ✘ |
| smart-drop | ✓ | per-flow, adaptive | complex classifier | ✘ |
| random | long time scale ✓ | treats flows evenly | non-deterministic | ✘ |
| RED | ✓ | adaptive on average queue length | complex to tune | ✘ |
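A minimal sketch of the RED row above (thresholds, queue weight and maximum drop probability are illustrative, not recommended settings, and the count-based correction of the original algorithm is omitted): an EWMA of the queue length drives a drop (or ECN-mark) probability that ramps up linearly between two thresholds.

```python
import random

class RED:
    """Random Early Detection on the average (EWMA) queue length; parameters are made up."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th, self.max_p, self.weight = min_th, max_th, max_p, weight
        self.avg = 0.0

    def on_arrival(self, queue_len, rng=random):
        self.avg += self.weight * (queue_len - self.avg)     # EWMA of instantaneous length
        if self.avg < self.min_th:
            return "enqueue"
        if self.avg >= self.max_th:
            return "drop"                                    # or ECN-mark instead of dropping
        # drop probability grows linearly from 0 to max_p between the two thresholds
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return "drop" if rng.random() < p else "enqueue"

red = RED(weight=0.2)                                        # large weight just for the demo
print([red.on_arrival(q) for q in (2, 8, 20, 20, 20, 20, 20, 20)])
# mostly enqueue at first, probabilistic in the middle, forced drop once the average passes max_th
```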
Past papers
Data centers
QJump: switch priority queueing combined with traffic regulation (rate limiting) at the end systems, exploiting the data center's regular topology, known traffic matrix (src-dst), and controlled source behaviour.
Optimization
In general, we have two coupled problems:
- Traffic engineering (routing) problem: given fixed rates $\{x_i\}$, minimize network congestion over the routing variables $R_{li}$, i.e. the fraction of flow i placed on link l,
$$\min_{R} \sum_{l \in L} D(u_l), \quad \text{where } u_l = \frac{\sum_i R_{li} x_i}{c_l}. \quad \text{(routing)}$$
- Congestion control (system) problem: given a fixed routing R, maximize the utility of the system over the adaptive source rates $\{x_s\}$,
$$\max_{x_s} \sum_{s \in S} U_s(x_s), \quad \text{s.t. } \sum_{s:\, l \in L(s)} x_s \le C_l, \ \forall l \in L. \quad \text{(system)}$$
Routing
- W: the set of src-dst (sd) pairs.
- $r_w$: the fixed rate of sd pair w [bps].
- $P_w$: the set of paths between the sd pair w.
- $x_p$: the packet rate on path p [bps].
- $C_l$: the capacity of link l [bps].
- $F_l = \sum_{p:\, l \in p} x_p$: the total flow on link l.
```
        / p_1 with rate x_1
r_w --->  p_2 with rate x_2
        \ p_3 with rate x_3
```
Given (routing), with links l = ij, it is equivalent to the following,
$$\min_{x} \sum_{\text{all links } l} D_l\!\Big(\sum_{p:\, l \in p} x_p\Big), \quad \text{s.t. } \sum_{p \in P_w} x_p = r_w, \ \forall w \in W; \quad x_p \ge 0,$$
where $D_l$ is the congestion function of link l, a non-decreasing function of the total flow $F_l$ through it,
$$D_l(F_l) = \frac{F_l}{C_l - F_l},$$
- as $F_l$ approaches the capacity $C_l$ of the link, the congestion function $D_l$ approaches infinity.
At the optimum, for each sd pair w with $k_w$ used paths, the marginal costs $\frac{\partial D(x)}{\partial x_{p_i}}$ are equal for all $i \in \{1, \dots, k_w\}$ (see the sketch below).
- Because if one path had a lower marginal cost, you could reduce the total cost by shifting some traffic to that path.
- The only time no improvement is possible is when all used paths have the same marginal cost.
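A small numerical sketch of this equal-marginal-cost condition (the capacities, demand, and step size are made up): with $D_l(F_l) = F_l/(C_l - F_l)$, the marginal cost of a path is $\sum_l C_l/(C_l - F_l)^2$ over its links, and repeatedly shifting a little traffic from the costliest used path to the cheapest one drives the marginal costs together.

```python
def marginal_cost(path_links, F, C):
    """d/dx of sum_l D_l(F_l) along a path, with D_l(F_l) = F_l / (C_l - F_l)."""
    return sum(C[l] / (C[l] - F[l]) ** 2 for l in path_links)

# one sd pair with demand r_w = 8 split over two disjoint single-link paths (made-up capacities)
C = {"l1": 10.0, "l2": 6.0}
paths = [["l1"], ["l2"]]
x = [4.0, 4.0]                                   # initial split of the demand
for _ in range(2000):                            # greedy shift toward equal marginal costs
    F = {"l1": x[0], "l2": x[1]}
    mc = [marginal_cost(p, F, C) for p in paths]
    hi, lo = mc.index(max(mc)), mc.index(min(mc))
    step = min(0.001, x[hi])                     # move a little flow off the costlier path
    x[hi] -= step
    x[lo] += step

F = {"l1": x[0], "l2": x[1]}
print("split:", [round(v, 2) for v in x],
      "marginal costs:", [round(marginal_cost(p, F, C), 2) for p in paths])
```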
Congestion control
Proportional fairness
For adaptive-rate flows, the vector of rates $x_s$ is proportionally fair if, for any other feasible vector of rates $x_s'$,
$$\sum_{s \in S} \frac{x_s' - x_s}{x_s} \le 0,$$
i.e. the sum of the relative improvements possible for the individual flows is non-positive.
- It strikes a balance between the utilitarian criterion $\sum_{s \in S}(x_s' - x_s) \le 0$ (efficiency) and the egalitarian criterion $\min_{s \in S} x_s' - \min_{s \in S} x_s \le 0$ (fairness).
- When demand exceeds capacity, high-demand flows are favored, since they suffer a smaller proportional loss.
- It is aligned with elastic traffic, where the aggregate utility is prioritized.
(A) For the network problem, we maximize the network's revenue, which has diminishing returns w.r.t. the rate $x_s$,
$$\max_{x_s \ge 0} \sum_{s} \omega_s \log x_s, \quad \text{s.t. } \sum_{s:\, l \in L(s)} x_s \le C_l, \quad \text{(network)}$$
whose optimal rates $x_s$ are proportionally fair per unit charge; equivalently, the aggregated rate $x_r$ is obtained by replacing the single user on route r with $w_r$ users.
- The more a source pays ω_s [$/s] (its willingness-to-pay), the more bandwidth x_s [bps] it is allocated.
- ω_s [$/s] = p_s [$/bit] ⋅ x_s [bit/s].
(B) For the user problem, each source s maximizes its own utility net of the amount it pays,
$$\max_{\omega_s \ge 0}\ U_s\!\left(x_s = \frac{\omega_s}{p_s}\right) - \omega_s. \quad \text{(user)}$$
Together, (A) and (B) solve the system problem, which is intractable on its own since the source utility functions are not known by the network in advance,
$$\max \sum_{s \in S} U_s(x_s), \quad \text{s.t. } \sum_{s:\, l \in L(s)} x_s \le C_l, \ \forall l \in L. \quad \text{(system)}$$
Under the decomposition of the (system) problem into (network) and (user), the (network) Lagrangian gives $x_r = \frac{\omega_r}{\sum_{j \in r} p_j}$.
- The Lagrange multipliers $\mu_j = p_j$ are the shadow prices of link j, charged to all the sources s that use it. (See the Optimization paper for more details.)
The dual problem reduces to a differential equation: the rate at source s increases linearly with $\omega_s$ and decreases multiplicatively with the shadow prices accumulated along its path,
$$\frac{d x_s(t)}{dt} = k\left[\omega_s - x_s(t) \sum_{l \in L(s)} p_l(t)\right],$$
where the shadow prices (congestion signals) $p_l(t) = g_l\!\left(\sum_{s:\, l \in L(s)} x_s(t)\right)$ are summed along the path L(s) from source s to the destination,
- $g_l$ is the price charged by link l per unit flow through it [$/bps].
- The total flow through link l is $\sum_{s:\, l \in L(s)} x_s(t)$, i.e. the sum of the rates of all sources s whose path passes through link l.
- It is a distance-related (RTT-dependent) cost.
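A minimal Euler-step sketch of the rate-control equation above (the topology, willingness-to-pay values, gain k, and the soft-penalty price function $g_l$ are all made up): each source ramps up linearly in $\omega_s$ and is pushed down by the sum of link prices on its route.

```python
def kelly_rate_control(routes, omega, capacity, beta=50.0, k=0.05, dt=0.05, steps=20_000):
    """Euler integration of dx_s/dt = k * (omega_s - x_s * sum of p_l along route(s)),
    with a made-up soft-penalty price g_l(y) = beta * max(0, y - C_l)."""
    x = {s: 0.01 for s in routes}
    for _ in range(steps):
        load = {l: 0.0 for l in capacity}
        for s, links in routes.items():
            for l in links:
                load[l] += x[s]
        price = {l: beta * max(0.0, load[l] - capacity[l]) for l in capacity}
        for s, links in routes.items():
            x[s] += dt * k * (omega[s] - x[s] * sum(price[l] for l in links))
            x[s] = max(x[s], 0.0)
    return x

# classic example: s0 crosses both links, s1 and s2 use one link each; equal willingness-to-pay
routes = {"s0": ["l1", "l2"], "s1": ["l1"], "s2": ["l2"]}
omega = {s: 1.0 for s in routes}
capacity = {"l1": 1.0, "l2": 1.0}
rates = kelly_rate_control(routes, omega, capacity)
print({s: round(v, 2) for s, v in rates.items()})   # long flow near 1/3, one-link flows near 2/3
```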
Past papers
vs. max-min
Traffic management
QoS + Connectivity.
Elastic vs inelastic
Traffic classes and synchrony (see Flow control section)
- Guaranteed-service, or inelastic (delay-sensitive and unusable below a minimum bandwidth, i.e. utility drops to 0)
  - Synchronous, or real-time, e.g. gaming, video conferencing.
  - open-loop flow control with admission control (RTP over UDP).
- Best-effort, or elastic (benefits from higher throughput, but not delay-sensitive)
  - Asynchronous, or non-real-time, e.g. file transfer, email, web browsing.
  - closed-loop flow control with congestion control (TCP), with the rate adapting to network conditions.
Utility function, Traffic model (measure or math), time scale, renegotiation, admission control, capacity planning and macroscopic QoS.
Peak load pricing
Signalling
Related: multicast (RSVP: Resource reSerVation Protocol)
General
IntServ vs. DiffServ
System design
randomness
Layering