As a first concrete example, the pseudocode below gives the DualPI2
algorithm. DualPI2 follows the structure of the DualQ Coupled AQM
framework described earlier. A simple ramp function (configured in
units of queuing time) with unsmoothed ECN marking is used for the
Native L4S AQM. The ramp can also be configured as a step function.
The PI2 algorithm, an improved variant of the PIE AQM, is used for
the Classic AQM.
The pseudocode will be introduced in two passes. The first pass
explains the core concepts, deferring handling of edge-cases like
overload to the second pass. To aid comparison, line numbers are kept in
step between the two passes by using letter suffixes where the longer
code needs extra lines.
All variables are assumed to be floating point in their basic units
(size in bytes, time in seconds, rates in bytes/second, alpha and beta
in Hz, and probabilities from 0 to 1). Constants expressed in k (kilo),
M (mega), G (giga), u (micro), m (milli), %, ... are assumed to be
converted to their appropriate multiple or fraction to represent the
basic units. A real implementation that wants to use integer values
needs to handle appropriate scaling factors and allow accordingly
appropriate resolution of its integer types (including temporary
internal values during calculations).
A full open source implementation for Linux is available at:
https://github.com/L4STeam/sch_dualpi2_upstream. A specification of
the DualQ Coupled AQM for DOCSIS cable modems and CMTSs is also
available.
The pseudocode manipulates three main structures of variables: the
packet (pkt), the L4S queue (lq) and the Classic queue (cq). The
pseudocode consists of the following six functions:
The initialization function dualpi2_params_init(...), which sets parameter
defaults (the API for setting non-default values is omitted for
brevity);
The enqueue function dualpi2_enqueue(lq, cq, pkt);
The dequeue function dualpi2_dequeue(lq, cq, pkt);
The recurrence function recur(q, likelihood) for de-randomized
ECN marking;
The L4S AQM function laqm(qdelay), used to calculate the
ECN-marking probability for the L4S queue;
The base AQM function dualpi2_update(lq, cq), which implements the
PI algorithm and is used to regularly update the base probability (p'),
which is squared for the Classic AQM as well as being coupled across
to the L4S queue.

It also uses the following functions that are not shown in
full here:
scheduler(), which selects between the head packets of the two
queues; the choice of scheduler technology is discussed later;
cq.byt() or lq.byt() returns the current length
(aka. backlog) of the relevant queue in bytes;
cq.len() or lq.len() returns the current length of the relevant
queue in packets;
cq.time() or lq.time() returns the current queuing delay of the
relevant queue in units of time (see Note a);
mark(pkt) and drop(pkt) for ECN-marking and dropping a
packet;

In experiments so far (building on experiments with PIE) on
broadband access links ranging from 4 Mb/s to 200 Mb/s with base RTTs
from 5 ms to 100 ms, DualPI2 achieves good results with its default
parameters. The parameters are categorised by whether they relate to
the Base PI2 AQM, the L4S AQM or the framework coupling them together.
Constants and variables derived from these parameters are also included
at the end of each category. Each parameter is explained as it is
encountered in the walk-through of the pseudocode below, and the
rationale for the chosen defaults is given so that sensible values can
be used in scenarios other than the regular public Internet.
The overall goal of the code is to apply the marking and dropping
probabilities for L4S and Classic traffic (p_L and p_C). These are
derived from the underlying base probabilities p'_L and p', driven
respectively by the traffic in the L and C queues. The marking
probability for the L queue (p_L) depends on both the base probability
in its own queue (p'_L) and a probability called p_CL, which is
coupled across from p' in the C queue.
The probabilities p_CL and p_C are derived in lines 4 and 5 of the
dualpi2_update() function, then used in the dualpi2_dequeue() function,
where p_L is also derived from p_CL at line 6. The code walk-through
below builds up to explaining that part of the code eventually, but it
starts from packet arrival.
When packets arrive, first a common queue limit is checked, as shown
in line 2 of the enqueuing pseudocode. This assumes a shared buffer
for the two queues (Note b discusses the merits of separate buffers).
In order to avoid any bias against larger packets, 1 MTU of space is
always allowed, and the limit is deliberately tested before
enqueue.
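The admission test described above can be sketched in Python as follows. This is an illustrative rendering, not the normative pseudocode; the MTU value and function name are assumptions for the example.

```python
MTU = 1500  # bytes; illustrative value, the real MTU is link-specific

def admits(shared_backlog_bytes, limit_bytes):
    # Test the limit before enqueue, always leaving 1 MTU of headroom so
    # that a maximum-size packet is never biased against.
    return shared_backlog_bytes + MTU <= limit_bytes
```

With a limit of 11 MTU, a backlog of 10 full-size packets still admits one more, because the headroom is reserved up front.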
If the limit is not exceeded, the packet is timestamped in line 4 (only
if the sojourn time technique is being used to measure queue delay;
see Note a for alternatives).
At lines 5-9, the packet is classified and enqueued to the Classic
or L4S queue dependent on the least significant bit of the ECN field
in the IP header (line 6). Packets with a codepoint having an LSB of 0
(Not-ECT and ECT(0)) will be enqueued in the Classic queue. Otherwise,
ECT(1) and CE packets will be enqueued in the L4S queue. Optional
additional packet classification flexibility is omitted for brevity
(see the L4S ECN protocol specification).
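A minimal Python sketch of this classification rule, assuming the standard two-bit ECN codepoint values; the function name is hypothetical and not part of the pseudocode.

```python
# Two-bit ECN field codepoints (Not-ECT, ECT(1), ECT(0), CE).
NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

def classify(ecn_field):
    """Select the queue from the ECN field's least significant bit:
    LSB 1 (ECT(1) or CE) goes to the L4S queue, LSB 0 to Classic."""
    return "L4S" if ecn_field & 1 else "Classic"
```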
The dequeue pseudocode is repeatedly called whenever
the lower layer is ready to forward a packet. It schedules one packet
for dequeuing (or zero if the queue is empty) then returns control to
the caller, so that it does not block while that packet is being
forwarded. While making this dequeue decision, it also makes the
necessary AQM decisions on dropping or marking. The alternative of
applying the AQMs at enqueue would shift some processing from the
critical time when each packet is dequeued. However, it would also add
a whole queue of delay to the control signals, making the control loop
sloppier (for a typical RTT it would double the Classic queue's
feedback delay).
All the dequeue code is contained within a large while loop so that
if it decides to drop a packet, it will continue until it selects a
packet to schedule. Line 3 of the dequeue pseudocode is where the
scheduler chooses between the L4S queue (lq) and the Classic queue
(cq). Detailed implementation of the scheduler is not shown (see
discussion later).
If an L4S packet is scheduled, in lines 7 and 8 the packet is
ECN-marked with likelihood p_L. The recur() function is used, which is
preferred over random marking because it avoids delay due to
randomization when interpreting congestion signals, but it still
desynchronizes the saw-teeth of the flows. Line 6 calculates p_L
as the maximum of the coupled L4S probability p_CL and the
probability from the native L4S AQM p'_L. This implements the
max() function that couples the outputs of the two AQMs together.
Of the two probabilities input to p_L in line 6:
p'_L is calculated per packet in line 5 by the laqm() function,
whereas p_CL is maintained by the dualpi2_update() function,
which runs every Tupdate (Tupdate is set in line 12 of the
initialization function).
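The de-randomized marking performed by recur() can be illustrated with the following Python sketch (the class and function bodies here are an illustrative rendering of the counter-based behaviour described above, not the normative pseudocode): an accumulator gathers the per-packet likelihood and emits one mark each time it exceeds 1, so marks are evenly spaced rather than random.

```python
class MarkState:
    """Per-queue accumulator for de-randomized ECN marking."""
    def __init__(self):
        self.count = 0.0

def recur(q, likelihood):
    # Accumulate the marking likelihood; fire one mark each time the
    # accumulator exceeds 1, carrying the remainder forward.
    q.count += likelihood
    if q.count > 1:
        q.count -= 1
        return True
    return False
```

For a steady likelihood of 0.25, exactly one packet in every four is marked once the accumulator is primed, with no random clustering of marks.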

If a Classic packet is scheduled, lines 10 to 17 drop or mark
the packet with probability p_C.

The Native L4S AQM algorithm is a ramp function, similar to
the RED algorithm, but simplified as follows:
The extent of the ramp is defined in units of queuing delay,
not bytes, so that configuration remains invariant as the queue
departure rate varies.
It uses instantaneous queueing delay, which avoids the
complexity of smoothing, but also avoids embedding a worst-case
RTT of smoothing delay in the network.
The ramp rises linearly directly from 0 to 1, not to an
intermediate value of p'_L as RED would, because there is no need
to keep ECN marking probability low.
Marking does not have to be randomized. Determinism is used
instead of randomness, to reduce the delay necessary to smooth out
the noise of randomness from the signal.

The ramp function requires two configuration parameters, the
minimum threshold (minTh) and the width of the ramp (range), both in
units of queuing time, as shown in lines 17 & 18 of the
initialization function. The ramp function can be
configured as a step (see Note c).
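A Python sketch of such a ramp follows. The threshold values here are placeholders chosen for illustration, not the recommended defaults, and the function is a non-normative rendering of the behaviour described above.

```python
MIN_TH = 0.001   # minimum marking threshold, seconds (placeholder value)
RANGE  = 0.001   # width of the ramp, seconds (placeholder value)

def laqm(qdelay):
    """Native L4S marking probability: rises linearly from 0 at MIN_TH
    to 1 at MIN_TH + RANGE, with no intermediate cap as in RED."""
    if qdelay >= MIN_TH + RANGE:
        return 1.0
    if qdelay > MIN_TH:
        return (qdelay - MIN_TH) / RANGE
    return 0.0
```

Setting RANGE to zero would turn the ramp into a step, as discussed in Note c.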
Although the DCTCP paper
recommends an ECN marking threshold of 0.17*RTT_typ, it also shows
that the threshold can be much shallower with hardly any worse
under-utilization of the link (because the amplitude of DCTCP's
sawteeth is so small). Based on extensive experiments, for the public
Internet the default minimum ECN marking threshold is considered a good
compromise, even though it is a significantly smaller fraction of
RTT_typ.
The coupled marking probability, p_CL, depends on the base
probability (p'), which is kept up to date by the core PI algorithm,
executed every Tupdate.
Note that p' solely depends on the queuing time in the Classic
queue. In line 2, the current queuing delay (curq) is evaluated from
how long the head packet was in the Classic queue (cq). The function
cq.time() (not shown) subtracts the time stamped at enqueue from the
current time (see Note a) and implicitly takes the current queuing
delay as 0 if the queue is empty.
The algorithm centres on line 3, which is a classical
Proportional-Integral (PI) controller that alters p' dependent on: a)
the error between the current queuing delay (curq) and the target
queuing delay, 'target'; and b) the change in queuing delay since the
last sample. The name 'PI' represents the fact that the second factor
(how fast the queue is growing) is Proportional
to load while the first is the Integral of
the load (so it removes any standing queue in excess of the
target).
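The PI step can be sketched in Python as below, using the default constants discussed later in this walk-through (alpha = 0.16 Hz, beta = 3.2 Hz, target = 15 ms). This is an illustrative, non-normative rendering; the gains already incorporate Tupdate, so the step is applied once per update interval.

```python
TARGET = 0.015   # target queuing delay, seconds
ALPHA  = 0.16    # integral gain, Hz (already scaled by Tupdate)
BETA   = 3.2     # proportional gain, Hz

def pi_update(p, curq, prevq):
    """One Tupdate step of the base PI algorithm.
    Integral term: error from target; Proportional term: queue growth."""
    p += ALPHA * (curq - TARGET) + BETA * (curq - prevq)
    # Bound p' to [0,1] (the overflow handling the text mentions).
    return min(max(p, 0.0), 1.0)
```

At steady state (queue delay on target and not changing), p' is unchanged; a growing or above-target queue raises it, and an empty queue drives it toward zero.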
The target parameter can be set based on local knowledge, but the
aim is for the default to be a good compromise for anywhere in the
intended deployment environment, the public Internet. The target
queuing delay on line 9 of the initialization function is related to
the typical base RTT worldwide, RTT_typ, by two factors: target =
RTT_typ * g * f. Below we summarize the rationale behind these factors
and introduce a further adjustment. The two factors ensure that, in a
large proportion of cases (say 90%), the sawtooth variations in RTT of
a single flow will fit within the buffer without underutilizing the
link. Frankly, these factors are educated guesses, but with the
emphasis closer to 'educated' than to 'guess':
RTT_typ is taken as 25 ms. This is based on an average CDN
latency measured in each country, weighted by the number of
Internet users in that country, to produce an overall weighted
average for the Internet. Countries
were ranked by number of Internet users, and once 90% of Internet
users were covered, smaller countries were excluded to avoid
unrepresentatively small sample sizes. Also, importantly, the data
for the average CDN latency in China (with the largest number of
Internet users) has been removed, because the CDN latency was a
significant outlier and, on reflection, the experimental technique
seemed inappropriate to the CDN market in China.
g is taken as 0.38. The factor g is a geometry factor that
characterizes the shape of the sawteeth of prevalent Classic
congestion controllers. The geometry factor is the fraction of the
amplitude of the sawtooth variability in queue delay that lies
below the AQM's target. For instance, at low bit rate, the
geometry factor of standard Reno is 0.5, but at higher rates it
tends to just under 1. According to the census of congestion
controllers conducted by Mishra et al. in Jul-Oct
2019, most Classic TCP traffic
uses Cubic. And, according to further analysis, if running over a PI2
AQM, a large proportion
of this Cubic traffic would be in its Reno-Friendly mode, which
has a geometry factor of ~0.39 (all known implementations). The
rest of the Cubic traffic would be in true Cubic mode, which has a
geometry factor of ~0.36. Without modelling the sawtooth profiles
from all the other less prevalent congestion controllers, we
estimate a 7:3 weighted average of these two, resulting in an
average geometry factor of 0.38.
f is taken as 2. The factor f is a safety factor that increases
the target queue to allow for the distribution of RTT_typ around
its mean. Otherwise, the target queue would only avoid
underutilization for those users below the mean. It also provides
a safety margin for the proportion of paths in use that span
beyond the distance between a user and their local CDN. Currently,
no data is available on the variance of queue delay around the
mean in each region, so there is plenty of room for this guess to
become more educated.
This derivation recommends target = RTT_typ * g * f =
25 ms * 0.38 * 2 = 19 ms. However, a further adjustment is
warranted, because target is moving year-on-year. The underlying data
was collected in 2019, and evidence from
speedtest.net suggests RTT_typ reduced by 17% (fixed) or 12%
(mobile) between 2020 and 2021. Therefore, we recommend a default
of target = 15 ms at the time of writing (2021).
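The arithmetic above can be checked directly; this is just a worked restatement of the figures in the text, including the 7:3 weighted average behind the geometry factor.

```python
RTT_TYP = 0.025                  # typical base RTT: 25 ms
G = 0.7 * 0.39 + 0.3 * 0.36      # 7:3 weighting of Reno-Friendly vs.
                                 # true-Cubic geometry factors, ~0.38
F = 2                            # safety factor

target = RTT_TYP * round(G, 2) * F   # 25 ms * 0.38 * 2 = 19 ms
```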

Operators can always use the data and discussion above to configure a
more appropriate target for their environment. For instance, an
operator might wish to question the assumptions called out above, such
as the goal of no underutilization for a large majority of single-flow
transfers (given many large transfers use multiple flows to avoid the
scaling limitations of Classic flows).
The two 'gain factors' in line 3 of the update function, alpha and
beta, respectively weight how strongly each of the two elements
(Integral and Proportional) alters p'. They are in units of 'per
second of delay', or Hz, because they transform differences in
queueing delay into changes in probability (assuming probability has
a value from 0 to 1).
Alpha and beta determine how much p' ought to change after each
update interval (Tupdate). For smaller Tupdate, p' should change by
the same amount per second, but in finer more frequent steps. So alpha
depends on Tupdate (see line 13 of the initialization function). It
is best to update
p' as frequently as possible, but Tupdate will probably be constrained
by hardware performance. As shown in line 13, the update interval
should be frequent enough to update at least once in the time taken
for the target queue to drain ('target') as long as it updates at
least three times per maximum RTT. Tupdate defaults to 16 ms in the
reference Linux implementation because it has to be rounded to a
multiple of 4 ms. For link rates from 4 to 200 Mb/s and a maximum RTT
of 100 ms, it has been verified through extensive testing that
Tupdate=16ms (as also recommended in the PIE spec) is sufficient.
The choice of alpha and beta also determines the AQM's stable
operating range. The AQM ought to change p' as fast as possible in
response to changes in load without over-compensating and therefore
causing oscillations in the queue. Therefore, the values of alpha and
beta also depend on the RTT of the expected worst-case flow
(RTT_max).
The maximum RTT of a PI controller (RTT_max in line 10 of the
initialization function) is not an absolute maximum; rather, more
instability (more queue variability) sets in for long-running
flows with an RTT above this value. The propagation delay halfway
round the planet and back in glass fibre is 200 ms. However, hardly
any traffic traverses such extreme paths and, since the significant
consolidation of Internet traffic between 2007 and 2009, a high and
growing proportion of all Internet
traffic (roughly two-thirds at the time of writing) has been served
from content distribution networks (CDNs) or 'cloud' services
distributed close to end-users. The Internet might change again, but
for now, designing for a maximum RTT of 100 ms is a good compromise
between faster queue control at low RTT and some instability on the
occasions when a longer path is necessary.
Recommended derivations of the gain constants alpha and beta can be
approximated for Reno over a PI2 AQM as: alpha = 0.1 * Tupdate /
RTT_max^2; beta = 0.3 / RTT_max, as shown in lines 14 & 15 of
the initialization function. These are derived
from a stability analysis. For the default
values of Tupdate=16 ms and RTT_max = 100 ms, they result in alpha =
0.16; beta = 3.2 (discrepancies are due to rounding). These defaults
have been verified with a wide range of link rates, target delays and
a range of traffic models with mixed and similar RTTs, short and long
flows, etc.
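The derivation of these defaults is simple enough to verify directly; the slight difference between the computed beta of 3.0 and the quoted 3.2 reflects the rounding mentioned above.

```python
TUPDATE = 0.016   # update interval, seconds
RTT_MAX = 0.100   # design maximum RTT, seconds

alpha = 0.1 * TUPDATE / RTT_MAX**2   # -> 0.16 Hz
beta  = 0.3 / RTT_MAX                # -> 3.0 Hz (text quotes 3.2 after
                                     #    rounding)
```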
In corner cases, p' can overflow the range [0,1], so the resulting
value of p' has to be bounded (omitted from the pseudocode). Then, as
already explained, the coupled and Classic probabilities are derived
from the new p' in lines 4 and 5 of the update function as p_CL = k*p'
and p_C = p'^2.
Because the coupled L4S marking probability (p_CL) is factored up
by k, the dynamic gain parameters alpha and beta are also inherently
factored up by k for the L4S queue. So, the effective gain factor for
the L4S queue is k*alpha (with defaults alpha = 0.16 Hz and k=2,
effective L4S alpha = 0.32 Hz).
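The coupling step in lines 4 and 5 of dualpi2_update() can be sketched as follows. This is illustrative only; the cap of p_CL at 1 is an assumption for the sketch (saturation at 100% marking is discussed under overload later).

```python
K = 2  # default coupling factor

def couple(p_base):
    """Derive the coupled L4S probability and the Classic probability
    from the base probability p'."""
    p_CL = min(K * p_base, 1.0)   # linear coupling, capped at 100%
    p_C  = p_base ** 2            # squared for Classic traffic
    return p_CL, p_C
```

For example, a base probability of 0.25 yields 50% coupled L4S marking but only 6.25% Classic drop/marking, which is the intended square-versus-linear relationship.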
Unlike in PIE, alpha and beta do not
need to be tuned every Tupdate dependent on p'. Instead, in PI2, alpha
and beta are independent of p' because the squaring applied to Classic
traffic tunes them inherently. This more principled approach
removes the need for most of the heuristics that had to be added to
PIE.
Nonetheless, an implementer might wish to add selected details to
either AQM. For instance the Linux reference DualPI2 implementation
includes the following (not shown in the pseudocode above):
Classic and coupled marking or dropping (i.e. based on p_C
and p_CL from the PI controller) is not applied to a packet if the
aggregate queue length in bytes is < 2 MTU (prior to enqueuing
the packet or dequeuing it, depending on whether the AQM is
configured to be applied at enqueue or dequeue);
In the WRR scheduler, the 'credit' indicating which queue
should transmit is only changed if there are packets in both
queues (i.e. if there is actual resource contention). This
means that a properly paced L flow might never be delayed by the
WRR. The WRR credit is reset in favour of the L queue when the
link is idle.

An implementer might also wish to add other heuristics,
e.g. burst protection or enhanced burst protection.
Notes:
The drain rate of the queue can vary
if it is scheduled relative to other queues, or to cater for
fluctuations in a wireless medium. To auto-adjust to changes in
drain rate, the queue needs to be measured in time, not bytes or
packets.
Queuing delay could be measured directly as the sojourn time (aka.
service time) of the queue, by storing a per-packet time-stamp as
each packet is enqueued, and subtracting this from the system time
when the packet is dequeued. If time-stamping is not easy to
introduce with certain hardware, queuing delay could be predicted
indirectly by dividing the size of the queue by the predicted
departure rate, which might be known precisely for some link
technologies (see, for example, DOCSIS PIE [RFC8034]).
However, sojourn time is slow to detect bursts.
For instance, if a burst arrives at an empty queue, the sojourn
time only fully measures the burst's delay when its last packet is
dequeued, even though the queue has known the size of the burst
since its last packet was enqueued, so it could have signalled
congestion earlier. To remedy this, each head packet can be marked
when it is dequeued based on the expected delay of the tail packet
behind it, as explained below, rather than based on the head
packet's own delay due to the packets in front of it. One evaluation
identified a specific scenario where bursty
traffic significantly hits utilization of the L queue. If this
effect proves to be more widely applicable, using the delay behind
the head could improve performance.
The delay behind the head can be implemented by dividing the backlog
at dequeue by the link rate or, equivalently, multiplying the
backlog by the delay per unit of backlog. The implementation
details will depend on whether the link rate is known; if it is
not, a moving average of the delay per unit backlog can be
maintained. This delay consists of serialization as well as media
acquisition for shared media, so the details will depend strongly
on the specific link technology. This approach should be less
sensitive to timing errors, and cost less in operations and memory,
than the otherwise equivalent 'scaled sojourn time' metric, which
is the sojourn time of a packet scaled by the ratio of the queue
sizes when the packet departed and arrived.
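Where the link rate is known, the 'delay behind the head' measure reduces to a single division; the following Python sketch is illustrative, with assumed function and parameter names.

```python
def delay_behind_head(backlog_bytes, drain_rate_bytes_per_s):
    """Expected queuing delay (seconds) of the newest packet: the whole
    backlog divided by the drain rate, known as soon as the packet is
    enqueued rather than only when it is dequeued."""
    return backlog_bytes / drain_rate_bytes_per_s
```

For instance, a 150 kB backlog draining at 12 Mb/s (1.5 MB/s) represents 100 ms of delay behind the head, even though the head packet's own sojourn time may still be near zero.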
Line 2 of the dualpi2_enqueue() function assumes an implementation
where lq and cq share common buffer memory. An alternative
implementation could use separate buffers for each queue, in which
case the arriving packet would have to be classified first to
determine which buffer to check for available space. The choice is
a trade-off; a shared buffer can use less memory whereas separate
buffers isolate the L4S queue from tail-drop due to large bursts
of Classic traffic (e.g. a Classic Reno TCP during slow-start
over a long RTT).
There has been some concern that using the step function of
DCTCP for the Native L4S AQM requires end-systems to smooth the
signal for an unnecessarily large number of round trips to ensure
sufficient fidelity. A ramp is no worse than a step in initial
experiments with existing DCTCP. Therefore, it is recommended that
a ramp is configured in place of a step, which will allow
congestion control algorithms to investigate faster smoothing
algorithms. A ramp is more general than a
step, because an operator can effectively turn the ramp into a
step function, as used by DCTCP, by setting the range to zero.
There will not be a divide-by-zero problem at line 5 of the laqm()
function because, if minTh is equal to
maxTh, the condition for this ramp calculation cannot arise.

This section takes a second pass through the pseudocode, adding
details of two edge-cases: low link rate and overload. The full
dequeue function repeats the earlier one, but with
details of both edge-cases added. Similarly, the full version of the
core PI algorithm repeats the earlier one, but with overload details
added. The initialization, enqueue, L4S AQM and recur functions are
unchanged.
The link rate can be so low that it takes a single packet queue
longer to serialize than the threshold delay at which ECN marking
starts to be applied in the L queue. Therefore, a minimum marking
threshold parameter in units of packets rather than time is necessary
(Th_len, default 1 packet, in line 19 of the initialization function)
to ensure that the ramp
does not trigger excessive marking on slow links. Where an
implementation knows the link rate, it can set up this minimum at the
time it is configured. For instance, it would divide 1 MTU by the link
rate to convert it into a serialization time, then, if the lower
threshold of the Native L AQM ramp was lower than this serialization
time, it could increase the thresholds to shift the bottom of the ramp
to 2 MTU. This is the approach used in DOCSIS, because the configured
link rate is dedicated to the DualQ.
The pseudocode given here applies where the link rate is unknown,
which is more common for software implementations that might be
deployed in scenarios where the link is shared with other queues. In
lines 5a to 5d, the native L4S marking probability, p'_L, is zeroed if
the queue is only 1 packet (in the default configuration).
Linux implementation note:
In Linux, the check that the queue exceeds Th_len before
marking with the native L4S AQM is actually at enqueue, not
dequeue, otherwise it would exempt the last packet of a burst from
being marked. The result of the check is conveyed from enqueue to
the dequeue function via a boolean in the packet metadata.

Persistent overload is deemed to have occurred when Classic
drop/marking probability reaches p_Cmax. Above this point, the Classic
drop probability is applied to both L and C queues, irrespective of
whether any packet is ECN-capable. ECT packets that are not dropped
can still be ECN-marked.
In line 10 of the initialization function, the maximum Classic drop
probability p_Cmax = min(1/k^2, 1), or 1/4 for the default coupling
factor k=2. In practice, 25% has been found to be a good threshold to
preserve fairness between ECN-capable and non-ECN-capable traffic.
This protects the queues against both temporary overload from
responsive flows and more persistent overload from any unresponsive
traffic that falsely claims to be responsive to ECN.
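The overload threshold can be stated as a one-line function; this is a worked restatement of the formula above, not part of the pseudocode.

```python
def p_cmax(k):
    """Maximum Classic drop probability for coupling factor k:
    min(1/k^2, 1)."""
    return min(1.0 / k**2, 1.0)
```

With the default k=2 this gives 25%, and for k=1 (or any k below 1) it saturates at 100%.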
When the Classic ECN marking probability reaches the p_Cmax
threshold (1/k^2), the marking probability coupled to the L4S queue,
p_CL, will always be 100% for any k (by the coupling equation given
earlier). So, for readability, the constant p_Lmax is
defined as 1 in line 22 of the initialization function. This is
intended to ensure
that the L4S queue starts to introduce dropping once ECN-marking
saturates at 100% and can rise no further. The 'Prague L4S'
requirements state
that, when an L4S congestion control detects a drop, it falls back to
a response that coexists with 'Classic' Reno congestion control. So it
is correct that, when the L4S queue drops packets, it drops them
proportional to p'^2, as if they are Classic packets.
The two queues each test for overload in lines 4b and 12b of the
dequeue function.
Lines 8c to 8g drop L4S packets with probability p'^2. Lines 8h to 8i
mark the remaining packets with probability p_CL. Given p_Lmax = 1,
all remaining packets will be marked because, to have reached the else
block at line 8b, p_CL >= 1.
Line 2a in the core PI algorithm deals with overload of the
L4S queue when there is little or no Classic traffic. This is
necessary because the core PI algorithm maintains the appropriate
drop probability to regulate overload, but it depends on the length of
the Classic queue. If there is little or no Classic queue, the naive PI
update function would drop
nothing, even if the L4S queue were overloaded, so tail drop would
have to take over (lines 2 and 3 of the enqueue function).
Instead, line 2a of the full PI update function ensures that the base
PI AQM
in line 3 is driven by whichever of the two queue delays is greater,
but line 3 still always uses the same Classic target (default 15 ms).
If L queue delay is greater just because there is little or no Classic
traffic, normally it will still be well below the base AQM target.
This is because L4S traffic is also governed by the shallow threshold
of its own native AQM (lines 5 and 6 of the dequeue algorithm). So
the base AQM will be
driven to zero and not contribute. However, if the L queue is
overloaded by traffic that is unresponsive to its marking, the max()
in line 2a enables the L queue to smoothly take over driving the base
AQM into overload mode, even if there is little or no Classic traffic.
Then the base AQM will keep the L queue to the Classic target (default
15 ms) by shedding L packets.
The choice of scheduler technology is critical to overload
protection, as discussed below.
A well-understood weighted scheduler such as weighted
round-robin (WRR) is recommended. As long as the scheduler weight
for Classic is small (e.g. 1/16), its exact value is
unimportant, because it does not normally determine capacity
shares. The weight is only important to prevent unresponsive L4S
traffic starving Classic traffic in the short term. This is because
capacity
sharing between the queues is normally determined by the coupled
congestion signal, which overrides the scheduler by making L4S
sources leave roughly equal per-flow capacity available for
Classic flows.
Alternatively, a time-shifted FIFO (TS-FIFO) could be used. It
works by selecting the head packet that has waited the longest,
biased against the Classic traffic by a time-shift of tshift. To
implement time-shifted FIFO, the scheduler() function in line 3 of
the dequeue code would simply be implemented as the scheduler()
function shown later in the Curvy RED pseudocode.
For the public Internet, a good
value for tshift is 50 ms. For private networks with smaller
diameter, about 4*target would be reasonable. TS-FIFO is a very
simple scheduler, but complexity might need to be added to address
some deficiencies (which is why it is not recommended over
WRR):
TS-FIFO does not fully isolate latency in the L4S queue
from uncontrolled bursts in the Classic queue;
Using sojourn time for TS-FIFO is only appropriate if
time-stamping of packets is feasible;
Even if time-stamping is supported, the sojourn time of the
head packet is always stale, so a more instantaneous measure
of queue delay could be used (see Note a above).

A strict priority scheduler would be inappropriate, as discussed
earlier.
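The TS-FIFO selection rule described above can be sketched as follows. This is a simplified, non-normative illustration: it assumes both queues are non-empty and takes the head packets' queuing delays as inputs.

```python
TSHIFT = 0.050  # time shift favouring the L4S queue, seconds
                # (the 50 ms value suggested for the public Internet)

def ts_fifo(l_head_delay, c_head_delay):
    """Select the queue whose head packet has waited longest, with the
    Classic head's waiting time discounted by TSHIFT."""
    if c_head_delay > l_head_delay + TSHIFT:
        return "Classic"
    return "L4S"
```

So a Classic head packet is only dequeued ahead of an L4S one once it has waited more than tshift longer, which is what biases the scheduler towards low L4S delay.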

As another example of a DualQ Coupled AQM algorithm, the pseudocode
below gives the Curvy RED based algorithm. Although the AQM was designed
to be efficient in integer arithmetic, to aid understanding it is first
given using floating point arithmetic. Then, one possible optimization
for integer arithmetic is also given in pseudocode. To aid comparison,
the line numbers are
kept in step between the two by using letter suffixes where the longer
code needs extra lines.
The pseudocode manipulates three main structures of variables: the
packet (pkt), the L4S queue (lq) and the Classic queue (cq), and
consists of the following three functions:
The initialization function cred_params_init(...), which sets parameter
defaults (the API for setting non-default values is omitted for
brevity);
The dequeue function cred_dequeue(lq, cq, pkt);
The scheduling function scheduler(), which selects between the
head packets of the two queues.

It also uses the following functions that are either shown
elsewhere, or not shown in full here:
The enqueue function, which is identical to that used for
DualPI2, dualpi2_enqueue(lq, cq, pkt);
mark(pkt) and drop(pkt) for ECN-marking and dropping a
packet;
cq.byt() or lq.byt() returns the current length
(aka. backlog) of the relevant queue in bytes;
cq.time() or lq.time() returns the current queuing delay of the
relevant queue in units of time (see Note a in the DualPI2 notes
above).

Because Curvy RED was evaluated before DualPI2, certain
improvements introduced for DualPI2 were not evaluated for Curvy RED.
In the pseudocode below, the straightforward improvements have been
added on the assumption they will provide similar benefits, but that
has not been proven experimentally. They are: i) a conditional
priority scheduler instead of strict priority; ii) a time-based
threshold for the native L4S AQM; and iii) ECN support for the Classic
AQM. A recent evaluation has proved that a minimum ECN-marking
threshold (minTh) greatly improves performance, so this is also
included in the pseudocode.
Overload protection has not been added to the Curvy RED pseudocode
below so as not to detract from the main features. It would be added
in exactly the same way as for
the DualPI2 pseudocode. The native L4S AQM uses a step threshold, but
a ramp like that described for DualPI2 could be used instead. The
scheduler uses the simple TS-FIFO algorithm, but it could be replaced
with WRR.
The Curvy RED algorithm has not been maintained or evaluated to the
same degree as the DualPI2 algorithm. In initial experiments on
broadband access links ranging from 4 Mb/s to 200 Mb/s with base RTTs
from 5 ms to 100 ms, Curvy RED achieved good results with its default
parameters.
The parameters are categorised by whether they relate to the
Classic AQM, the L4S AQM or the framework coupling them together.
Constants and variables derived from these parameters are also
included at the end of each category. These are the raw input
parameters for the algorithm. A configuration front-end could accept
more meaningful parameters (e.g. RTT_max and RTT_typ) and convert
them into these raw parameters, as has been done for DualPI2 in . Where necessary, parameters are
explained further in the walk-through of the pseudocode below.
The dequeue pseudocode () is
repeatedly called whenever the lower layer is ready to forward a
packet. It schedules one packet for dequeuing (or zero if the queue is
empty) then returns control to the caller, so that it does not block
while that packet is being forwarded. While making this dequeue
decision, it also makes the necessary AQM decisions on dropping or
marking.
The code is written assuming the AQMs are applied on dequeue (Note
). All the dequeue
code is contained within a large while loop so that if it decides to
drop a packet, it will continue until it selects a packet to schedule.
If both queues are empty, the routine returns NULL at line 20. Line 3
of the dequeue pseudocode is where the conditional priority scheduler
chooses between the L4S queue (lq) and the Classic queue (cq). The
time-shifted FIFO scheduler is shown at lines 28-33, which would be
suitable if simplicity is paramount (see Note ).
Within each queue, the decision whether to forward, drop or mark is
taken as follows (to simplify the explanation, it is assumed that
U=1):
If the test at line 3 determines there is an
L4S packet to dequeue, the tests at lines 5b and 5c determine
whether to mark it. The first is a simple test of whether the L4S
queue delay (lq.time()) is greater than a step threshold T (Note
). The second
test is similar to the random ECN marking in RED, but with the
following differences: i) marking depends on queuing time, not
bytes, in order to scale for any link rate without being
reconfigured; ii) marking of the L4S queue depends on a logical OR
of two tests; one against its own queuing time and one against the
queuing time of the other (Classic)
queue; iii) the tests are against the instantaneous queuing time
of the L4S queue, but a smoothed average of the other (Classic)
queue; iv) the queue is compared with the maximum of U random
numbers (but if U=1, this is the same as the single random number
used in RED). Specifically, in line 5a the
coupled marking probability p_CL is set to the amount by which the
averaged Classic queueing delay Q_C exceeds the minimum queuing
delay threshold (minTh) all divided by the L4S scaling parameter
range_L. range_L represents the queuing delay (in seconds) added
to minTh at which marking probability would hit 100%. Then in line
5c (if U=1) the result is compared with a uniformly distributed
random number between 0 and 1, which ensures that, over range_L,
marking probability will linearly increase with queueing time.
If the scheduler at line 3 chooses to
dequeue a Classic packet and jumps to line 7, the test at line 10b
determines whether to drop or mark it. But before that, line 9a
updates Q_C, which is an exponentially weighted moving average
(Note ) of
the queuing time of the Classic queue, where cq.time() is the
current instantaneous queueing time of the packet at the head of
the Classic queue (zero if empty) and gamma is the EWMA constant
(default 1/32, see line 12 of the initialization function).
Lines 10a and 10b implement the Classic
AQM. In line 10a the averaged queuing time Q_C is divided by the
Classic scaling parameter range_C, in the same way that queuing
time was scaled for L4S marking. This scaled queuing time will be
squared to compute Classic drop probability so, before it is
squared, it is effectively the square root of the drop
probability, hence it is given the variable name sqrt_p_C. The
squaring is done by comparing it with the maximum out of two
random numbers (assuming U=1). Comparing it with the maximum out
of two is the same as the logical 'AND' of two tests, which
ensures drop probability rises with the square of queuing
time.
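As an informal illustration (not part of the specification), the
per-queue decisions above can be sketched in Python; the names mirror
the walk-through (p_CL, sqrt_p_C, maxrand), and the parameter values
are placeholders:

```python
import random

def maxrand(u, rng=random):
    """Return the maximum of u uniform random numbers in [0, 1)."""
    return max(rng.random() for _ in range(u))

def l4s_mark(lq_time, Q_C, T, minTh, range_L, U=1, rng=random):
    """L4S marking decision (cf. lines 5a-5c): mark if the
    instantaneous L4S queue delay exceeds the step threshold T, OR
    probabilistically with the coupled probability p_CL derived from
    the smoothed Classic queue delay Q_C."""
    p_CL = max(0.0, (Q_C - minTh) / range_L)   # line 5a
    return lq_time > T or p_CL > maxrand(U, rng)

def classic_drop(Q_C, range_C, U=1, rng=random):
    """Classic drop/mark decision (cf. lines 10a-10b): comparing
    sqrt_p_C with the max of 2*U uniforms squares the probability."""
    sqrt_p_C = Q_C / range_C                   # line 10a
    return sqrt_p_C > maxrand(2 * U, rng)
```

Note how, with the Classic queue averaging at or below minTh and the
L4S queue below T, p_CL is zero and no L4S packet is ever marked.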

The AQM functions in each queue (lines 5c & 10b) are two cases
of a new generalization of RED called Curvy RED, motivated as follows.
When the performance of this AQM was compared with FQ-CoDel and PIE,
their goal of holding queuing delay to a fixed target seemed
misguided . As the number of flows
increases, if the AQM does not allow host congestion controllers to
increase queuing delay, it has to introduce abnormally high levels of
loss. Then loss rather than queuing becomes the dominant cause of
delay for short flows, due to timeouts and tail losses.
Curvy RED constrains delay with a softened target that allows some
increase in delay as load increases. This is achieved by increasing
drop probability on a convex curve relative to queue growth (the
square curve in the Classic queue, if U=1). Like RED, the curve hugs
the zero axis while the queue is shallow. Then, as load increases, it
introduces a growing barrier to higher delay. But, unlike RED, it
requires only two parameters, not three. The disadvantage of Curvy RED
(compared to a PI controller for example) is that it is not adapted to
a wide range of RTTs. Curvy RED can be used as is when the RTT range
to be supported is limited, otherwise an adaptation mechanism is
needed.
From our limited experiments with Curvy RED so far, recommended
values of these parameters are: S_C = -1; g_C = 5; T = 5 * MTU at the
link rate (about 1ms at 60Mb/s) for the range of base RTTs typical on
the public Internet. explains why these
parameters are applicable whatever rate link this AQM implementation
is deployed on and how the parameters would need to be adjusted for a
scenario with a different range of RTTs (e.g. a data centre). The
setting of k depends on policy (see
and respectively for its recommended
setting and guidance on alternatives).
There is also a cUrviness parameter, U, which is a small positive
integer. It is likely to take the same hard-coded value for all
implementations, once experiments have determined a good value. Only
U=1 has been used in experiments so far, but results might be even
better with U=2 or higher.
Notes:
The alternative of applying the
AQMs at enqueue would shift some processing from the critical time
when each packet is dequeued. However, it would also add a whole
queue of delay to the control signals, making the control loop
sloppier (for a typical RTT it would double the Classic queue's
feedback delay). On a platform where packet timestamping is
feasible, e.g. Linux, it is also easiest to apply the AQMs at
dequeue because that is where queuing time is also measured.
WRR better isolates
the L4S queue from large delay bursts in the Classic queue, but it
is slightly less simple than TS-FIFO. If WRR were used, a low
default Classic weight (e.g. 1/16) would need to be
configured in place of the time shift in line 5 of the
initialization function ().
A step function is shown for
simplicity. A ramp function (see and the discussion around it
in ) is recommended, because
it is more general than a step and has the potential to enable L4S
congestion controls to converge more rapidly.
An EWMA is only one possible way
to filter bursts; other more adaptive smoothing methods could be
valid and it might be appropriate to decrease the EWMA faster than
it increases, e.g. by using the minimum of the smoothed and
instantaneous queue delays, min(Q_C, qc.time()).
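The EWMA smoothing described in the note above, and its
faster-decreasing variant, can be sketched as follows (an informal
illustration; gamma = 1/32 as in line 12 of the initialization
function):

```python
GAMMA = 1.0 / 32  # EWMA constant gamma (line 12 of initialization)

def ewma_update(Q_C, cq_time, gamma=GAMMA):
    """Standard EWMA of Classic queuing time (cf. line 9a):
    Q_C <- (1 - gamma) * Q_C + gamma * cq.time()."""
    return (1 - gamma) * Q_C + gamma * cq_time

def ewma_fast_decrease(Q_C, cq_time, gamma=GAMMA):
    """Variant from the note: decrease faster than the EWMA increases
    by taking the min of the smoothed and instantaneous delays."""
    return min(ewma_update(Q_C, cq_time, gamma), cq_time)
```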

Although code optimization depends on the platform, the following
notes explain where the design of Curvy RED was particularly motivated
by efficient implementation.
The Classic AQM at line 10b calls maxrand(2*U), which gives twice
as much curviness as the call to maxrand(U) in the marking function at
line 5c. This is the trick that implements the square rule in equation
(1) (). This is based on the fact that,
given a number X from 1 to 6, the probability that two dice throws
will both be less than X is the square of the probability that one
throw will be less than X. So, when U=1, the L4S marking function is
linear and the Classic dropping function is squared. If U=2, L4S would
be a square function and Classic would be quartic. And so on.
The maxrand(u) function in lines 16-21 simply generates u random
numbers and returns the maximum. Typically, maxrand(u) could be run in
parallel out of band. For instance, if U=1, the Classic queue would
require the maximum of two random numbers. So, instead of calling
maxrand(2*U) in-band, the maximum of every pair of values from a
pseudorandom number generator could be generated out-of-band, and held
in a buffer ready for the Classic queue to consume.
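The squaring property behind maxrand(2*U) can be checked empirically
(an informal Monte Carlo check, not part of the specification):

```python
import random

def maxrand(u, rng):
    """Maximum of u uniform random numbers in [0, 1) (lines 16-21)."""
    return max(rng.random() for _ in range(u))

def hit_fraction(p, u, trials, seed=1):
    """Fraction of trials in which p exceeds maxrand(u); for u = 2
    this approaches p squared, which is the Classic drop rule."""
    rng = random.Random(seed)
    return sum(p > maxrand(u, rng) for _ in range(trials)) / trials
```

With p = 0.5, the fraction against one uniform approaches 0.5, while
the fraction against the max of two approaches 0.25 = 0.5^2.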
The two ranges, range_L and range_C are expressed as powers of 2 so
that division can be implemented as a right bit-shift (>>) in
lines 5 and 10 of the integer variant of the pseudocode ().
For the integer variant of the pseudocode, an integer version of
the rand() function used at line 25 of the maxrand() function would
be arranged to return an integer
in the range 0 <= rand() < 2^32 (not shown). This would scale
up all the floating point probabilities in the range [0,1] by
2^32.
Queuing delays are also scaled up by 2^32, but in two stages: i) In
line 9 queuing time qc.ns() is returned in integer nanoseconds, making
the value about 2^30 times larger than when the units were seconds,
ii) then in lines 5 and 10 an adjustment of -2 to the right bit-shift
multiplies the result by 2^2, to complete the scaling by 2^32.
In line 8 of the initialization function, the EWMA constant gamma
is represented as an integer power of 2, g_C, so that in line 9 of the
integer code the division needed to weight the moving average can be
implemented by a right bit-shift (>> g_C).
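These integer tricks can be sketched as follows (an informal
illustration with hypothetical values; the real code operates on
nanosecond timestamps):

```python
G_C = 5            # gamma = 2^-5 = 1/32, as an integer power of 2
RANGE_SHIFT = 8    # a range of 2^8 scaled units (assumed, for illustration)

def ewma_update_int(Q_C_ns, q_ns, g_C=G_C):
    """Integer EWMA: Q_C + gamma*(q - Q_C), with the division by
    2^g_C implemented as a right bit-shift."""
    return Q_C_ns + ((q_ns - Q_C_ns) >> g_C)

def scale_queue_delay(q_ns, shift=RANGE_SHIFT):
    """Divide a queuing delay by a power-of-two range with a
    bit-shift; the '-2' adjustment multiplies the result by 2^2,
    completing the scaling by 2^32 described in the text."""
    return q_ns >> (shift - 2)
```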
Where Classic flows compete for the same capacity, their relative
flow rates depend not only on the congestion probability, but also on
their end-to-end RTT (= base RTT + queue delay). The rates of
Reno flows competing over an AQM are
roughly inversely proportional to their RTTs. Cubic exhibits similar
RTT-dependence when in Reno-compatibility mode, but it is less
RTT-dependent otherwise.
Until the early experiments with the DualQ Coupled AQM, the
importance of the reasonably large Classic queue in mitigating
RTT-dependence when the base RTT is low had not been appreciated.
Appendix A.1.6 of the L4S ECN protocol uses numerical examples to
explain why bloated buffers had concealed the RTT-dependence of
Classic congestion controls before that time. Then it explains why,
as queuing delays have reduced, RTT-dependence has increasingly
surfaced as a potential starvation problem for long-RTT flows
competing against very short-RTT flows.
Given that congestion control on end-systems is voluntary, there is
no reason why it has to be voluntarily RTT-dependent. The
RTT-dependence of existing Classic traffic cannot be 'undeployed'.
Therefore, requires L4S
congestion controls to be significantly less RTT-dependent than the
standard Reno congestion control , at
least at low RTT. Then RTT-dependence ought to be no worse than it is
with appropriately sized Classic buffers. Following this approach
means there is no need for network devices to address RTT-dependence,
although there would be no harm if they did, which per-flow queuing
inherently does.
The coupling factor, k, determines the balance between L4S and
Classic flow rates (see and equation
(1)).
For the public Internet, a coupling factor of k=2 is recommended,
and justified below. For scenarios other than the public Internet, a
good coupling factor can be derived by plugging the appropriate
numbers into the same working.
To summarize the maths below, from equation (7) it can be seen that
choosing k=1.64 would theoretically make L4S throughput roughly the
same as Classic, if their actual end-to-end RTTs were the same .
However, even if the base RTTs are the same, the actual RTTs are
unlikely to be the same, because Classic traffic needs a fairly large
queue to avoid under-utilization and excess drop, whereas L4S does
not.
Therefore, to determine the appropriate coupling factor policy, the
operator needs to decide at what base RTT it wants L4S and Classic
flows to have roughly equal throughput, once the effect of the
additional Classic queue on Classic throughput has been taken into
account. With this approach, a network operator can determine a good
coupling factor without knowing the precise L4S algorithm for reducing
RTT-dependence - or even in the absence of any algorithm.
The following additional terminology will be used, with appropriate
subscripts:

   r   Packet rate [pkt/s]
   R   RTT [s/round]
   p   ECN marking probability [dimensionless]

On the Classic side, we consider Reno as the most sensitive and
therefore worst-case Classic congestion control. We will also consider
Cubic in its Reno-friendly mode ('CReno'), as the most prevalent
congestion control, according to the references and analysis in . In either case, the Classic packet rate in steady
state is given by the well-known square root formula for Reno
congestion control:
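The equation itself (a reconstruction, consistent with the well-known
square-root law; r_C is the Classic packet rate, p_C the drop
probability and R_C the Classic RTT, numbered (5) as cited later in
the text) is:

```latex
r_C = \frac{\sqrt{3/2}}{R_C\,\sqrt{p_C}} \qquad (5)
```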
On the L4S side, we consider the Prague congestion
control as the
reference for steady-state dependence on congestion. Prague conforms
to the same equation as DCTCP, but we do not use the equation derived
in the DCTCP paper, which is only appropriate for step marking. The
coupled marking, p_CL, is the appropriate one when considering
throughput equivalence with Classic flows. Unlike step marking,
coupled markings are inherently spaced out, so we use the formula for
DCTCP packet rate with probabilistic marking derived in Appendix A of
. We use the equation without RTT-independence
enabled, which will be explained later.
For packet rate equivalence, we equate the two packet rates and
rearrange into the same form as Equation (1), so the two can be
equated and simplified to produce a formula for a theoretical coupling
factor, which we shall call k*:
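Written out (a reconstruction, consistent with the k = 1.64 value
quoted above; (6) is the probabilistic-marking DCTCP/Prague rate, and
equation (1) gives p_C = (p_CL/k)^2):

```latex
r_L = \frac{2}{R_L\,p_{CL}} \qquad (6)

% Equate (5) and (6) and substitute \sqrt{p_C} = p_{CL}/k^* from (1):
k^* = \sqrt{8/3}\;\frac{R_C}{R_L} \;\approx\; 1.64\,\frac{R_C}{R_L} \qquad (7)
```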
We say that this coupling factor is theoretical, because it is in
terms of two RTTs, which raises two practical questions: i) for
multiple flows with different RTTs, the RTT for each traffic class
would have to be derived from the RTTs of all the flows in that class
(actually the harmonic mean would be needed); ii) a network node
cannot easily know the RTT of the flows anyway.
RTT-dependence is caused by window-based congestion control, so it
ought to be reversed there, not in the network. Therefore, we use a
fixed coupling factor in the network, and reduce RTT-dependence in L4S
senders. We cannot expect Classic senders to all be updated to reduce
their RTT-dependence. But solely addressing the problem in L4S senders
at least makes RTT-dependence no worse - not just between L4S senders,
but also between L4S and Classic senders.
Traditionally, throughput equivalence has been defined for flows
under comparable conditions, including with the same base
RTT . So if we assume the same base RTT,
R_b, for comparable flows, we can put both R_C and R_L in terms of
R_b.
We can approximate the L4S RTT to be hardly greater than the base
RTT, i.e. R_L ~= R_b. And we can replace R_C with (R_b + q_C),
where the Classic queue, q_C, depends on the target queue delay that
the operator has configured for the Classic AQM.
Taking PI2 as an example Classic AQM, it seems that we could just
take R_C = R_b + target (recommended 15 ms by default in ). However, target is roughly the queue
depth reached by the tips of the sawteeth of a congestion control, not
the average . That is R_max = R_b +
target.
The position of the average in relation to the max depends on the
amplitude and geometry of the sawteeth. We consider two examples:
Reno , as the most sensitive worst-case,
and Cubic in its Reno-friendly mode
('CReno') as the most prevalent congestion control algorithm on the
Internet according to the references in .
Both are AIMD, so we will generalize using b as the multiplicative
decrease factor (b_r = 0.5 for Reno, b_c = 0.7 for CReno). Then:
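Written out (a reconstruction; the average RTT of an AIMD sawtooth
lies roughly midway between its post-decrease value b*R_max and
R_max):

```latex
R_C \approx \frac{1+b}{2}\,R_{\max}
    = \frac{1+b}{2}\,(R_b + \mathrm{target}) \qquad (8)

% b_r = 0.5:  R_{C,reno}  \approx 0.75\,(R_b + \mathrm{target})
% b_c = 0.7:  R_{C,creno} \approx 0.85\,(R_b + \mathrm{target})
```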
Plugging all this into equation (7) we get a fixed coupling factor
for each:
An operator can then choose the base RTT at which it wants
throughput to be equivalent. For instance, if we recommend that the
operator chooses R_b = 25 ms, as a typical base RTT between Internet
users and CDNs , then these coupling
factors become:
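Numerically, this can be sketched as follows (an informal check,
assuming the theoretical factor k* ~= 1.64 * R_C/R_L quoted earlier,
with R_L ~= R_b and the AIMD average R_C ~= (1+b)/2 * (R_b + target)):

```python
import math

def coupling_factor(b, R_b, target):
    """Fixed coupling factor k for an AIMD Classic flow with
    multiplicative decrease factor b, base RTT R_b and Classic AQM
    target queue delay, all in seconds."""
    R_C = (1 + b) / 2 * (R_b + target)   # average Classic RTT
    return math.sqrt(8 / 3) * R_C / R_b  # k* with R_L ~= R_b

# R_b = 25 ms, target = 15 ms:
k_reno = coupling_factor(0.5, 0.025, 0.015)    # ~2.0
k_creno = coupling_factor(0.7, 0.025, 0.015)   # ~2.2
```

Rounding to the nearest integer power of 2 gives k = 2, fitting the
worst case (Reno) best, as the text notes.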
The approximation is relevant to any of the above example DualQ
Coupled algorithms, which use a coupling factor that is an integer
power of 2 to aid efficient implementation. It also fits best to the
worst case (Reno).
To check the outcome of this coupling factor, we can express the
ratio of L4S to Classic throughput by substituting from their rate
equations (5) and (6), then also substituting for p_C in terms of
p_CL, using equation (1) with k=2 as just determined for the
Internet:
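Written out (a reconstruction: dividing (6) by (5) and substituting
sqrt(p_C) = p_CL/2 from equation (1) with k = 2):

```latex
\frac{r_L}{r_C}
  = \frac{2}{R_L\,p_{CL}} \cdot \frac{R_C\,\sqrt{p_C}}{\sqrt{3/2}}
  = \sqrt{2/3}\;\frac{R_C}{R_L}
  \;\approx\; 0.82\,\frac{R_C}{R_L} \qquad (10)
```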
As an example, we can then consider single competing CReno and
Prague flows, by expressing both their RTTs in (10) in terms of their
base RTTs, R_bC and R_bL. So R_C is replaced by equation (8) for
CReno. And R_L is replaced by the max() function below, which
represents the effective RTT of the current Prague congestion
control in its
(default) RTT-independent mode, because it sets a floor to the
effective RTT that it uses for additive increase:
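Written out (a reconstruction; the value of the floor inside the
max() is an assumption here, chosen as the Classic target so that the
denominator plateaus below 15 ms as described next):

```latex
\frac{r_L}{r_C}
  \approx \sqrt{2/3}\;
  \frac{0.85\,(R_{bC} + \mathrm{target})}{\max(R_{bL},\,\mathrm{target})}
```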
It can be seen that, for base RTTs below target (15 ms), both the
numerator and the denominator plateau, which has the desired effect of
limiting RTT-dependence.
At the start of the above derivations, an explanation was promised
for why the L4S throughput equation in equation (6) did not need to
model RTT-independence. This is because we only use one point - at the
typical base RTT where the operator chooses to calculate the coupling
factor. Then, throughput equivalence will at least hold at that chosen
point. Nonetheless, assuming Prague senders implement RTT-independence
over a range of RTTs below this, the throughput equivalence will then
extend over that range as well.
Congestion control designers can choose different ways to reduce
RTT-dependence. And each operator can make a policy choice to decide
on a different base RTT, and therefore a different k, at which it
wants throughput equivalence. Nonetheless, for the Internet, it makes
sense to choose what is believed to be the typical RTT most users
experience, because a Classic AQM's target queuing delay is also
derived from a typical RTT for the Internet.
As a non-Internet example, for localized traffic from a particular
ISP's data centre, using the measured RTTs, it was calculated that a
value of k = 8 would achieve throughput equivalence, and experiments
verified the formula very closely.
But, for a typical mix of RTTs across the general Internet, a value
of k=2 is recommended as a good workable compromise.
Thanks to Anil Agarwal, Sowmini Varadhan, Gabi Bracha, Nicolas Kuhn,
Greg Skinner, Tom Henderson, David Pullen, Mirja Kühlewind, Gorry
Fairhurst, Pete Heist, Ermin Sakic and Martin Duke for detailed review
comments particularly of the appendices and suggestions on how to make
the explanations clearer. Thanks also to Tom Henderson for insights on
the choice of schedulers and queue delay measurement techniques. And
thanks to the area reviewers Christer Holmberg, Lars Eggert and Roman
Danyliw.
The early contributions of Koen De Schepper, Bob Briscoe, Olga
Bondarenko and Inton Tsang were part-funded by the European Community
under its Seventh Framework Programme through the Reducing Internet
Transport Latency (RITE) project (ICT-317700). Contributions of Koen De
Schepper and Olivier Tilmans were also part-funded by the 5Growth and
DAEMON EU H2020 projects. Bob Briscoe's contribution was also
part-funded by the Comcast Innovation Fund and the Research Council of
Norway through the TimeIn project. The views expressed here are solely
those of the authors.
The following contributed implementations and evaluations that
validated and helped to improve this specification:
Olga Albisser <olga@albisser.org> of Simula Research Lab,
Norway (Olga Bondarenko during early drafts) implemented the
prototype DualPI2 AQM for Linux with Koen De Schepper and conducted
extensive evaluations as well as implementing the live performance
visualization GUI .
Olivier Tilmans <olivier.tilmans@nokia-bell-labs.com> of
Nokia Bell Labs, Belgium prepared and maintains the Linux
implementation of DualPI2 for upstreaming.
Shravya K.S. wrote a model for the ns-3 simulator based on the
-01 version of this Internet-Draft. Based on this initial work, Tom
Henderson <tomh@tomh.org> updated that earlier model and
created a model for the DualQ variant specified as part of the Low
Latency DOCSIS specification, as well as conducting extensive
evaluations.
Ing Jyh (Inton) Tsang of Nokia, Belgium built the End-to-End Data
Centre to the Home broadband testbed on which DualQ Coupled AQM
implementations were tested.
