"Adaptive Quality of Service Architecture"
(for the Linux kernel)

Comparison with the Linux scheduler

Experiment n.1: Real-Time C Application

We use a real-time C application, rt-app, whose only purpose is to emulate the behaviour of many multimedia or control applications: it periodically activates a thread that performs some computations and then sleeps until the next activation.

rt-app measures the finishing time of each periodic job relative to its activation instant, in microseconds. The program displays the average relative finishing time experienced since its start, and the average and maximum relative finishing times experienced during the last 16 activations (jobs).
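The structure of such a periodic loop can be sketched as follows. This is not the actual rt-app source, just a minimal illustration using the standard POSIX calls (CLOCK_MONOTONIC and clock_nanosleep with an absolute deadline) and an assumed 10 ms period:

  #include <stdio.h>
  #include <time.h>

  #define PERIOD_NS (10 * 1000 * 1000)   /* assumed 10 ms task period */

  /* microseconds elapsed from *a to *b */
  static long elapsed_us(const struct timespec *a, const struct timespec *b)
  {
      return (b->tv_sec - a->tv_sec) * 1000000L
             + (b->tv_nsec - a->tv_nsec) / 1000L;
  }

  int main(void)
  {
      struct timespec next, now;
      clock_gettime(CLOCK_MONOTONIC, &next);

      for (;;) {
          /* per-job computation would go here (e.g. a busy loop of N iterations) */

          clock_gettime(CLOCK_MONOTONIC, &now);
          /* finishing time relative to the activation instant of this job */
          printf("dt=%ld us\n", elapsed_us(&next, &now));

          /* sleep until the next absolute activation instant */
          next.tv_nsec += PERIOD_NS;
          while (next.tv_nsec >= 1000000000L) {
              next.tv_nsec -= 1000000000L;
              next.tv_sec++;
          }
          clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
      }
      return 0;
  }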

This is the output when only one instance of the program is running, undisturbed:

  [tommaso@mobiletom:~/tests] sudo rt-app -P 10000 --rt
  Counted in 1 period: 8719
  avg dt=3200, avg qdt=3168, max qdt=3183
  avg dt=3140, avg qdt=3016, max qdt=3028
  [...]

After measuring and printing the number of iterations it can perform in one period (8719, in the example), the program starts executing periodically, performing at each activation 25% of the measured maximum iterations (the default load ratio may be changed with the "-r" option). The "--rt" option makes the program run under the SCHED_RR real-time policy on Linux, both to obtain a precise measurement of the maximum number of iterations per period and to avoid disturbance from other non-RT tasks in the system. The "-P" option specifies the task period, in microseconds.
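Roughly speaking, the effect of the "--rt" option corresponds to switching the calling process to SCHED_RR through the standard POSIX call, which on a stock kernel requires root privileges (hence the sudo in the listing above). A minimal sketch, not the actual rt-app code:

  #include <sched.h>
  #include <stdio.h>

  /* Switch the calling process to SCHED_RR; needs root / CAP_SYS_NICE. */
  static int go_realtime(int prio)
  {
      struct sched_param sp = { .sched_priority = prio };

      if (sched_setscheduler(0, SCHED_RR, &sp) != 0) {
          perror("sched_setscheduler");
          return -1;
      }
      return 0;
  }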

The application does not experience fluctuations of the finishing instant larger than a millisecond. Of course, starting other Linux activities at the default priority (nice 0) does not affect the running rt-app instance.

On the other hand, this is the output while a second instance of rt-app is active, emulating e.g. a second RT application in the system, with a period of 400 ms, corresponding to a maximum count of roughly 348616 iterations per period, and the same load ratio (25%):

  $ rt-app -P 10000 --rt -d 8719
  [...]
  avg qdt=2722, max qdt=2915
  avg qdt=22413, max qdt=75470
  avg qdt=2443, max qdt=2669
  avg qdt=22478, max qdt=75128
  [...]

  $ rt-app -P 400000 --rt -d 348616
  [...]
  avg qdt=100864, max qdt=100961
  avg qdt=100796, max qdt=100961
  avg qdt=100735, max qdt=100961
  [...]

As shown by the experiment, fluctuations in the finishing time of the first application may exceed seven times the application period itself, potentially causing malfunctions unless appropriate countermeasures are taken (for instance, a multimedia application would need at least 7 buffer elements at the output of the periodic thread, which would increase the application latency).

When using AQuoSA resource reservations, it is possible to temporally isolate each instance of rt-app so that it is not disturbed by other Linux applications, while the interference due to other rt-app instances is kept in check through appropriate CPU reservation parameters.

This is the output of two rt-app instances running concurrently, both launched with the "--qos" option, which makes the program reserve for itself 25% of the CPU with a granularity (server period) appropriate for the supplied task period.

  $ rt-app -P 10000 --qos -d 8719
  [...]
  avg dt=3891, avg qdt=3385, max qdt=3628
  avg dt=3924, avg qdt=4179, max qdt=4480
  avg dt=3998, avg qdt=4040, max qdt=4324
  [...]

  $ rt-app -P 400000 --qos -d 348616
  [...]
  avg dt=103931, avg qdt=124807, max qdt=124390
  avg dt=109290, avg qdt=124993, max qdt=126346
  avg dt=112322, avg qdt=124857, max qdt=126346
  [...]

As can be seen, the fluctuations in the finishing time of both rt-app instances are kept in check by the scheduler and remain below the respective periods; the operating system therefore provides the applications with the promised timeliness guarantees.
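For reference, the "--qos" option amounts, roughly, to creating a CPU reservation through the AQuoSA qres library and attaching the calling thread to it. The sketch below is illustrative only: the header name, the qres_params_t fields and the function names follow the libqres interface as we recall it and may differ slightly in your AQuoSA version; budget and period are assumed to be in microseconds, with the budget set to 25% of the server period.

  /* Sketch only: check aquosa/qres_lib.h in your installation for the
   * exact names and types. */
  #include <aquosa/qres_lib.h>

  int reserve_quarter_cpu(unsigned long period_us)
  {
      qres_params_t params = {
          .Q_min = 0,
          .Q     = period_us / 4,   /* budget: 25% of the server period */
          .P     = period_us,       /* server period, in microseconds   */
          .flags = 0,
      };
      qres_sid_t sid;

      if (qres_init() != QOS_OK)
          return -1;
      if (qres_create_server(&params, &sid) != QOS_OK)
          return -1;
      /* attach the calling thread (0, 0) to the new server */
      if (qres_attach_thread(sid, 0, 0) != QOS_OK)
          return -1;
      return 0;
  }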

Improving the Linux result

The example shown above is of course very simple, and the scenario it deals with could be handled just as effectively by using the Linux SCHED_FIFO real-time priorities in a smart way, i.e. by assigning the higher priority to the rt-app instance with the shorter period. However, this technique, also known as Rate Monotonic priority assignment, raises a few issues:
  • from a theoretical standpoint, it has been proved that schedulability is only guaranteed up to a total system load of about 69% (with a sufficiently high number of tasks; see the sketch after this list);
  • from a practical standpoint, the RT priority to be used for one application must be computed by considering the task periods of all other RT applications, which is not impossible, but certainly creates unneeded dependencies among possibly unrelated applications;
  • there is no temporal isolation among the applications: if a job of the highest-priority application has a temporary workload peak, it immediately delays the lower-priority ones.
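The 69% figure is the classical Liu and Layland least upper bound for Rate Monotonic scheduling, U_lub(n) = n * (2^(1/n) - 1), which decreases towards ln 2 (about 0.693) as the number of tasks n grows. An illustrative snippet (not part of rt-app) to see the numbers:

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      /* Liu & Layland bound: U_lub(n) = n * (2^(1/n) - 1) */
      for (int n = 1; n <= 10; n++)
          printf("n=%2d  U_lub=%.3f\n", n, n * (pow(2.0, 1.0 / n) - 1.0));
      printf("limit = ln 2 = %.3f\n", log(2.0));
      return 0;
  }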
When using the AQuoSA framework, on the other hand, each application just has to ask the system for what it needs, and it does not need to know whether other, possibly unrelated, applications are running, unless of course this creates a run-time overload condition. Once the application is admitted, AQuoSA provides scheduling guarantees and temporal isolation: if other applications experience workloads higher than allowed by their respective budgets, this affects those applications only, without introducing unforeseen delays in applications that are behaving well.
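The run-time overload condition mentioned above corresponds to a simple admission test performed when a reservation is requested: the sum of the reserved CPU fractions (budget over period) of all servers must stay within the bandwidth available for reservations. A conceptual sketch (not the actual AQuoSA code; the 0.95 limit is just an example value):

  #include <stddef.h>

  struct reservation { double budget_us, period_us; };

  /* Return 1 if adding new_res keeps the total reserved CPU bandwidth
   * within max_util (e.g. 0.95), 0 otherwise. */
  int admit(const struct reservation *existing, size_t n,
            const struct reservation *new_res, double max_util)
  {
      double u = new_res->budget_us / new_res->period_us;
      for (size_t i = 0; i < n; i++)
          u += existing[i].budget_us / existing[i].period_us;
      return u <= max_util;
  }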
