A Reasoning Framework for Mixed Hard and Soft Real-Time Tasks

In many real-time systems there is a need to address a mix of hard and soft real-time tasks. Hard real-time tasks, such as tasks for monitoring power lines and shutting them down when faults are detected, are those for which missing a deadline can result in catastrophic failure. Soft real-time tasks, such as computing the data for a frame in a real-time video stream, may still have a deadline (the time at which the frame must be displayed), but do not suffer catastrophic failure when a small number of deadlines are missed; for these tasks, missed deadlines merely correspond to reduced QoS. In a mixed system, we would like to ensure that the hard real-time periodic tasks never miss a deadline while minimizing the miss rate (i.e., the fraction of jobs that miss their deadline) of the soft real-time tasks.

Toward this goal, we have been pursuing the application of Real-Time Queueing Theory (RTQT) to predict lateness of the soft real-time tasks in mixed systems. We consider two classes of systems:

  • Static priority systems in which the hard real-time periodic tasks have the highest priority and the aperiodic background tasks run at low priority.
  • Static priority systems with a sporadic server to improve the response time of the aperiodic background tasks.

Experiments on these classes of systems fall into three types: Two-Flow Static Priority, Two-Flow Sporadic Server, and Multi-Flow. The two-flow static priority experiments are the simplest of these and set the baseline for expectations in static priority systems. The two-flow sporadic server experiments evaluate the benefit of using a sporadic server for the background task. The multi-flow experiments investigate the effects of a large number of periodic tasks competing with a single background task (or an aggregate of background tasks). In all experiments we assume that the high-priority periodic tasks are scheduled using RMA priorities and never miss deadlines.

Two Flow Static Priority Experiments

The goal of the two flow static priority experiments was to quantify the factors that affect the deadline performance of a low-priority task sharing the processor with a high-priority task, and to develop a theory for predicting lateness in such systems. Simulation results were compared with two proposed analytical models: an RTQT-based model, and a simpler "degraded processor" model in which the low-priority job is considered to be running on a system with a slowed clock. We found that the "degraded processor" model failed to accurately predict lateness of the background task, except when the job size of the high-priority task was very small compared to that of the background task. The RTQT-based formula, on the other hand, fit well over a wide range of conditions in which all combinations of M/M/1, M/D/1, D/M/1 and D/D/1 models were used for both the high-priority and the low-priority tasks.

The RTQT-based formula we used was:

Pr[low-priority task is late] = e^(-θPD)

where θ is a workload parameter computed from the means and standard deviations of the service times and inter-arrival times of both tasks, P is the fraction of the CPU demand that is due to the low-priority task, and D is the deadline of the low-priority task. Figure 1 shows a representative curve from these experiments comparing the simulation results with the RTQT and degraded processor theory curves.
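The RTQT formula above is straightforward to evaluate once θ is known. The sketch below is illustrative only: the function name is hypothetical, and θ is taken as an input rather than derived here, since the report does not give the closed form used to compute it from the workload statistics.

```python
import math

def rtqt_late_prob(theta, p_low, deadline):
    """RTQT heavy-traffic estimate of the lateness probability of the
    low-priority task: Pr[late] = exp(-theta * P * D).

    theta    -- workload decay parameter (computed separately from the
                means/std-devs of service and inter-arrival times)
    p_low    -- fraction P of total CPU demand due to the low-priority task
    deadline -- relative deadline D of the low-priority task
    """
    return math.exp(-theta * p_low * deadline)
```

As expected of an exponential tail, the predicted miss probability falls off rapidly as the deadline D grows, with θ·P setting the decay rate.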


Figure 1: Comparison of Simulation to Two Proposed Theory Curves

Two Flow Sporadic Server Experiments

The goal of the two-flow sporadic server experiments was to evaluate the effectiveness of using a sporadic server to decrease lateness of a low-priority aperiodic background task. A sporadic server is characterized by a budget and a replenishment period. As long as budget is available, the background task can execute at a temporary priority higher than that of the high-priority task, but when the budget is depleted, it must revert to low priority. The replenishment period controls how long the task must wait for the budget to be replenished.
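The budget/replenishment mechanics described above can be sketched as simple bookkeeping. This is a minimal illustration, not the scheduler used in the experiments; the class and method names are hypothetical, and it replenishes consumed budget one full period after consumption, which is one common sporadic-server replenishment rule.

```python
class SporadicServer:
    """Minimal budget/replenishment bookkeeping for a sporadic server."""

    def __init__(self, budget, period):
        self.capacity = budget     # full budget available per replenishment
        self.period = period       # replenishment period
        self.remaining = budget    # budget currently available
        self.pending = []          # (due_time, amount) replenishments

    def consume(self, now, demand):
        """Serve background demand at elevated priority while budget lasts.
        Returns the amount actually served; the rest runs at low priority."""
        served = min(self.remaining, demand)
        if served > 0:
            self.remaining -= served
            # consumed budget comes back one replenishment period later
            self.pending.append((now + self.period, served))
        return served

    def replenish(self, now):
        """Credit any replenishments that have come due by time `now`."""
        due = sum(a for (t, a) in self.pending if t <= now)
        self.pending = [(t, a) for (t, a) in self.pending if t > now]
        self.remaining = min(self.capacity, self.remaining + due)
```

When the budget runs out mid-job, the job reverts to low priority and must wait out the high-priority task, which is exactly the "wasted budget" effect discussed in the findings below.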

In these experiments we assumed a high-priority periodic task and a low-priority aperiodic background task, and compared the performance of the system with and without a sporadic server under heavy-traffic conditions. Figure 2 shows the results for a representative experiment: the red lines indicate cases where a sporadic server is not used, while the blue lines indicate cases where a sporadic server is used. In all cases, the mean inter-arrival time of the background task was 1, and only the period (Pp) of the periodic task was varied while keeping the workload of all tasks constant. Note that when the period of the high-priority task is one (Pp=1), there is no performance difference between using and not using the sporadic server.


Figure 2: Effect of Sporadic Server on Lateness

The major findings of these experiments were:

  • Miss rate of the background task is minimized when the replenishment period of the sporadic server is as long as possible (i.e., equal to the period of the periodic task). This is because a larger budget can then be used, which makes it less likely that the background task will waste budget. Budget is wasted when there is not enough remaining budget to complete a background job: the job consumes the budget but is then forced to wait anyway, so the consumed budget yields a negligible reduction in latency.
  • While holding the utilization of all tasks constant, as the period (Pp) of the high-priority task increases relative to the mean inter-arrival time of the background task, the miss rate of the background task decreases when a sporadic server is used and increases when a sporadic server is not used. The increase in miss rate without the sporadic server is due to the longer blocking periods causing all background arrivals to wait longer for the CPU. The decrease in miss rate with the sporadic server is likely due to the drop in wasted budget as the sporadic server replenishment period is increased (which is made possible by the increased period of the high-priority task).
  • The sporadic server is more effective when most of the traffic is periodic. This is because the utilization of the sporadic server goes down as the fraction of traffic that is background goes down. For example, if the high-priority utilization is 0.8 and the background utilization is 0.1, the utilization of the sporadic server will be 0.1/0.2=0.5, since 0.2 of the CPU is available to the sporadic server. On the other hand, if the high-priority utilization is 0.1 and the background utilization is 0.8, the utilization of the sporadic server will be 0.8/0.9=0.89. Even though a utilization of 0.9 is available to the sporadic server, there is much higher contention for the server in this case.
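The utilization arithmetic in the last finding can be captured in a one-line helper (the function name is ours, for illustration): the sporadic server's own utilization is the background demand divided by the CPU share left over by the high-priority task.

```python
def sporadic_server_utilization(u_high, u_background):
    """Utilization of the sporadic server itself: background demand
    divided by the CPU fraction not claimed by the high-priority task."""
    available = 1.0 - u_high
    return u_background / available
```

For the two examples above: a (0.8, 0.1) split gives a lightly loaded server at 0.5, while a (0.1, 0.8) split loads the server to about 0.89, with correspondingly higher contention.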

Multi-Flow Experiments

In our multi-flow experiments, we considered cases where there are multiple periodic tasks and a single aperiodic background task (or equivalently, an aggregate of aperiodic tasks). We also considered non-heavy-traffic cases. We found that interactions between the periods and computation times of the periodic tasks can have a significant effect on the performance of the background task.

To simplify the problem, we first determine lateness for background tasks with zero execution time. This decouples the blocking effects of the higher-priority periodic tasks from the queueing effects of the background task itself. In this simplified case, we found that the miss rate can be expressed by the piece-wise linear equation:

Pr[miss] = (Σ_{i=1..N} n_i · max(b_i − D, 0)) / H

where H is the length of the hyper-period, n_i and b_i are the number of occurrences and the length of the i-th distinct busy period within the hyper-period, and D is the deadline of the background task. The main idea of this equation is that the probability of missing a deadline D is the probability of arriving in a busy period with more than D time units remaining until the end of the busy period. Figure 3 shows this theory curve plotted against a simulation showing near perfect agreement. The next steps in this work are to relax the zero-execution-time assumption of the background task, and to consider the case with a sporadic server.
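The piece-wise linear formula is easy to evaluate once the busy periods of the hyper-period are enumerated. The sketch below assumes, as in the text, a zero-execution-time background task; the function name and input encoding are ours.

```python
def miss_probability(busy_periods, deadline, hyper_period):
    """Miss rate of a zero-execution-time background task.

    An arrival misses its deadline D exactly when it lands in a busy
    period with more than D time units left before the busy period ends,
    so the miss probability is the fraction of the hyper-period H covered
    by such 'late' portions of busy periods.

    busy_periods -- list of (n_i, b_i) pairs: the count and length of
                    each distinct busy period within one hyper-period
    """
    late_time = sum(n * max(b - deadline, 0.0) for n, b in busy_periods)
    return late_time / hyper_period
```

For instance, two busy periods of length 3 and one of length 5 in a hyper-period of 20, with deadline 2, give a miss rate of (2·1 + 1·3)/20 = 0.25; the formula is piece-wise linear in D because each max(b_i − D, 0) term vanishes once D exceeds b_i.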


Figure 3: Multi-Flow Example