In many real-time systems there is a need to address a mix of hard and soft real-time tasks. For example, hard real-time tasks may include tasks that monitor power lines and shut them down when faults are detected; there, missing a deadline can result in catastrophic failure. Soft real-time tasks, such as computing the data for a frame in a real-time video stream, may still have a deadline (the time at which the frame must be displayed), but do not suffer catastrophic failure in the event of a small number of missed deadlines. For these applications, missed deadlines merely correspond to reduced QoS. In a mixed system, we would like to ensure that the hard real-time periodic tasks never miss a deadline while minimizing the miss rate (i.e., the fraction of jobs that miss their deadline) of the soft real-time tasks.
Toward this goal, we have been pursuing the application of Real-Time Queueing Theory (RTQT) to predicting lateness of the soft real-time tasks in mixed systems. We consider two classes of systems:
Experiments on these classes of systems can be categorized into three types: Two-Flow Static Priority, Two-Flow Sporadic Server, and Multi-Flow. The two-flow static priority experiments are the simplest of these and set the baseline for expectations in static priority systems. The two-flow sporadic server experiments focus on evaluating the benefit of using a sporadic server for the background task. The multi-flow experiments investigate the effects of a large number of periodic tasks sharing the processor with a single background task (or an aggregate of background tasks). In all experiments we assume that the high-priority periodic tasks are scheduled using RMA priorities and never miss deadlines.
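The RMA (rate-monotonic) priority assignment assumed for the periodic tasks can be sketched as follows; the task names and periods are illustrative, not taken from the experiments:

```python
# Minimal sketch of rate-monotonic (RMA) priority assignment:
# the task with the shortest period receives the highest priority.
def rma_priorities(tasks):
    """tasks: list of (name, period) pairs.
    Returns (name, priority) pairs, with 0 the highest priority."""
    ordered = sorted(tasks, key=lambda t: t[1])  # sort by period, ascending
    return [(name, prio) for prio, (name, _) in enumerate(ordered)]

# Illustrative task set: the 2-unit-period task gets the top priority.
print(rma_priorities([("monitor", 5), ("control", 2), ("log", 10)]))
# -> [('control', 0), ('monitor', 1), ('log', 2)]
```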
The goal of the two-flow static priority experiments was to quantify the factors that affect the performance against deadlines of a low-priority task sharing the processor with a high-priority task, and to develop a theory for predicting lateness in such systems. Simulation results were compared with two proposed analytical models: an RTQT-based model and a simpler "degraded processor" model, in which the low-priority job is considered to be running on a system with a slowed clock. We found that the "degraded processor" model failed to accurately predict lateness of the background task, except when the job size of the high-priority task was very small compared to that of the background task. The RTQT-based formula, on the other hand, fit well over a wide range of conditions in which all combinations of M/M/1, M/D/1, D/M/1 and D/D/1 models were used for both the high-priority and the low-priority tasks.
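The "degraded processor" idea can be sketched numerically. The sketch below assumes M/M/1 behavior for the low-priority task and simply scales its service rate by the CPU share left over by the high-priority task; all rates and the deadline are illustrative, and this is only the approximation being tested, not the RTQT model:

```python
import math

# Sketch of the "degraded processor" approximation: model the low-priority
# task as an M/M/1 queue on a processor whose clock is slowed by the
# high-priority task's utilization rho_high. (Assumed M/M/1 case only.)
def degraded_processor_late_prob(lam, mu, rho_high, deadline):
    """lam: arrival rate of the low-priority task
    mu: its service rate on a dedicated processor
    rho_high: fraction of the CPU consumed by the high-priority task
    deadline: relative deadline of the low-priority task"""
    mu_eff = mu * (1.0 - rho_high)   # effective service rate on the slowed clock
    if lam >= mu_eff:
        return 1.0                   # unstable: lateness probability tends to 1
    # For M/M/1, the sojourn time is exponential with rate (mu_eff - lam).
    return math.exp(-(mu_eff - lam) * deadline)

# Illustrative parameters: half the CPU lost to the high-priority task.
print(degraded_processor_late_prob(lam=0.5, mu=2.0, rho_high=0.5, deadline=4.0))
```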
The RTQT-based formula we used was:
Pr[low-priority task is late] = e^(-θPD)
where θ is a workload parameter computed from the mean and standard deviation of the service time and inter-arrival time of both tasks, P is the fraction of the CPU demand that is due to the low-priority task, and D is the deadline of the low-priority task. Figure 1 shows a representative curve from these experiments comparing the simulation results with the RTQT and degraded processor theory curves.
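The RTQT formula itself is a one-liner; the sketch below evaluates it directly. The derivation of θ from the task statistics is not reproduced here, so the θ, P, and D values are purely illustrative:

```python
import math

# Sketch of the RTQT-based lateness formula quoted above:
#   Pr[late] = exp(-theta * P * D)
# theta's computation from the task means/standard deviations is not shown;
# the value passed in is illustrative.
def rtqt_late_prob(theta, p, deadline):
    """theta: workload parameter; p: low-priority fraction of CPU demand;
    deadline: deadline of the low-priority task."""
    return math.exp(-theta * p * deadline)

print(rtqt_late_prob(theta=0.8, p=0.25, deadline=10.0))  # exp(-2) ~ 0.135
```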
The goal of the two-flow sporadic server experiments was to evaluate the effectiveness of using a sporadic server to decrease lateness of a low-priority aperiodic background task. A sporadic server is characterized by a budget and a replenishment period. As long as budget is available, the background task can execute at a temporary priority higher than that of the high-priority task, but when the budget is depleted, it must revert to low priority. The replenishment period controls how long the task must wait for the budget to be replenished.
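The budget/replenishment mechanics described above can be sketched as follows. This is a deliberately simplified model (one full replenishment per period, unit-time ticks) rather than the per-consumption replenishment rules of a full sporadic server, and the parameters are illustrative:

```python
# Simplified sketch of sporadic-server budget accounting. A real sporadic
# server schedules a separate replenishment for each chunk of consumed
# budget; here the budget is simply refilled once per replenishment period.
class SporadicServer:
    def __init__(self, budget, period):
        self.capacity = budget          # full budget per replenishment period
        self.budget = budget            # currently available budget
        self.period = period
        self.next_replenish = period    # time of the next replenishment

    def tick(self, now, wants_to_run):
        """Advance one unit-time tick; return True if the background task may
        run at the temporarily raised priority during this tick."""
        if now >= self.next_replenish:  # simplified: refill once per period
            self.budget = self.capacity
            self.next_replenish += self.period
        if wants_to_run and self.budget > 0:
            self.budget -= 1            # consume budget while running raised
            return True
        return False                    # budget depleted: revert to low priority

# Illustrative run: budget 2, period 5; raised priority for two ticks,
# then low priority until the replenishment at t=5.
ss = SporadicServer(budget=2, period=5)
print([ss.tick(t, True) for t in range(6)])
# -> [True, True, False, False, False, True]
```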
In these experiments we assumed a high-priority periodic task and a low-priority aperiodic background task, and we compared the performance of the system with and without a sporadic server under heavy-traffic conditions. Figure 2 shows the results for a representative experiment: the red lines indicate cases where a sporadic server is not used, while the blue lines indicate cases where one is. In all cases, the mean inter-arrival time of the background task was 1, and only the period (Pp) of the periodic task was varied while keeping the workload of all tasks constant. Note that when the period of the high-priority task is one (Pp=1), there is no performance difference between using and not using the sporadic server.
The major findings of these experiments were:
In our multi-flow experiments, we considered cases where there are multiple periodic tasks and a single aperiodic background task (or equivalently, an aggregate of aperiodic tasks). We also considered non-heavy-traffic cases. We found that interactions between the periods and computation times of the periodic tasks can have a significant effect on the performance of the background task.
To simplify the problem, we first determine lateness for background tasks with zero execution time. This decouples the effects of the blocking periods caused by the higher-priority periodic tasks from the queueing effects of the background task. In this simplified case, we found that the miss rate can be expressed by the piece-wise linear equation:
Pr[miss] = ( Σ_{i=1}^{N} n_i · max(b_i - D, 0) ) / H
where H is the length of the hyper-period, n_i and b_i are the number of occurrences and the length of the i-th distinct busy period occurring in the hyper-period, and D is the deadline of the background task. The main idea of this equation is that the probability of missing a deadline D is the probability of arriving in a busy period with more than D time units remaining until the end of that busy period. Figure 3 shows this theory curve plotted against a simulation, showing near-perfect agreement. The next steps in this work are to relax the zero-execution-time assumption on the background task and to consider the case with a sporadic server.
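The piece-wise linear miss-rate formula above translates directly into code. The busy-period data below are illustrative, not drawn from the experiments:

```python
# Sketch of the zero-execution-time miss-rate formula:
#   Pr[miss] = sum_i n_i * max(b_i - D, 0) / H
# An arrival misses deadline D iff it lands in a busy period with more than
# D time units left, i.e. in the first (b_i - D) units of that busy period.
def miss_rate(busy_periods, deadline, hyper_period):
    """busy_periods: list of (n_i, b_i) pairs, where n_i is the number of
    occurrences and b_i the length of the i-th distinct busy period in one
    hyper-period of length hyper_period."""
    return sum(n * max(b - deadline, 0.0)
               for n, b in busy_periods) / hyper_period

# Illustrative case: two busy periods of length 3 and one of length 6
# in a hyper-period of length 20, with deadline D = 2.
print(miss_rate([(2, 3.0), (1, 6.0)], deadline=2.0, hyper_period=20.0))
# -> 0.3
```

Note that the formula is piece-wise linear in D: each busy period contributes nothing once D exceeds its length b_i.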