Status of the Bound-T time and stack analyser

by Niklas Holsti, Tidorum Ltd

Bound-T from Tidorum Ltd was, and to some extent still is, a software tool that uses static analysis of machine code to compute upper bounds on the worst-case execution time (WCET) and the stack usage of embedded programs.

At present, Tidorum is not actively developing Bound-T. There are both commercial and technical reasons for this (see below for further explanation). The tool, such as it is, is currently open-source, no-cost SW that can be downloaded from the web in source-code form, and (for some host platforms) also in binary executable form.

The rest of this page explains the current technical status of Bound-T, with a focus on the problems that made Tidorum stop (or, hopefully, only pause) its development.

For a more positive description of Bound-T, see this introduction from earlier times. The full details can be found in the Bound-T documentation.

WCET analysis: a brief introduction

The problem of finding an upper bound on the execution time of a target program is usually divided into three sub-problems:

1. control-flow analysis: finding the possible execution paths through the program, including bounds on the number of loop iterations;
2. processor-behaviour (low-level) analysis: finding an upper bound on the execution time of each small part of the program, typically each basic block, on the target processor;
3. bound calculation: combining the results of the first two sub-problems to find the execution path with the largest total time, which gives the WCET bound (illustrated below).
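
As an illustration of the third sub-problem, the bound calculation is commonly formulated as an integer linear program, the Implicit Path Enumeration Technique (IPET). The sketch below is the generic textbook formulation, not a description of how any particular tool implements it: x_i is the number of times basic block i is executed, t_i is an upper bound on its execution time, f_e is the number of times control-flow edge e is taken, n is a loop bound, and a loop-bound constraint limits the loop body's execution count to n times the number of entries into the loop.

    \text{WCET bound} \;=\; \max \sum_{i} t_i \, x_i
    \quad \text{subject to} \quad
    x_{\text{entry}} = 1 ,
    \qquad x_i \;=\; \sum_{e \,\in\, \text{in}(i)} f_e \;=\; \sum_{e \,\in\, \text{out}(i)} f_e ,
    \qquad x_{\text{loop body}} \;\le\; n \cdot x_{\text{loop entry}}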

The WCET analysis field and its tools are currently divided into three main approaches to this problem and its sub-problems:

1. the fully static approach, which analyses both the execution paths and the processor's timing behaviour without running the program;
2. the measurement-based (hybrid) approach, which measures the execution times of small program parts on the real hardware and combines them using a static analysis of the program structure;
3. the probabilistic approach, which treats execution times statistically and estimates the probability that a given execution-time bound is exceeded.

Bound-T uses the fully static approach. The tool aiT, from AbsInt GmbH, also uses this approach. The tool RapiTime, from Rapita Systems Ltd, uses the measurement-based approach and can also be used with the probabilistic approach, given suitable randomization methods.

Current and future scope of fully static WCET analysis

The increasing complexity and increasingly dynamic behaviour of processors pose a serious obstacle to fully static WCET analysis tools. I may be a little behind the times, as I have not actively worked in the field for a few years, but as I understand the state of the art, fully static WCET analysis is practical, and can give reasonably good results, for in-order, single-core processors with separate instruction and data caches, on one or a few cache levels.

Processors and applications that rely heavily on data caches, or on unified I+D caches, can be analysed, but the WCET bounds are likely to be quite pessimistic and may even be unusably pessimistic. Some kinds of branch prediction and speculation can be analysed, but current processors are exceeding the limits of analysis there, too.

Fully static WCET analysis methods for out-of-order processors and multi-core processors are subjects of research but have not, as I understand it, yet matured into practical tools for SW development. In particular, analysis methods for multi-core systems usually assume a static, time-based arbitration of accesses to shared resources, such as buses, shared cache levels, or global memories, which, I believe, would be quite constraining for most applications.

Technical problems in Bound-T

Moving on from the generic problem of processor complexity to the specific problems of Bound-T, the tool is unfortunately in a kind of blind alley, or dead end, with respect to its analysis methods, for three main reasons:

These problems have slowly accumulated in Bound-T over the years, as the tool has been extended to base more and more deductions on such unsound analyses. Originally, Bound-T was meant to do a local, opportunistic, simple but safe analysis of control flow, supported by user-written assertions on loop bounds and the like whenever the analysis failed to find such bounds. However, to make the analysis "work" for more programs, partly in response to requests from users, I extended Bound-T to use these analyses more and more extensively, which unfortunately also made it more and more likely that their unsoundness would show up as wrong deductions.

For example, when Bound-T analyses a SPARC program, it is not uncommon for the tool to report that it can find no feasible execution path! This problem usually results from a conflict between the constant-propagation step of the analysis, which treats variables as bounded-size binary numbers, and the PA-based (Presburger arithmetic) analysis, in which variables are unbounded integers.
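
To make this conflict concrete, here is a small, invented C fragment (a hypothetical illustration, not taken from any program actually analysed with Bound-T). In machine arithmetic, and in a constant-propagation analysis that models variables as bounded bit-vectors, the variable x wraps around to zero, so the branch is taken; in a model where variables are unbounded integers, x becomes 256, so the same branch can be judged infeasible, and the two analyses contradict each other.

    /* Hypothetical illustration of bounded versus unbounded arithmetic.   */
    /* Machine/bit-vector view: x wraps around to 0, the branch is taken.  */
    /* Unbounded-integer view: x becomes 256, the branch looks infeasible. */

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t x = 255;

        x = x + 1;          /* 8-bit unsigned arithmetic wraps: x == 0 */

        if (x == 0) {       /* really taken on the target */
            puts("wrapped around");
        }
        return 0;
    }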

It became clear to me that further development of Bound-T would have to abandon the PA analysis, or at least add some new form of preliminary analysis to ensure that the PA analysis is applied only where it is sound. Unfortunately, the PA analysis is currently the only way Bound-T can find loop bounds, which is an essential function for WCET analysis, so I was stuck, and I was also running out of money. Hence, development of Bound-T stopped.

In contrast, the aiT tool from AbsInt is based on a sound method (abstract interpretation) that also provides pointer analysis and detects possible aliasing (or so I believe).

Recommendations

Assuming that you, dear reader, need a WCET analysis tool, what should you do? Here are my suggestions.

For fully static WCET analysis of the more complex processors, I recommend that you contact AbsInt GmbH. Their tool, aiT, is the best in this area, I believe, and is commercially supported and widely used. However, while aiT supports many different target processors, your favourite processor may not be one of them.

For measurement-based WCET analysis (more correctly called "measurement-based timing analysis", MBTA), I recommend the RapiTime tool from Rapita Systems Ltd.

RapiTime is a "hybrid" WCET tool which combines measurement and analysis. An analysis of the program structure divides the program into small pieces (basic blocks). The execution times of these pieces are measured on the actual target processor, and the measured times are combined with the analysis of the program structure to determine the longest execution path and thus an approximate WCET bound. RapiTime can therefore be applied to any target processor, with no need to model the detailed architecture of the processor, as long as there is some way to measure the execution times of the basic blocks -- for example, a time-tagged instruction-tracing facility.

The drawback is that the numbers from RapiTime are not, in principle, guaranteed upper bounds on the WCET, but estimates of the WCET, and the quality of these estimates depends both on the test suite that was used to measure the execution times and on the nature of the target processor. For example, if the target processor has caches, the fraction of cache misses in the test suite may have been smaller than can occur in the actual application, and then the WCET estimate may be smaller than the actual WCET.
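
As a toy illustration of this hybrid principle (my own sketch, not RapiTime's actual algorithm; all block names and times are invented), suppose the structural analysis finds an entry block, a loop of at most N iterations containing an if-else, and an exit block, and suppose the largest observed execution time of each block has been measured. The structure then combines the measured maxima into an estimated bound:

    /* Toy sketch of hybrid (measurement-based) timing estimation:       */
    /* measured per-block maxima are combined according to the program   */
    /* structure found by static analysis.  All numbers are invented.    */

    #include <stdio.h>

    #define N 100u   /* loop bound, assumed to come from structural analysis */

    static unsigned max_u(unsigned a, unsigned b) { return a > b ? a : b; }

    int main(void)
    {
        /* Largest execution time observed for each basic block (cycles). */
        unsigned t_entry = 12u, t_cond = 5u, t_then = 40u, t_else = 25u, t_exit = 8u;

        /* Structure: entry; N iterations of (condition + worse branch); exit. */
        unsigned estimate = t_entry + N * (t_cond + max_u(t_then, t_else)) + t_exit;

        printf("estimated WCET: %u cycles\n", estimate);
        return 0;
    }

Note that such a combination is only as good as the measured maxima: if the test runs never provoke the worst cache behaviour, the estimate falls below the true WCET, which is exactly the drawback described above.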

Conclusion

I hope this has clarified the current state of Bound-T and given you some alternatives. I will be glad to answer any questions on these issues. See the contact information.

