[Spark2014-discuss] [M122-037] specifying SPARK sources for analysis

Tucker Taft taft at adacore.com
Thu Mar 14 15:30:13 CET 2013


On 3/14/13 9:41 AM, Arnaud Charlet wrote:
>> Our general conclusion was to use a mechanism more like
>> suppress/unsuppress or assertion policy or warning off/on than an
>> aspect to control which features are permitted at any point. The aspect
>> approach is too tied to declarations, whereas what is needed is something
>> which can be turned off and on at various levels of granularity.
>
> Can you clarify what use cases correspond to this need?

For example, you want to have a call to a logging function,
but you don't want to have to analyze the "correctness" of
that call.  Or you want to have an exception handler, but you
don't want its presence to prevent the rest of the
subprogram from being analyzed.
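
To make that concrete, here is a purely illustrative sketch.  The
pragma name SPARK_Analysis and its statement-level placement are
invented for this example (not a proposal), and Log is assumed to be
a logging procedure declared elsewhere:

   procedure Update (X : in out Integer) is
   begin
      X := X + 1;

      --  We would like to skip analysis of this one call without
      --  taking the whole subprogram out of SPARK.
      pragma SPARK_Analysis (Off);   --  invented marker
      Log ("updated X");             --  assumed logging procedure
      pragma SPARK_Analysis (On);    --  invented marker

   exception
      --  The handler itself would not be analyzed, but its presence
      --  should not prevent analysis of the statements above.
      when others =>
         Log ("unexpected exception in Update");
   end Update;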

The other issue is that there might be multiple
levels, not just a binary "in" vs. "out" of SPARK:
projects might impose stricter requirements than those
associated with SPARK 2014, perhaps returning to a more
"classic" SPARK with no recursion, no dynamic-sized
arrays, etc.

>
> Also, what levels of granularity have been identified?
>
> Certainly, we do not want to allow a hide-like capability, and we do not want
> to start having to do proof on parts of subprograms only, so we need something
> more "robust/controlled" than a general suppress/unsuppress mechanism.

Apparently the current users of SPARK make use of the "hide" capability.
It seems feasible to "skip over" some statements during the analysis
of a subprogram, which isn't quite the same as doing a proof on "parts"
of a subprogram.

As far as controlling where the tool would *attempt* to do any proof
at all, I thought that would be at a higher level, perhaps more in
a configuration file than in the source code.  The pragmas in the
source code were more about controlling what features the programmer
is allowed to use, much like pragma Restrictions, but with a more
flexible local override capability.
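
For instance (just a sketch: "Restriction_Profile" and the profile
names are invented here to illustrate the shape of such a mechanism,
while No_Recursion and No_Allocators are existing Ada restriction
identifiers):

   --  Project-wide, e.g. in a configuration-pragmas file:
   pragma Restriction_Profile (Classic_SPARK);

   --  Part of that ground is covered by existing Ada restrictions:
   pragma Restrictions (No_Recursion);
   pragma Restrictions (No_Allocators);

   --  And a local, Unsuppress-like override in one particular unit:
   package Logging is
      pragma Restriction_Profile (SPARK_2014);
      procedure Log (Msg : String);
   end Logging;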

>
> In other words, we should keep the granularity at the package or subprogram
> level (with I guess some provision to handle elaboration code/procedure),
> and potentially allow granularity at the type level, although this part isn't
> clear to me.

I agree as far as specifying whether an attempt at a proof should be
made at all, but I saw that as belonging in a separate configuration
file.  As far as controlling what features are allowed in any piece
of source code, that seems appropriate for a control mechanism in
the source text itself, based on a pragma.
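
As a sketch of that separation, something along these lines in a
project or configuration file (the package and attribute names below
are invented, not an existing syntax) could say where proof is even
attempted:

   project Flight_Software is
      --  Invented package/attributes, for illustration only
      package Proof is
         for Analyzed_Units use ("Navigation", "Autopilot");
         for Skipped_Units  use ("Logging");
      end Proof;
   end Flight_Software;
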
>
> Arno
>

This feels like a topic deserving of a "SPARK Issue" as we don't
seem to be converging just through e-mail back and forth.

-Tuck

