[Spark2014-discuss] [M122-037] specifying SPARK sources for analysis

Arnaud Charlet charlet at adacore.com
Thu Mar 14 15:45:42 CET 2013


> For example, you want to have a call on a logging function,
> but you don't want to have to analyze the "correctness" of
> the call.

So what's the problem here? The logging function would be outside SPARK, and
the other procedure would be in.
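As a minimal sketch of that split (assuming the SPARK_Mode aspect that SPARK 2014 ultimately adopted; the package and procedure names here are purely illustrative):

```ada
package Logging with SPARK_Mode => Off is
   --  Outside SPARK: the body is free to use I/O, exceptions, etc.
   procedure Log (Msg : String);
end Logging;

with Logging;

procedure Do_Work with SPARK_Mode => On is
begin
   --  Analyzed by the SPARK tools; the call is checked only against
   --  Log's declared contract, not against its (non-SPARK) body.
   Logging.Log ("starting");
end Do_Work;
```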

> Or you want to have an exception handler but you
> don't want the presence of the exception handler to prevent
> the rest of the subprogram from being analyzed.

That's a mistake and is inherently unsafe, since you can do all kinds
of weird stuff in an exception handler.

It is much better and cleaner to have a wrapper if you want to add an
exception handler: put the wrapper outside SPARK and the other
subprogram in SPARK.
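A sketch of that wrapper pattern, again assuming the SPARK_Mode aspect (the subprogram names and the recovery policy are illustrative only):

```ada
--  In SPARK: the analyzable core, with no exception handler.
procedure Process (X : in out Integer) with SPARK_Mode => On is
begin
   X := X + 1;
end Process;

--  Outside SPARK: a thin wrapper that adds the handler, so the
--  "weird stuff" stays clearly outside the analysis boundary.
procedure Safe_Process (X : in out Integer) with SPARK_Mode => Off is
begin
   Process (X);
exception
   when others =>
      X := 0;  --  recovery policy lives only in the wrapper
end Safe_Process;
```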

> The other issue has to do with how there might be multiple
> levels, not just a binary "in" vs. "out" of SPARK, where
> projects might impose stricter requirements than those
> associated with SPARK 2014, perhaps returning to a more
> "classic" SPARK with no recursion, no dynamic-sized
> arrays, etc.

That's also a mistake. It's already hard enough to have a clean model
for mixing Ada and SPARK; trying to have a model for mixing SPARK1
with SPARK2 with SPARK3 is just too much complexity, unrealistic
and out of scope at this stage.

We want one language: SPARK 2014. If people want to subset SPARK 2014
to come closer to SPARK 2005 (the term "Classic SPARK" is really a
badly chosen term IMO: Classic SPARK will be SPARK 2014 a few years
from now, so better to avoid this term now), that's fine; this can be
achieved via a regular restrictions mechanism, which does not create
a completely new language with completely new rules.

In other words, we need only SPARK 2014 and Ada, with clear boundaries
and clear interactions between both.

Then if users want to subset SPARK 2014 that's fine, it won't change
the semantics which will still be those of SPARK 2014 for the subset
chosen.
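Such a subset could be expressed with ordinary configuration pragmas, for example in a project-wide file such as gnat.adc. The restriction identifiers below are standard Ada (RM H.4) ones chosen to approximate the "classic" profile mentioned earlier; which exact set a project would use is an open choice:

```ada
--  Project-wide configuration pragmas restricting SPARK 2014
--  toward a more "classic" profile, without changing its semantics.
pragma Restrictions (No_Recursion);
pragma Restrictions (No_Allocators);
pragma Restrictions (No_Dispatch);
```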

> Apparently the current users of SPARK make use of the "hide" capability.
> It seems feasible to "skip over" some statements during the analysis
> of a subprogram, which isn't quite the same as doing a proof on "parts"
> of a subprogram.

And that's what we do NOT want in SPARK 2014.

Actually the hide mechanism is pretty restricted in what you can do, and
my understanding (talking to Jon this morning) is that hide mainly allows
you to hide:
- subprogram
- exception handlers
- elaboration code

and NOT some random piece of code.

So with this in mind, we have:

- subprograms: OK, we want to be able to handle in/out SPARK at the
  subprogram level
- exception handlers: not OK; use two subprograms instead, to keep the
  boundaries and semantics clear
- elaboration code: the handling of the special "elaboration procedure"
  may indeed need some special casing; I'm not sure how much thought has
  been put into elaboration yet, but to me that's mainly orthogonal, and
  elaboration can be viewed as a special case of a subprogram (it's an
  implicit subprogram).

> As far as controlling where the tool would *attempt* to do any proof
> at all, I thought that would be at a higher level, perhaps more in
> a configuration file than in the source code.  The pragmas in the
> source code were more about controlling what features the programmer
> is allowed to use, much like pragma Restrictions, but with a more
> flexible local override capability.

Here I suspect there's a discrepancy of intent between people: some
people do want to be able to specify when to do proof or not, and that's
what we mean by in/out SPARK. That's a major discrepancy, and it
explains why we seem to be in disagreement while we probably aren't actually.

In other words, different people seem to have widely different views on what
we are trying to achieve here, and as a result there's a discrepancy in
terms of scope and usability.

> This feels like a topic deserving of a "SPARK Issue" as we don't
> seem to be converging just through e-mail back and forth.

I don't see how a "SPARK Issue" will solve anything. Instead we need to
resolve this during a concall IMO.

I won't be available on March 19, but will be on March 26.

I'd suggest I take part in the SPARK concall on March 26, and that we
put the following two topics on the agenda for this concall:

- M122-037 Specifying what should be in SPARK and what should not, and
  clarifying that we do not need a hide mechanism
- M226-014 Restrictions/Profiles

These two topics should have been kept orthogonal, since they are quite
different in scope, but apparently they have been mixed together,
leading to the current apparent confusion.

Arno
