[Spark2014-discuss] [M122-037] specifying SPARK sources for analysis
moy at adacore.com
Thu Mar 14 16:42:28 CET 2013
On 03/14/2013 04:31 PM, Tucker Taft wrote:
> Perhaps if we dropped the terms "SPARK" and "non SPARK" and focused
> on whether we want to "attempt proof" or not, then this would clarify
> things. "SPARK" means too many things at this point, and using it
> as a way to turn on or off proof seems like it has pushed things to
> a breaking point. How about "Proof => On" or "Proof => Off". This
> would be to some degree orthogonal to the "Code Profile" notion.
> I think this might help to defuse the conflict.
I would find that harder to explain to people, because:
- as you say, which analyses are attempted on which code is
ultimately a tool configuration issue. For example, do you want
flow analysis, proof of absence of run-time errors, proof of
contracts? That depends on the switches passed to gnatprove, so
code marked Proof => On might only be flow analyzed.
- as you also said, we don't want these code annotations to vary with
tool versions. The SPARK/non-SPARK boundary is clearly set by the
SPARK 2014 LRM, of which there will be only one version. I don't
understand what you mean by "SPARK means too many things"; to me it
means only the language defined in the LRM. It should not mean the
various levels defined through code policies.
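To make the two positions concrete, here is a hypothetical sketch in
Ada aspect syntax. Neither aspect name below is settled syntax from
this thread; both are assumptions used purely to illustrate the
alternatives being debated:

```ada
--  Hypothetical sketch only: the aspect names "SPARK" and "Proof"
--  are illustrative, not agreed syntax.

--  Option 1: mark the language subset the code is written in.
--  Which analyses actually run on it (flow analysis, proof of
--  absence of run-time errors, proof of contracts) is then chosen
--  separately, via switches passed to gnatprove.
procedure Incr (X : in out Integer)
  with SPARK;           --  "this unit is in the SPARK subset"

--  Option 2: mark the intended analysis directly, as Tucker
--  suggests, orthogonally to the "Code Profile" notion.
procedure Decr (X : in out Integer)
  with Proof => On;     --  "attempt proof on this unit"
```

Under option 1, the annotation is stable across tool versions because
the subset is fixed by the LRM; under option 2, the same marked unit
might still only be flow analyzed, depending on the tool invocation.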
I don't really understand why the simple SPARK / non-SPARK distinction
is not relevant. I see there are issues with tool configuration and
code profiles, but to me they only add to, rather than replace, the
SPARK / non-SPARK distinction.
Yannick Moy, Senior Software Engineer, AdaCore