[Spark2014-discuss] [M122-037] specifying SPARK sources for analysis

Cyrille Comar comar at adacore.com
Thu Mar 14 15:17:11 CET 2013


On 14/03/2013 15:04, Yannick Moy wrote:
> On 03/14/2013 02:41 PM, Cyrille Comar wrote:
> 
>> just for the record, I fundamentally disagree with this conclusion. A
>> lexical on/off mechanism doesn't make any sense to me.
> 
> The main interest of using SPARK instead of Ada is for proofs, and the
> finest grain at which we know how to specify/verify properties is the
> subprogram. So I don't see what would be the use of a mechanism that
> allows stating "these statements are in SPARK".

We are in agreement.
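
For context, the subprogram-level properties referred to above are written in SPARK 2014 using Ada 2012 contract aspects. A minimal sketch (the names here are invented for the example):

   --  Pre/Post are attached to the subprogram declaration; this is
   --  the unit of specification that the proof tools verify.
   function Increment (X : Integer) return Integer
     with Pre  => X < Integer'Last,
          Post => Increment'Result = X + 1;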

> To me, two solutions make sense:
> - an aspect on declarations
> - a restricted on/off mechanism (for example, a specific pragma
> SPARK_Policy) which appears either as configuration pragma, or at
> library level in a package, with the intent that it applies to the
> declarations that follow.

But the latter is not an on/off mechanism: a configuration pragma
applying to a library-level unit is pretty much equivalent to an aspect
on the corresponding package entity.
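
As a sketch of that equivalence (SPARK_Policy is the pragma name
proposed above, not implemented syntax, and the aspect name is
hypothetical):

   --  Form 1: the proposed configuration pragma at the start of a
   --  unit, applying to the library-level declarations that follow.
   pragma SPARK_Policy;

   package Counters is
      procedure Bump;
   end Counters;

   --  Form 2: the same intent stated as a hypothetical aspect on
   --  the package entity itself.
   package Counters with SPARK => True is
      procedure Bump;
   end Counters;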

>> Using an aspect
>> could make more sense to me but doesn't seem very urgent. I still think
>> that defining the scope of the formal analysis is mainly an issue of
>> providing the right input files and parameters to the toolset.
> 
> I'm not happy currently with the "detect" mode in which GNATprove works.
> I think users need a way to state exactly which parts of a unit should
> be in SPARK (and should be analyzed by GNATprove), and which are not.
> And I'd rather fix this in the coming weeks than at the end of the year.

Why aren't you happy with it? Spelling that out will help us make progress...

>> Furthermore, I don't understand what relation there is between "control
>> which features are permitted" and "specifying SPARK sources for
>> analysis".
> 
> This is a generalization of the SPARK/not SPARK approach, which I don't
> like. To me, the feature-control and the identification of the SPARK
> subset should be kept separate.

But we should not confuse:

  1. specifying that a given (subprogram) entity is intended to be in SPARK
  2. specifying the scope of a given analysis.

Sure, we need a subprogram body to be in SPARK in order to analyze it
for proof, but we don't necessarily wish to analyze, on every run, all
the entities that are in the SPARK subset.

Are we talking about 1. or 2. in this TN? The subject line implies 2.,
but everything that has been said makes me think we are mostly
discussing 1.
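
To make the distinction concrete (the aspect name and tool switches
below are illustrative, not settled), 1. is a statement in the source:

   --  This entity is intended to be in SPARK:
   procedure Bump with SPARK => True;

while 2. is a property of a particular tool run, e.g. asking GNATprove
to analyze only one unit or one subprogram out of everything that is
marked as SPARK:

   gnatprove -P counters.gpr --limit-subp=counters.ads:2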

