[Spark2014-discuss] [M122-037] specifying SPARK sources for analysis

Yannick Moy moy at adacore.com
Thu Mar 14 15:04:38 CET 2013


On 03/14/2013 02:41 PM, Cyrille Comar wrote:

> just for the record, I fundamentally disagree with this conclusion. A
> lexical on/off mechanism doesn't make any sense to me.

The main benefit of using SPARK instead of full Ada is proof, and the 
finest grain at which we know how to specify and verify properties is 
the subprogram. So I don't see the use of a mechanism for stating 
"these statements are in SPARK".
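For illustration (this is only a sketch, using the existing Ada 2012 
contract aspects): what we know how to verify is a property attached to 
a whole subprogram declaration, not to an arbitrary run of statements:

   function Increment (X : Integer) return Integer
     with Pre  => X < Integer'Last,
          Post => Increment'Result = X + 1;
   --  GNATprove checks the body against Pre/Post as a unit; there is
   --  no comparable contract for "these three statements".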

To me, two solutions make sense:
- an aspect on declarations
- a restricted on/off mechanism (for example, a specific pragma 
SPARK_Policy) that appears either as a configuration pragma or at 
library level in a package, with the intent that it applies to the 
declarations that follow (see the sketch below).
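
To make the two options concrete, here is a rough sketch; the aspect 
name and the exact pragma form are placeholders, nothing here is 
settled syntax:

   --  Option 1: an aspect on each declaration (placeholder name)
   procedure Swap (X, Y : in out Integer)
     with SPARK_Mode => On;

   --  Option 2: a pragma at library level in a package, applying to
   --  the declarations that follow it (hypothetical pragma form)
   package Counters is
      pragma SPARK_Policy (On);
      procedure Increment (C : in out Natural);
   end Counters;

The same pragma could also appear as a configuration pragma to cover a 
whole set of units.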

> Using an aspect
> could make more sense to me but doesn't seem very urgent. I still think
> that defining the scope of the formal analysis is mainly an issue of
> providing the right input files and parameters to the toolset.

I'm currently not happy with the "detect" mode in which GNATprove 
works. I think users need a way to state exactly which parts of a unit 
are in SPARK (and should be analyzed by GNATprove), and which are not. 
And I'd rather fix this in the coming weeks than at the end of the year.
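
Concretely, the kind of thing users should be able to write is 
something like this (placeholder syntax again), so that GNATprove 
analyzes Add and leaves Debug_Dump alone:

   package Mixed is

      --  In the SPARK subset, analyzed by GNATprove
      function Add (X, Y : Natural) return Natural
        with SPARK_Mode => On,
             Pre        => X <= Natural'Last - Y;

      --  Outside the subset, excluded from analysis
      procedure Debug_Dump
        with SPARK_Mode => Off;

   end Mixed;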

> Furthermore, I don't understand what relation there is between "control
> which features are permitted" and "specifying SPARK sources for analysis".

Controlling which features are permitted is a generalization of the 
SPARK/not-SPARK distinction, and that generalization is what I don't 
like. To me, feature control and the identification of the SPARK subset 
should be kept separate.
-- 
Yannick Moy, Senior Software Engineer, AdaCore

