[Spark2014-discuss] [M122-037] [Spark2014] aspect to identify code that is in/out of SPARK 2014

Yannick Moy moy at adacore.com
Wed Jan 23 10:46:16 CET 2013


Hello Patrice, thanks for your insights. I am moving this conversation 
to spark2014-discuss@, as I think it's interesting to others.

On 01/22/2013 07:01 PM, Patrice Chalin wrote:
> Hi Yannick,
>
> I did not want to add more to the issue yet, but figured that I would
> mention the following to you now as you proceed to open a TN on the
> topic.  It is a bit beyond the capability that you had in mind, but it
> is related.
>
> In JML, we felt the need to be able to hide whole annotations, or even
> parts of them, from various tools (because their analysis capabilities
> differ so much).  In the case of JML, the two main tools are the JML
> RAC (Runtime Assertion Checker, which compiles assertions into code so
> that they can be tested at runtime) and ESC/Java2 (which performs
> VCGen-based static checking).

This is indeed similar to our current situation, with the compiler and 
GNATprove, plus soon Sireum! :)

> Here is an example taken from a Map specification (the full details of
> the spec are not relevant at this point):
>
>          /*+@ also
>            @   public normal_behavior
>            @     requires o instanceof Entry;
>            @     ensures \result ==
>            @      (    JMLNullSafe.equals(((Entry)o).abstractKey,
>            @                              abstractKey)
>            @        && JMLNullSafe.equals(((Entry)o).abstractValue,
>            @                              abstractValue) );
>            @+*/
>          /*@  also
>            @   public normal_behavior
>            @     requires !(o instanceof Entry);
>            @     ensures \result == false;
>            @*/
>          /*-@  also
>            @   public normal_behavior
>            @     requires o instanceof Entry;
>            @     ensures \result ==
>            @      (    nullequals(((Entry)o).abstractKey, abstractKey)
>            @        && nullequals(((Entry)o).abstractValue,
>            @                      abstractValue) );
>            @-*/
>          /*@ pure @*/ boolean equals(/*@ nullable @*/ Object o);
>
>
> This example has three contract cases for the equals() method. Only the
> first two cases will be processed by the JML RAC, and ESC/Java2 will
> only see the last two cases.  The reason for excluding spec cases (or
> any other assertion or assertion subexpression) from analysis by a
> given tool is that the tool cannot cope with them elegantly enough
> (e.g. they are impractical to execute at runtime, or too difficult to
> analyze statically).

We will also have non-executable contracts in SPARK 2014, but so far we 
have discussed a much simpler way to handle them: the user should never 
compile with assertions enabled if some of the assertions are not 
executable! This maps well to the needs of users like Secunet, who are 
interested in non-executable contracts but do not care about executing 
contracts in general.
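
To illustrate (a made-up sketch; the names Is_Prime and 
First_Prime_Above are mine, not from any actual user code): the 
postcondition below is legal, executable Ada, but evaluating it at run 
time would scan a potentially huge range, so such a unit would simply 
be built with assertion checks disabled (i.e. without -gnata), and the 
contract discharged by GNATprove instead:

    --  Made-up example of a contract that is impractical to execute:
    --  checking primality this way is linear in N, so the intent is
    --  that the postcondition be proved, never evaluated at run time.
    function Is_Prime (N : Positive) return Boolean is
      (N > 1 and then (for all D in 2 .. N - 1 => N mod D /= 0));

    function First_Prime_Above (N : Positive) return Positive
      with Post =>
        First_Prime_Above'Result > N
        and then Is_Prime (First_Prime_Above'Result);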

As for the difficulty of proving a given contract, that's not an input 
problem but rather an output one: those parts of the contracts won't be 
proved automatically, and the tool or platform will need to provide a 
way for the user to validate them (manually, through a mix of test and 
proof, or by manual proof).

The "one language, one semantics, one contract" approach of SPARK 2014 
provides, in my view, enough benefits in terms of simplicity and 
readability that I would be very careful about changing it.

> When we considered moving JML to the Java 5 annotation syntax, we
> actually generalized this capability so that markers for more than two
> tools could be given.  I don't have the notes about that handy, but I
> can dig them up and send them to you if/when the time comes.
>
> As SPARK matures and has more and more tools capable of analyzing its
> specification constructs, IMHO "conditional" inclusion of specification
> constructs will become inevitable, so that the same source files can be
> processed by the largest possible set of tools.

I understand that this approach offers more flexibility, but it clearly 
has the potential to harm the simplicity and readability of the 
language, and the understandability of what the tools achieve.

The same flexibility can be provided on the tool side by:
- ignoring annotations that the tool cannot handle (and recording that 
fact for users to inspect);
- having a way in the platform to account for a tool's inability to 
handle some annotations (so that some assertions are not proved by that 
tool, and that's fine).

> Another use case is when a new feature (such as a new aspect) is
> introduced but not supported in all tools. This allows the same source
> to be used by all tools, without breaking the analysis capabilities of
> tools that have not yet caught up.

All the tools we are discussing are based on the GNAT frontend, so that 
should not be a problem: the frontend will be updated to deal with the 
new aspect, and each tool can then decide whether or not to handle it 
(by default, it will simply not handle it!)

> Also, with our push to Java 5 annotations, we considered precisely the
> capability that you mentioned during the call today, i.e. annotating a
> unit as being compliant with a given language subset.

That's what I proposed yesterday: a SPARK aspect on every declaration. 
Most people felt it was not appropriate, but maybe we can discuss this 
further.
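
To make the proposal concrete, here is a strawman sketch (the aspect 
name and the exact syntax are only illustrative, nothing was decided):

    --  Strawman: a Boolean aspect marking whether a declaration
    --  is inside the SPARK 2014 subset.
    procedure Log_Debug (Msg : String)
      with SPARK => False;  --  uses features outside the subset

    function Increment (X : Integer) return Integer
      with SPARK => True,
           Pre   => X < Integer'Last,
           Post  => Increment'Result = X + 1;

Tools that analyze only the SPARK subset would then skip declarations 
marked SPARK => False, while the compiler would process everything.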

> I know that I should probably be putting the above somewhere on the
> wiki so as to make it available to the team if they wish to discuss
> such features at some point, but I wasn't sure where it should go …
> hence this e-mail to you.

I think spark2014-discuss@ is a better place to have this discussion 
among all those interested.

Any other opinions on that subject?
-- 
Yannick Moy, Senior Software Engineer, AdaCore

