[Spark2014-discuss] [M503-001] SPARK 2014: proof rules for type invariants

Yannick Moy moy at adacore.com
Fri May 3 09:51:32 CEST 2013


Thanks for raising this important question, Steve. The issue you mention has been faced by other contract-based annotation languages with type invariants (JML and Spec#).

I believe that the situation you describe is ignored in JML tools (see http://www.eecs.ucf.edu/~leavens/JML/jmlrefman/jmlrefman_8.html, which describes the "ideal" JML logic and the simplification used in tools when "ignoring the problems with object ownership and alias control"): invariants are assumed on entry, and checked on exit. This is the root of the well-known reentrance problem, where F calls G (both being methods of the same class) in a state where the invariant is not established.
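
Transposed to Ada terms, the reentrance problem looks roughly like this (hypothetical package P; a sketch only, assuming the "assume on entry" rule applies to every call on a public subprogram, including internal ones):

```ada
package P is
   type T is private;
   procedure F (X : in out T);
   procedure G (X : in out T);
private
   type T is record
      Value : Integer := 0;
   end record
     with Type_Invariant => T.Value >= 0;
end P;

package body P is

   procedure G (X : in out T) is
   begin
      --  If the invariant is assumed on entry to every public
      --  subprogram, G assumes X.Value >= 0 here ...
      X.Value := X.Value + 1;
   end G;

   procedure F (X : in out T) is
   begin
      X.Value := -1;   --  invariant temporarily broken
      G (X);           --  ... yet F calls G with it broken:
                       --  the reentrance problem
      X.Value := 0;
   end F;

end P;
```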

Spec# has defined a precise ownership methodology (see http://www.jot.fm/issues/issue_2004_06/article2/article2.pdf), where an object can be closed (its invariant has been verified) or open (it can be updated), and the user tracks a ghost field of each object stating whether the object is closed or open. So the burden is the same as the one you described, except that it concerns only one ghost field.
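
For illustration only, here is a rough SPARK-style sketch of that discipline, with the ownership bit threaded through contracts as a ghost query (all names hypothetical, not a proposal):

```ada
package Objects is
   type Obj is private;

   function Closed (X : Obj) return Boolean with Ghost;
   --  Stands for Spec#'s ownership bit: True when the
   --  invariant of X is established.

   procedure Open   (X : in out Obj) with Pre  => Closed (X),
                                          Post => not Closed (X);
   procedure Update (X : in out Obj) with Pre  => not Closed (X);
   procedure Close  (X : in out Obj) with Pre  => not Closed (X),
                                          Post => Closed (X);

   procedure Use_It (X : Obj) with Pre => Closed (X);
   --  Every operation must thread the Closed fact through its
   --  contracts: the burden mentioned above, limited to one
   --  ghost field.
private
   type Obj is record
      Data      : Integer := 0;
      Is_Closed : Boolean := True;
   end record;

   function Closed (X : Obj) return Boolean is (X.Is_Closed);
end Objects;
```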

I believe SPARK should do better, and aim at what the JML RM gives as a goal: "Of course, one would like to enforce invariants in a more modular way. By a modular enforcement of invariants, we mean that one could verify each type independently of the types that it does not use, and that a well-formed program put together from such verified types would still satisfy the semantics for invariants given above. That is, each type would be responsible for the enforcement of the invariants it declares and would be able to assume, without checking, the invariants of other types it uses."

A simple way to do this is to add checks (even checks that are only proved, never executed) at every point where an object of the type escapes the scope of the unit (typically: a call to a function outside the unit taking the object as parameter). Regarding the checks on subprogram entry (for subprograms defined in the public part of the unit), these should be added only for subprogram calls inside the unit itself. Calls from outside the unit always pass in objects whose invariant holds, by the meta-reasoning that no object with a violated invariant can have escaped the unit scope.
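
As a sketch (hypothetical Sets and Logging units; capacity checks omitted for brevity), comments mark where such checks, possibly as proof obligations only, would be generated:

```ada
package Sets is
   type Set is private with Type_Invariant => Is_Valid (Set);
   function Is_Valid (S : Set) return Boolean;
   procedure Insert (S : in out Set; E : Integer);
private
   type Int_Array is array (1 .. 8) of Integer;
   type Set is record
      Count : Natural := 0;
      Items : Int_Array := (others => 0);
   end record;
end Sets;

with Logging;  --  hypothetical unit outside Sets

package body Sets is

   --  Invariant: no duplicate elements.
   function Is_Valid (S : Set) return Boolean is
     (S.Count <= S.Items'Last
        and then
      (for all I in 1 .. S.Count =>
         (for all J in I + 1 .. S.Count =>
            S.Items (I) /= S.Items (J))));

   --  Internal helper: no invariant assumed or checked.
   procedure Drop_If_Duplicate (S : in out Set) is
   begin
      for J in 1 .. S.Count - 1 loop
         if S.Items (J) = S.Items (S.Count) then
            S.Count := S.Count - 1;
            exit;
         end if;
      end loop;
   end Drop_If_Duplicate;

   procedure Insert (S : in out Set; E : Integer) is
   begin
      S.Count := S.Count + 1;
      S.Items (S.Count) := E;  --  invariant possibly broken (duplicate)
      Logging.Dump (S);
      --  ^ S escapes the unit here: check the invariant of S
      Drop_If_Duplicate (S);   --  restores the invariant
   end Insert;
   --  ^ invariant of S checked on exit of this public subprogram

end Sets;
```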

Now, answering your questions:

-- Steve Baird (2013-05-03)
> The runtime checks associated with type invariants are performed
> at the end of a call to a "boundary" subprogram which updates
> an object whose type is subject to an invariant.
> 
> It was observed that the associated proof obligation would
> be difficult to discharge if we didn't also have the invariant as
> a precondition.
> 
> And so we do. An additional proof obligation, not corresponding
> to any Ada-defined runtime check, is defined (see Spark RM
> for details). Otherwise the user would have had to explicitly
> repeat the invariant as a precondition and that would not be good.

The important point is that the invariant of all input objects should be assumed on entry when proving a subprogram. It remains to decide where we should add checks to make this possible.

> After discussing this with Trevor, it seems like this is only
> a partial solution to the problem. In some sense, we have
> only moved the lump in the rug.

I agree. Simply adding the invariant as a precondition is not a good way to ensure the important point above. The better way is to add checks everywhere the object escapes the unit, and wherever, from within the unit, we call a subprogram that expects the invariant to hold.

[snip]
> To prove this, does Zz need to repeat the precondition as
> a precondition of its own, thereby shifting the burden to Yy?

If we stay with simple preconditions, that's indeed the case.
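
Concretely, with only explicit preconditions, the chain from the discussion would have to look like this (hypothetical declarations, for a type T whose invariant is Is_Valid):

```ada
--  Each link in the call chain Aa -> ... -> Yy -> Zz must
--  repeat the invariant explicitly as a precondition.
procedure Zz (X : in out T) with Pre => Is_Valid (X);
procedure Yy (X : in out T) with Pre => Is_Valid (X);  --  to prove Zz's Pre
--  ...
procedure Aa (X : in out T) with Pre => Is_Valid (X);  --  and so on
```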

> If we repeat this process until we get back to Aa, this
> will work. At the point where Aa is called, the invariant
> is known to be true and all is well.
> 
> Except that the precondition has now been explicitly
> copied a large number of times. This is not good, even
> if the invariant is just a call to a function named
> Is_Valid. And it gets worse when we pass around
> objects with components of subject-to-invariant types.

yes!

> Am I right in thinking that this is the current
> situation? That is, you can use type invariants, but it can
> rapidly become very burdensome in some cases because
> the invariant will have to be kept track of explicitly.

That's the current situation.

> Consider, in contrast, how subtypes (including subtypes
> with predicates) work. We have an implicit precondition
> on every subprogram that all parts of all of its inputs
> belong to their respective subtypes and an implicit postcondition
> that the same holds for outputs.

That's what we should aim at for type invariants in SPARK, I agree.

> We'd like to use the same approach for invariants, but there are
> some differences between subtype constraints/predicates and
> invariants:
> 1) it is perfectly ok to violate the
>    invariant while one is inside the package.

Yes, so we should check it:
- when calling a subprogram which expects the invariant to hold
- when escaping from the unit (say, copying the object into some global object)

> 2) The infinite-recursion problem (this can occur with
>    subtypes if a predicate is imposed on a first named
>    subtype and involves using the current instance in
>    way that involves a predicate check, typically
>    passing it as a parameter in a call; for subtypes,
>    the answer is "don't do that".). Perhaps this
>    problem could somehow be finessed because we are
>    only talking about proofs and proof obligations here.

We should restrict how a type invariant can be defined so that this issue does not arise. For example, we could require that a type invariant be expressed only through function calls, not procedure calls, at any depth (the invariant can call F, which can only call functions, which can only call functions, etc.), and keep the rule that no invariant check is performed on a function call (the current rule in Ada, so we can keep it for SPARK). Since SPARK enforces that function calls cannot have side effects, the invariant only needs to be checked prior to the top-most function call, which I believe prevents the recursion problem you had in mind.
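
A sketch of the restricted shape (hypothetical Counters package): the invariant is a single function call, and since no invariant check is performed on function calls, evaluating Is_Valid can never itself trigger an invariant check:

```ada
package Counters is
   type Counter is private
     with Type_Invariant => Is_Valid (Counter);

   --  Under the proposed restriction, Is_Valid and anything it
   --  calls transitively may only be functions; SPARK functions
   --  have no side effects.
   function Is_Valid (C : Counter) return Boolean;

   procedure Increment (C : in out Counter);
private
   type Counter is record
      Value : Natural := 0;
   end record;

   function Is_Valid (C : Counter) return Boolean is
     (C.Value <= 1000);
end Counters;
```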

> Could we have a different implicit contract for a subprogram
> depending on where the subprogram is declared (i.e., on
> whether it is allowed to break the invariant)?

That's already the case, I believe: publicly visible subprograms of the unit assume the invariant on entry and check it on exit; other subprograms of the unit do neither.
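
In other words (hypothetical package P2, a sketch of the two implicit contracts):

```ada
package P2 is
   type T is private with Type_Invariant => Is_Valid (T);
   function Is_Valid (X : T) return Boolean;

   procedure Public_Op (X : in out T);
   --  Implicit contract: invariant of X assumed on entry,
   --  checked on exit.
private
   type T is new Integer;
   function Is_Valid (X : T) return Boolean is (X >= 0);
end P2;

package body P2 is

   --  Declared in the body: no implicit invariant assumption or
   --  check, so it may legitimately receive X with the invariant
   --  broken.
   procedure Helper (X : in out T) is
   begin
      X := abs X;
   end Helper;

   procedure Public_Op (X : in out T) is
   begin
      X := X - 1;    --  invariant may be broken here
      Helper (X);    --  fine: Helper has no implicit contract
   end Public_Op;    --  invariant of X checked on return
end P2;
```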

> I think we could get into some complicated messes with
> the LSP requirements we impose for overriding subprograms
> if we have different implicit contracts for a subprogram
> depending on where it is declared, but I doubt this nasty
> interaction would come up in practice.

I'll have to see an example.

> Alternatively, we could tighten up Ada's rules some more
> (we've already tightened them up slightly as described above)
> and treat type invariants just like subtype validity for
> purposes of defining implicit subprogram contracts for
> all subprograms. That would mean that it is ok to violate the
> invariant within a subprogram, but you can't pass an
> invariant-violating-value as a parameter, even to
> a subprogram which is not even visible outside of the
> invariant-maintaining package.

No, I don't think that's necessary. You want to keep helper subprograms which may take, on input, objects whose invariant is broken. But that's close to the idea I suggested.

What do you think?
--
Yannick Moy, Senior Software Engineer, AdaCore

More information about the Spark2014-discuss mailing list