| Age | Commit message | Author |
|
|
|
|
|
This is needed in the rewriter to avoid a hack; indeed, it makes sense to
propagate this information to the callers of save.
|
|
We unify the information about constants so it is shared among all the
paths [interactive, non-interactive, obligations].
IMHO the current setup looks pretty good, with information split into
a per-constant record `CInfo.t` and variables affecting mutual
definitions at once, which live in `Info.t`.
The main piece of information outside our `Info` record is `opaque`,
which is provided at different points depending on the case.
There are a few nits regarding interactive proofs, which will go away
in the next commits.
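For illustration, a rough sketch of the split in plain OCaml (module and
field names are hypothetical, not the actual Declare API):

    (* Hypothetical sketch: per-constant data vs. data shared by a whole
       mutual definition; [opaque] stays outside because it is only known
       at saving time. *)
    module CInfo = struct
      type t = { name : string; typ : string option; impargs : string list }
    end

    module Info = struct
      type t = { poly : bool; scope : [ `Global | `Local ]; kind : string }
    end

    let save ~(opaque : bool) (info : Info.t) (cinfos : CInfo.t list) =
      ignore (opaque, info, cinfos)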
|
|
This is in preparation for the next commit, which will clean up the
current API flow in `Declare`.
|
|
|
|
|
|
The module is now a stub. We choose to be explicit about the parameters
for now; this will improve in subsequent commits with the refactoring of
proof / constant information.
|
|
At this point the record in lemmas was just a stub; the next commit will
stop exposing the internals of the mutual information and pave the way
for the refactoring of `Info.t` handling in the Declare interface.
|
|
Reviewed-by: Matafou
Ack-by: SkySkimmer
Reviewed-by: gares
|
|
This is extracted from #9710, where we need the environment anyway to compute
iota rules on inductive types with let-bindings. The commit is self-contained,
so I think it could go in directly, saving me a few rebases.
Furthermore, this is also related to #11707. Assuming we split cbn from the
other reduction machine, this allows merging the "local" machine with
the general one, since after this PR they will have the same type. One less
reduction machine should make people happy.
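A stand-alone sketch of the signature change (placeholder types, not Coq's
real ones): once the "local" machine also receives an environment, both
machines have the same type and can share an interface.

    (* Placeholder types standing in for Coq's [env] and [constr]. *)
    type env = unit
    type constr = Var of string | App of constr * constr

    (* Before: the "local" machine worked on the term alone. *)
    let whd_local_old (c : constr) : constr = c

    (* After: it also takes the environment (needed e.g. to look at
       let-bindings in the inductive declaration), so its type matches
       the general machine and the two can be merged. *)
    let whd_general (_ : env) (c : constr) : constr = c
    let whd_local (env : env) (c : constr) : constr = whd_general env c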
|
|
|
|
This function was almost always used with the wrapper around it.
|
|
We place the creation and saving of interactive proofs in the same module;
this will allow making `proof_entry` private, improving invariants
and control over clients, and reducing the API [for example, the next
commit will move abstract declaration into this module, removing the
exported ad-hoc `build_constant_by_tactic`].
The next step will be to unify all the common code in the interactive /
non-interactive cases, but we need to tweak the handling of obligations
first.
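A minimal sketch of what keeping the type private buys, with hypothetical
names (the real `proof_entry` has many more fields):

    (* Hypothetical sketch: clients can only build entries through the
       smart constructor, so invariants are checked in one place. *)
    module Entry : sig
      type proof_entry                       (* abstract to clients *)
      val make : body:string -> opaque:bool -> proof_entry
      val is_opaque : proof_entry -> bool
    end = struct
      type proof_entry = { body : string; opaque : bool }
      let make ~body ~opaque = { body; opaque }
      let is_opaque e = e.opaque
    end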
|
|
As part of the proof refactoring work, I am doing some modifications to
`funind`, and the indentation of that code is driving me a bit crazy; I'd
much prefer to delegate it to an automatic tool.
|
|
|
|
This is the easy part of removing unsafe_type_of, as type_of_variable
doesn't return (or even take as argument) an evar map.
|
|
|
|
Beware of the 0. = -0. issue for primitive floats
IEEE 754 specifies that 0. and -0. compare as equal, but we cannot
say they are equal up to Leibniz equality.
Therefore we must patch the equality and the total comparison inside the
kernel to prevent an inconsistency.
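For illustration in plain OCaml (not the kernel code itself): IEEE equality
identifies the two zeros even though their bit patterns differ, so a
Leibniz-compatible comparison has to fall back to the bits.

    let () =
      assert (0.0 = -0.0);                      (* IEEE equality: equal *)
      assert (Int64.bits_of_float 0.0 <> Int64.bits_of_float (-0.0))

    (* One possible total comparison that separates -0. from 0.:
       use IEEE ordering first, then the bit representation. *)
    let float_total_compare (x : float) (y : float) : int =
      if x < y then -1
      else if x > y then 1
      else compare (Int64.bits_of_float x) (Int64.bits_of_float y)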
|
|
Ack-by: SkySkimmer
Reviewed-by: ppedrot
|
|
We remove the calls to `Lemmas.Info.make` that were using the default
parameters, as this is mostly dead code now.
This brings quite a few things into question, in particular the
uneven support of the `scope` attribute across different commands / plugins.
We don't attempt to solve that yet; hopefully the ongoing refactoring of
the constant-saving path will be able to take care of these
inconsistencies.
|
|
|
|
|
|
We also attempt a version that may work with `Proofview.tactic`; it may
need more work.
|
|
Not pretty, but it had to be done some day, as `Globnames` seems to be
on the way out.
I have taken the opportunity to reduce the number of `open` in the
codebase.
The qualified style would indeed allow us to use slightly nicer names,
`GlobRef.Inductive` instead of `IndRef`, etc.; once we have the
tooling to do large-scale refactoring, that could be tried.
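Schematically, the qualified style keeps the module name at use sites
instead of relying on `open` (the constructor payloads below are
placeholders, not Coq's actual ones):

    module GlobRef = struct
      type t =
        | VarRef of string
        | ConstRef of string
        | IndRef of string * int
        | ConstructRef of (string * int) * int
    end

    (* No [open GlobRef] needed; the qualification documents the origin. *)
    let is_inductive = function GlobRef.IndRef _ -> true | _ -> false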
|
|
We remove the special error printing pre-processing in favor of just
calling the standard printers.
Error printing has been a bit complex for a while due to an incomplete
migration to a new printing scheme based on registering exception
printers; this PR should alleviate that by completing the registration
approach.
After this cleanup, it should rarely be necessary for normal functions
to worry about catching errors and re-raising them, unless they have
very special needs.
This change also allows consolidating the `explainErr` and `himsg`
modules into one, removing the need to export the error printing
functions. Ideally we would make the contents of `himsg` more
localized, but this can be done gradually.
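The registration approach is the same idea as OCaml's standard
`Printexc.register_printer`; a stand-alone example of the pattern (Coq's
own registration API differs in detail):

    exception Unification_failure of string

    let () =
      Printexc.register_printer (function
        | Unification_failure msg -> Some ("Unification failure: " ^ msg)
        | _ -> None)

    (* Callers just raise; whoever reports the error at the top level
       goes through the registered printers. *)
    let () =
      try raise (Unification_failure "cannot unify nat and bool")
      with e -> print_endline (Printexc.to_string e)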
|
|
We can use the logical kind for the same purpose, which is mainly
dumpglob; `goal_object_kind` was never matched against, making this
transformation safe.
|
|
We move the bulk of `Decl_kinds` to a better place [namely
`interp/decls`] and refactor the use of this information quite a bit.
The information seems to be used almost exclusively by `Dumpglob`, so it
should certainly end up there to achieve a cleaner core.
Note the previous commits, as well as the annotations regarding the
dubious use of the "variable" data managed by the `Decls` file.
IMO this needs more work, but this should be a good start.
|
|
We move special vernac-qed handling to a special function, making the
regular vernacular interpretation path uniform.
This is an important step, as it paves the way to exporting the vernac
DSL to clients: there are no special vernacs anymore in the regular
interp path, except for Load, which has to be handled separately for
silly reasons, as morally it is a `VtNoProof` command.
|
|
This datatype does belong to this layer.
|
|
We split `{goal,declaration,assumption}_kind` into their
components. This makes sense as each part of this triple is handled by
a different layer, namely:
- `polymorphic` status: necessary for the lower engine layers;
- `locality`: only used in `vernac` top-level constants;
- `kind`: merely used for cosmetic purposes [it could indeed be removed /
  pushed upwards].
We also take the opportunity to add some named parameters to
the top-level definition API, which is quite parameter-hungry.
More refactoring is possible and will come in further commits, in
particular this is a step towards unifying the definition / lemma save path.
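A rough sketch of the resulting shape, with named parameters (the names
are illustrative, not the actual Declare signatures):

    type locality = Discharge | Global
    type kind = Definition | Lemma | Example

    (* Each component is now passed separately, where its layer needs it,
       instead of as one (polymorphic * locality * kind) triple. *)
    let declare_definition ~(poly : bool) ~(scope : locality)
        ~(kind : kind) ~name body =
      ignore (poly, scope, kind, name, body)

    let () =
      declare_definition ~poly:false ~scope:Global ~kind:Definition
        ~name:"id" "fun x => x"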
|
|
This information is already present on `Proof.t`, so we extract it
from there.
Moreover, this information is essential to the lower-level proof, as
opposed to the "kind" information, which is only relevant to the vernac
layer; we will thus move it to its proper layer in subsequent commits.
|
|
Lemmas.info had gotten a bit out of hand, as had the parameters to the
`start_*` family. Most of the info is not needed and should hopefully
remain confined to special cases: most callers only set the hook,
and obligations should soon be better served by a `start_obligation`
function.
|
|
We had to move the private opaque constraints out of the constant declaration
into the opaque table. The API is not very pretty yet due to a pervasive
confusion between monomorphic global constraints and polymorphic local ones,
but once we get rid of futures in the kernel this should be magically solved.
|
|
|
|
The main idea of this PR is to distinguish the type of "proof object",
`Proof_global.t`, from the type of "proof object associated to a
constant", the new `Lemmas.t`.
This way, we can move the terminator setup to the higher layer in
`vernac`, which is the one that really knows about constants, paving
the way for further simplification and in particular for a unified
handling of constant saving by removing the inversion of control here.
Terminators are now internal to `Lemmas`, as it is the only part of
the code applying them.
As a consequence, proof nesting is now handled by `Lemmas`, and
`Proof_global.t` is just a single `Proof.t` plus some environmental
meta-data.
This also enables considerable simplification in a future PR: this
patch makes `Proof.t` and `Proof_global.t` essentially the same, so we
should expect to handle them under a unified interface.
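Schematically, with hypothetical field names (not the actual interface):

    (* After the split: a single ongoing proof plus environmental
       meta-data on one side... *)
    module Proof_global = struct
      type t = { proof : string (* stands for a Proof.t *);
                 endline_tactic : string option }
    end

    (* ...and on the vernac side, a proof destined to become a constant,
       carrying the constant-related data (hooks, nesting, what used to
       be the terminator). *)
    module Lemmas = struct
      type t = { pstate : Proof_global.t;
                 hook : (string -> unit) option }
    end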
|
|
Formerly, knowing if a declaration was to be discharged, to be global
but invisible at import, or to be global but visible at import was
obtained by combining the parser-level information (i.e. use of
Variable/Hypothesis/Let vs use of Axiom/Parameter/Definition/..., use
of Local vs Global) with the result of testing whether there were open
sections.
We change the meaning of the Discharge flag: it no longer records that
the declaration was syntactically a Variable/Hypothesis/Let, but
instead gives the expected semantics of the declaration (issuing a
warning in the parser-to-interpreter step if the semantics is not the
one suggested by the syntax). In particular, the interpretation/command
engine becomes independent of the parser.
The new "semantic" type is:
type import_status = ImportDefaultBehavior | ImportNeedQualified
type locality = Discharge | Global of import_status
In the process, we found a couple of inconsistencies in the treatment
of the locality status. See bug #8722 and test file LocalDefinition.v.
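A stand-alone sketch of the warning step described above (the concrete
mapping from syntax to semantics is only illustrative):

    type import_status = ImportDefaultBehavior | ImportNeedQualified
    type locality = Discharge | Global of import_status

    (* The parser-to-interpreter step computes the intended semantics and
       warns when it differs from what the keyword suggests. *)
    let enforce_locality ~(syntactic : locality) ~(semantic : locality) =
      if syntactic <> semantic then
        prerr_endline
          "Warning: declaration semantics differ from what the syntax suggests.";
      semantic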
|
|
Typically instead of [start_proof : ontop:Proof_global.t option -> bla ->
Proof_global.t] we have [start_proof : bla -> Proof_global.pstate] and
the pstate is pushed on the stack by a caller around the
vernacentries/mlg level.
Naming can be a bit awkward; hopefully it can be improved (maybe in a
follow-up PR).
We can see some patterns appear that are waiting for nicer combinators;
e.g., in mlg we often only want to work with the current proof, not the
stack.
Behaviour should be similar modulo bugs; let's see what CI says.
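A stand-alone sketch of the inversion described above (placeholder types,
not the real API):

    type pstate = { goal : string }       (* stands for Proof_global.pstate *)
    type stack = pstate list

    (* start_proof no longer threads the stack itself: *)
    let start_proof (goal : string) : pstate = { goal }

    (* the caller at the vernacentries/mlg level pushes the result, *)
    let interp_start_lemma (st : stack) (goal : string) : stack =
      start_proof goal :: st

    (* and most mlg code only needs the proof on top, not the stack. *)
    let with_current_proof (f : pstate -> pstate) : stack -> stack = function
      | [] -> []
      | p :: rest -> f p :: rest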
|
|
We simply pass them as arguments, now that they are not called by the
kernel anymore.
The checker definitely needs to access the opaque proofs. In order not to
touch the API at all, I added a hook there, but it could also be provided
as an additional argument, at the cost of changing all the callers up the
stack.
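The hook mentioned here is the usual OCaml trick of a reference installed
by a higher layer; schematically (not the actual checker code):

    (* A reference that the checker fills in at start-up, so the module
       that needs the opaque proofs does not take an extra argument. *)
    type opaque_handle = int
    type proof_term = string

    let access_opaque : (opaque_handle -> proof_term) ref =
      ref (fun _ -> failwith "opaque accessor not installed")

    let install_opaque_table (tbl : (opaque_handle, proof_term) Hashtbl.t) =
      access_opaque := fun h -> Hashtbl.find tbl h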
|
|
|
|
In that case the terminator and proof object have to be supplied in
the ?proof argument, or else we get an anomaly.
Co-authored-by: Maxime Dénès <mail@maximedenes.fr>
|
|
|
|
This should make https://github.com/coq/coq/pull/9129 easier.
|
|
The kernel should be mostly correct; higher levels do random stuff at
times.
|
|
Now the main functions are unify (which solves the problems entirely),
and unify_delay and unify_leq (which might leave some unsolved constraints).
the_conv_x and the_conv_x_leq are deprecated (they were misnomers, as
they do unification, not conversion).
|
|
This work makes it possible to take advantage of a compact
representation for integers in the entire system, as opposed to only
in some reduction machines. It is useful for heavily computational
applications, where even constructing terms is not possible without such
a representation.
Concretely, it replaces part of the retroknowledge machinery with
a primitive construction for integers in terms, and introduces a kind of
FFI which maps constants to operators (on integers). Properties of these
operators are expressed as explicit axioms, whereas they were hidden in
the retroknowledge-based approach.
This has been presented at the Coq workshop and at some Coq Working Groups,
and has been used by various groups for SMT trace checking,
computational analysis, etc.
Contributions by Guillaume Bertholon and Pierre Roux <Pierre.Roux@onera.fr>
Co-authored-by: Benjamin Grégoire <Benjamin.Gregoire@inria.fr>
Co-authored-by: Vincent Laporte <Vincent.Laporte@fondation-inria.fr>
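An illustration of the idea described in this commit, in plain OCaml
(Coq's actual term representation and its 63-bit integers are more
involved): integers become a node of the term AST, and a small table maps
designated constants to machine operators; on the Coq side, the properties
of those operators are then stated as explicit axioms.

    type constr =
      | Int of int64                  (* primitive integer in the term *)
      | Const of string               (* a named constant *)
      | App of constr * constr list

    (* FFI-like table from constants to machine operators. *)
    let primitive_ops : (string * (int64 -> int64 -> int64)) list =
      [ ("int_add", Int64.add); ("int_mul", Int64.mul) ]

    let reduce_primitive (t : constr) : constr =
      match t with
      | App (Const c, [ Int x; Int y ]) ->
        (match List.assoc_opt c primitive_ops with
         | Some op -> Int (op x y)
         | None -> t)
      | _ -> t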
|
|
We make `declaration_hook`s optional arguments everywhere, thus avoiding
having to pass some "fake" functions.
This positively identifies the code that really uses hooks [funind,
rewrite, coercions, program, and canonicals] and helps move toward
some hope of reification.
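This is the standard optional-argument pattern in OCaml; a stand-alone
sketch (not the actual signatures):

    (* Before: every caller had to pass a hook, even a do-nothing one. *)
    let declare_old ~(hook : string -> unit) name = hook name
    let () = declare_old ~hook:(fun _ -> ()) "foo"      (* "fake" hook *)

    (* After: the hook is optional, so only the real users stand out. *)
    let declare ?hook name =
      (match hook with Some h -> h name | None -> ());
      name

    let _ = declare "foo"
    let _ = declare ~hook:(fun n -> print_endline ("declared " ^ n)) "bar"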
|
|
We remove the `Proof_types` file, which was a trivial stub; we also
clean up a few layers of aliases.
This is not much, but every little step helps.
|
|
|
|
|