| Age | Commit message | Author |
|
|
|
Numeral Notations are not well-supported inside functors. We now give a
proper error message rather than an anomaly when this occurs.
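For instance (a minimal sketch, assuming `Nat.of_uint`/`Nat.to_uint` as the
conversion functions and assuming the error is raised at declaration time):
```coq
Module Type Empty. End Empty.

Module F (E : Empty).
  (* A numeral notation declared inside a functor is not supported; it is
     now expected to be rejected with a proper error (caught here by Fail)
     rather than producing an anomaly. *)
  Fail Numeral Notation nat Nat.of_uint Nat.to_uint : nat_scope.
End F.
```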
|
|
Thanks to Emilio and Pierre-Marie Pédrot for pointers.
|
|
Because that's the sane thing to do.
This will inevitably cause issues for projects which do not `Import
Coq.Strings.Ascii` before trying to use ascii notations.
We also move the syntax plugin for `int31` notations from `Cyclic31` to
`Int31`, so that users (like CompCert) who merely `Require Import
Coq.Numbers.Cyclic.Int31.Int31` get the `int31` numeral syntax. Since
`Cyclic31` `Export`s `Int31`, this should not cause any additional
incompatibilities.
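For illustration (assuming the stock library):
```coq
(* The ascii notations are activated by the Import (performed here by
   Require Import); a mere Require no longer suffices. *)
Require Import Coq.Strings.Ascii.
Open Scope char_scope.
Check "a".  (* "a" : ascii *)
```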
|
|
Some of this code is cargo-culted or kludged to work.
As I understand it, the situation is as follows:
There are two sorts of use-cases that need to be supported:
1. A plugin registers an OCaml function as a numeral interpreter. In
this case, the function registration must be synchronized with the
document state, but the functions should not be marshalled / stored
in the .vo.
2. A vernacular registers a Gallina function as a numeral interpreter.
In this case, the registration must be synchronized, and the function
should be marshalled / stored in the .vo.
In case (1), we can compare functions by pointer equality, and we should
be able to rely on globally unique keys, even across backtracking.
In case (2), we cannot compare functions by pointer equality (because
they must be regenerated on unmarshalling when `Require`ing a .vo file),
and we also cannot rely on any sort of unique key being both unique and
persistent across files.
The solution we use here is that we ask clients to provide "unique"
keys, and that clients tell us whether or not to overwrite existing
registered functions, i.e., to tell us whether or not we should expect
interpreter functions to be globally unique under pointer equality. For
plugins, a simple string suffices, as long as the string does not clash
between different plugins. In the case of vernacular-registered
functions, we marshal a description of all of the data used to
generate the function, and use that string as a unique key which is
expected to persist across files. Because we cannot rely on
function-pointer uniqueness here, we tell the
interpretation-registration to allow overwriting.
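For instance, case (2) corresponds to the user-facing `Numeral Notation`
vernacular; a sketch, assuming the command takes the type, the two Gallina
conversion functions and a scope, with the `peano` type and helpers below
made up for the example:
```coq
Inductive peano := PO | PS (p : peano).

Fixpoint peano_of_nat (n : nat) : peano :=
  match n with 0 => PO | S m => PS (peano_of_nat m) end.
Fixpoint nat_of_peano (p : peano) : nat :=
  match p with PO => 0 | PS q => S (nat_of_peano q) end.

(* Gallina conversions to and from the decimal representation. *)
Definition peano_of_uint (u : Decimal.uint) : peano := peano_of_nat (Nat.of_uint u).
Definition uint_of_peano (p : peano) : Decimal.uint := Nat.to_uint (nat_of_peano p).

Delimit Scope peano_scope with peano.
Numeral Notation peano peano_of_uint uint_of_peano : peano_scope.

Check 3%peano.  (* denotes PS (PS (PS PO)) *)
```
Here the unique key is derived from a marshalled description of this data,
and overwriting is allowed since function pointers cannot be relied upon
across `Require`.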
----
Some of this code is in response to comments on the PR
----
Some code is to fix an issue that bignums revealed:
Both Int31 and bignums registered numeral notations in int31_scope. We
now prepend a globally unique identifier when registering numeral
notations from OCaml plugins. This is permissible because we don't
store the uid information for such notations in .vo files (assuming I'm
understanding the code correctly).
|
|
The first part (e.g. register_bignumeral_interpretation) deals only with
the interp/uninterp closures. It should typically be done as a side
effect during syntax plugin loading. No prim notations are active yet
after this phase.
The second part (enable_prim_token_interpretation) activates the prim
notation. It is now correctly talking to Summary and to the LibStack.
To avoid "phantom" objects in libstack after a mere Require, this
second part should be done inside a Mltop.declare_cache_obj.
The link between the two parts is a prim_token_uid (a string), which
should be unique for each primitive notation. When this primitive
notation is specific to a scope, the scope_name could be used as uid.
Btw, the list of "patterns" for detecting when an uninterpreter should
be considered is now restricted to a list of global_reference
(inductive constructors, or injection functions such as IZR).
The earlier API accepted a glob_constr list, but actually worked
well only for global_reference.
A minimal compatibility layer is provided (declare_numeral_interpreter),
but its use is discouraged, since it is known to store unnecessary
objects in the libstack.
|
|
This concerns prim token notations for inductive *types*, not values.
So we're speaking of a scope where 0 is the type nat, 1 is the type bool, etc.
To my knowledge, this feature has never been used and is very unlikely
ever to be used, so let's clean up the code a bit by removing it.
|
|
|
|
This cleanup prepares for the forthcoming synchronization of
prim_token_interpreter with respect to Summary. These chains of
prim_token_interpreter were never used anyway; only one interpreter was
ever declared per numeral scope.
After syncing with respect to Summary, we will in any case be able to
backtrack to a previous interpreter.
|
|
- New command "Declare Custom Entry bar".
- Entries can have levels.
- Printing is done using a notion of coercion between grammar
entries. This typically corresponds to rules of the form
'Notation "[ x ]" := x (x custom myconstr).' but also
'Notation "{ x }" := x (in custom myconstr, x constr).'.
- Rules declaring idents such as 'Notation "x" := x (in custom myconstr, x ident).'
are natively recognized.
- Rules declaring globals such as 'Notation "x" := x (in custom myconstr, x global).'
are natively recognized.
Incidentally merging ETConstr and ETConstrAsBinder.
Noticed in passing that parsing a binder as custom was not done the
same way as in constr.
Some fine-tuning probably remains to be done (priority of notations,
interaction between scopes and entries, ...). To be tested further in
live use.
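As an illustration of what this enables, here is a sketch modelled on the
expression-language example later used in the reference manual (the exact
syntax may have been fine-tuned after this commit):
```coq
Inductive Expr := One : Expr | Add : Expr -> Expr -> Expr.

Declare Custom Entry expr.

(* Coercion from constr into the custom entry... *)
Notation "[ e ]" := e (e custom expr at level 2).
(* ...and from the custom entry back into constr. *)
Notation "{ x }" := x (in custom expr at level 0, x constr).

Notation "1" := One (in custom expr at level 0).
Notation "x + y" := (Add x y) (in custom expr at level 2, left associativity).
Notation "( x )" := x (in custom expr at level 0, x at level 2).
(* Idents are natively recognized inside the entry. *)
Notation "x" := x (in custom expr at level 0, x ident).

Check fun e => [ 1 + e + (1 + 1) ].
```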
|
|
Prior to this patch, `Notation_term` contained information about
both parsing and notation interpretation.
We split the notation grammar into a file `parsing/notation_gram` so as
to make `interp/` not depend on parsing structures such as entries.
|
|
|
|
|
|
|
|
|
|
In #6092, `global_reference` was moved to `kernel`. It makes sense to
go further and use the current kernel style for names.
This has a good effect on the dependency graph, as some core modules
don't depend on library anymore.
A question about providing equality for the GlobRef module remains, as
there are two different notions of equality for constants. In that
sense, `KerPair` seems suspicious and at some point it should be
looked at.
|
|
notation to use among several of them"
This reverts commit 9cac9db6446b31294d2413d920db0eaa6dd5d8a6, reversing
changes made to 2f679ec5235257c9fd106c26c15049e04523a307.
|
|
|
|
This commit was motivated by true spurious conversions arising in my
`to_constr` debug branch.
The changes here need careful review as the tradeoffs are subtle and
a lot of cleanup still remains to be done in `vernac/*`.
We have opted to penalize [minimally] the few users coming from true
`Constr`-land, but I am sure we can tweak the code in a much better way.
In particular, it is not clear whether internalization should take an
`evar_map` even in the cases where it is not triggered; see the
changes under `plugins` for a good example.
Also, the new return type of `Pretyping.understand` should undergo
careful review.
We don't touch `Impargs` as it is not clear how to proceed; however,
the current type of `compute_implicits_gen` looks very suspicious as
it is often called with free evars.
Some TODOs are:
- impargs was calling whd_all; the EConstr equivalent can be either
+ Reductionops.whd_all [which does refolding and no sharing]
+ Reductionops.clos_whd_flags with all as a flag.
|
|
|
|
Concretely, we provide "constr as ident", "constr as strict pattern"
and "constr as pattern".
This tells Coq to parse a binder as a constr, restricting it to only an
ident, to only a strict pattern, or to a pattern which can also be an
ident.
The "strict pattern" modifier makes it possible to restrict the use of
patterns in printing rules. This allows e.g. selecting the appropriate
printing rule between {x|P} and {'pat|P}.
|
|
- Avoid a dummy use of unit
- Do not decide the default level for pattern as early as parsing time
- Prepare for further extensions
|
|
We generalize the possibility to refer to a notation not only by its
"_ U _" form but also by its "a 'U' b" form.
(Wish from EJGA)
|
|
|
|
This allows, in particular, defining notations with 'pat style binders.
E.g.:
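a notation in the style of the stdlib's `{ 'pat | P }` notation for `sig`
(shown here as a sketch; the actual stdlib declaration may differ slightly):
```coq
Notation "{ ' pat | P }" := (sig (fun pat => P)) : type_scope.

Check { '(x, y) | x = 0 /\ y = 0 }%type.
```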
A non-trivial change in this commit is storing binders and patterns
separately from terms.
This is not strictly necessary but has some advantages.
However, it is relatively common to have binders also used as terms,
or binders parsed as terms. Thus, it is already relatively common to
embed binders into terms (see e.g. the notation for ETA in output test
Notations3.v) or to coerce terms to idents (see e.g. the notation for
{x|P} where x is parsed as a constr).
So, it is just as simple to always store idents (and possibly patterns)
as terms.
|
|
This makes the treatment of recursive binders closer to that of
recursive terms. It is unclear whether there are interesting notations
liable to use this, but it should make it easier to mix recursive
binders and recursive terms, as in the next commits.
Example of (artificial) notation that this commit supports:
Notation "! x .. y # A #" :=
(.. (A,(forall x, True)) ..,(forall y, True))
(at level 200, x binder).
|
|
See discussion on coq-club starting on 23 August 2016.
An open question: what priority to give to "abbreviations"?
|
|
We do this up to `Term`, which is the main bulk of the changes.
|
|
This will allow merging `Names` back with `API.Names`.
|
|
|
|
The internal detype function takes an additional argument dictating
whether it should be eager or lazy.
We introduce a new type of delayed `DAst.t` AST nodes and use it for
`glob_constr`.
Such a type, instead of only containing a value, can also contain a
lazy computation. We use a GADT to discriminate between both uses
statically, so that no delayed terms ever happen to be
marshalled (which would raise anomalies).
We also fix a regression in the test-suite:
Mixing laziness and effects is a well-known hell. Here, an exception
that was raised for mere control purpose was delayed and raised at a
later time as an anomaly. We make the offending function eager.
|
|
This allows issuing a more appropriate message when a notation with a
{ } cannot be defined because of an incompatible level. E.g.:
Notation "{ A } + B" := (sumbool A B) (at level 20).
|
|
- Formerly, notations such as "{ A } + { B }" were typically split
into "{ _ }" and "_ + _". We keep the split only for parsing, which
is where it is really needed, but no longer for interpretation or
printing.
- As a consequence, one notation string can give rise to several
grammar entries, but still only one printing entry.
- As another consequence, "{ A } + { B }" and "A + { B }" must be
reserved in order to be used (a sketch follows this list), which is
after all the natural expectation, even if the sublevels are constrained.
- We also now keep the information "is ident", "is binder" in the
"key" characterizing the level of a notation.
|
|
|
|
|
|
|
|
|
|
This new function is similar to declare_numeral_interpreter,
but works at a lower level: instead of bigint, numbers are
represented as the string of their decimal digits (plus a boolean sign).
|
|
This string contains the base-10 representation of the number (big endian).
Note that some inner parsing code still uses bigints; see egramcoq.ml.
|
|
|
|
As per https://github.com/coq/coq/pull/716#issuecomment-305140839
Partially using
```bash
git grep --name-only 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*[^\.!]")' | xargs sed s'/\(anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*\s*[^\.! ]\)\s*")/\1.")/g' -i
```
and
```bash
git grep --name-only ' !"' | xargs sed s'/ !"/!"/g' -i
```
The rest were manually edited by looking at the results of
```bash
git grep anomaly | grep '\.ml' | grep -v 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp\.\)\?(\(\(Pp.\)\?str\)\?\s*".*\(\.\|!\)")' | grep 'anomaly\($\|[^_]\)' | less
```
|
|
|
|
|
|
The type `raw_cases_pattern_expr` is used only in the interpretation
of notation patterns. Indeed, this should be a private type, thus we
make it local to `Constrintern`; it makes no sense to expose it in the
public AST.
The patch is routine, except for the case used to interpret primitives
in patterns. We now return a `glob_constr` representing the raw
pattern, instead of the private raw pattern type. This could be
refactored further, but we have opted to be conservative here.
This patch is a refinement of b2953849b999d1c3b42c0f494b234f2a93ac7754;
see the comments there for more information about
`raw_cases_pattern_expr`.
|
|
|
|
|
|
Following @gasche's idea, and the original intention of #402, we switch
the main parsing AST of Coq from `'a Loc.located` to `'a CAst.ast`,
which is private and record-based.
This provides significantly clearer code for the AST, and is robust
wrt attributes.
|
|
Now it is a private field; locations are optional.
|
|
|
|
We continue the uniformization pass. No big news here, trying to be
minimally invasive.
|