|
We store bound variable names instead of functions for both the branches and the
predicate, and we furthermore add the parameters to the node. Let-bindings
are not taken into account and require an environment lookup for retrieval.
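A hedged toy sketch of the shape change described above (hypothetical types, not Coq's actual `Constr` constructors):
```ocaml
(* Toy sketch, not Coq's Constr: before, the predicate and branches of a
   case node were stored as functions (terms starting with the right number
   of lambdas); after, only the bound variable names and the body are kept,
   and the parameters of the inductive are carried in the node, so the types
   of the bound variables (and any let-bindings) must be recovered from the
   environment. *)
type name = Name of string | Anonymous
type constr = Dummy  (* stand-in for actual terms *)

type case_old = {
  predicate    : constr;        (* fun params indices -> ... *)
  old_branches : constr array;  (* fun branch_args -> ... *)
}

type case_new = {
  params     : constr array;                (* parameters of the inductive *)
  pred_names : name array;                  (* names bound by the predicate *)
  pred_body  : constr;
  branches   : (name array * constr) array; (* bound names + branch body *)
}
```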
|
|
This allows us to quickly spot the parts of the code that rely on the canonical
ordering. When possible, we directly introduce the quotient-aware versions.
|
|
Persistent arrays expose a functional interface but are implemented
using an imperative data structure. The OCaml implementation is based on
Jean-Christophe Filliâtre's.
Co-authored-by: Benjamin Grégoire <Benjamin.Gregoire@inria.fr>
Co-authored-by: Gaëtan Gilbert <gaetan.gilbert@skyskimmer.net>
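For readers unfamiliar with the technique, here is a minimal sketch of Filliâtre-style persistent arrays (a toy version, not the code added by this commit): the current version holds the backing array directly, older versions are chains of diffs, and rerooting re-points the queried version at the array so access on the latest version stays cheap.
```ocaml
type 'a t = 'a data ref
and 'a data =
  | Array of 'a array          (* this version owns the backing array *)
  | Diff of int * 'a * 'a t    (* differs from the target version at index i *)

let make n v = ref (Array (Array.make n v))

(* Make [t] the version that owns the array, reversing the diff chain. *)
let rec reroot t = match !t with
  | Array _ -> ()
  | Diff (i, v, t') ->
    reroot t';
    (match !t' with
     | Array a ->
       let old = a.(i) in
       a.(i) <- v;
       t := Array a;
       t' := Diff (i, old, t)
     | Diff _ -> assert false)

let get t i =
  reroot t;
  match !t with Array a -> a.(i) | Diff _ -> assert false

let set t i v =
  reroot t;
  match !t with
  | Diff _ -> assert false
  | Array a ->
    let old = a.(i) in
    if old == v then t
    else begin
      a.(i) <- v;
      let res = ref (Array a) in
      t := Diff (i, old, res);
      res
    end
```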
|
|
|
|
This is extracted from #9710, where we need the environment anyway to compute
iota rules on inductive types with let-bindings. The commit is self-contained,
so I think it could go in directly, saving me a few rebases.
Furthermore, this is also related to #11707. Assuming we split cbn from the
other reduction machine, this allows merging the "local" machine with
the general one, since after this PR they will have the same type. One less
reduction machine should make people happy.
|
|
We use a more robust implementation that does not assume that the type
of the inductive is in ζ-normal form. This code path is not exercised
because, due to the kernel typing algorithm, let-bindings in the type
of a recursor are expanded away.
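To make the ζ-normal-form assumption concrete, here is a toy zeta-normalizer on a small term language (purely illustrative, assuming all binder names are distinct so no capture-avoidance is needed; this is not the kernel's code):
```ocaml
type term =
  | Var of string
  | Lam of string * term
  | App of term * term
  | Let of string * term * term   (* let x := d in b *)

(* Naive substitution; assumes binder names are globally distinct. *)
let rec subst x v = function
  | Var y -> if x = y then v else Var y
  | Lam (y, b) -> Lam (y, subst x v b)
  | App (f, a) -> App (subst x v f, subst x v a)
  | Let (y, d, b) -> Let (y, subst x v d, subst x v b)

(* zeta reduction: expand every let-binding away. A term with no Let node
   left is in zeta-normal form, which is what the old code assumed of the
   recursor's type. *)
let rec zeta_normalize = function
  | Var _ as t -> t
  | Lam (x, b) -> Lam (x, zeta_normalize b)
  | App (f, a) -> App (zeta_normalize f, zeta_normalize a)
  | Let (x, d, b) -> zeta_normalize (subst x (zeta_normalize d) b)
```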
|
|
Add headers to a few files which were missing them.
|
|
We also remove trailing whitespace.
Script used:
```bash
for i in `find . -name '*.ml' -or -name '*.mli' -or -name '*.mlg'`; do expand -i "$i" | sponge "$i"; sed -e's/[[:space:]]*$//' -i.bak "$i"; done
```
|
|
If you have access to a kernel name you should also have the
environment in which it is defined, barring hacks. In order to
discourage hacks, we make the standard lookups raise anomalies, so that
people are forced to admit they rely on the internals of the
environment.
We find that hackers operated on the code for side effects, for
finding inductive schemes, for simpl, and for Print Assumptions. They
also attempted to operate on funind, but the error-handling code they wrote
would have raised another Not_found instead of being useful.
All these uses are indeed hacky, so I am satisfied that we are not
forcing new hacks on callers.
|
|
Not pretty, but it had to be done some day, as `Globnames` seems to be
on the way out.
I have taken the opportunity to reduce the number of `open`s in the
codebase.
The qualified style would indeed allow us to use slightly nicer names,
`GlobRef.Inductive` instead of `IndRef`, etc.; once we have the
tooling to do large-scale refactoring, that could be tried.
|
|
|
|
|
|
|
|
As per the OCaml documentation, the order used for maps should be total.
I also remove some contortions that were added around eliminators
and canonical names because of this incorrect order.
|
|
Kernel should be mostly correct, higher levels do random stuff at
times.
|
|
Note that it is currently impossible to define inductives in SProp, because
indtypes.ml and the pretyper aren't fully plugged together.
|
|
|
|
In general, `Nametab` is not a module you want to open globally as it
exposes very generic identifiers such as `push` or `global`.
Thus, we remove all global opens and qualify `Nametab` accesses. The
patch is small and confirms the hypothesis that `Nametab` access
happens in few places, so it doesn't need a global open.
It is also very convenient to be able to use `grep` to see accesses to
the namespace table.
|
|
All the `evar_map` APIs were deprecated in 8.9, so we deprecate the
combinators to discourage this style of programming.
A few places still use the imperative style, but they are pretty
localized and should be cleaned up separately.
As these are the last bits of the `e_` API remaining, this PR closes #6342.
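As an illustration of the two styles, here is a self-contained toy (hypothetical `hole`/`evar_map` stand-ins, not Coq's API) contrasting the deprecated ref-based combinator style with explicit threading of the map:
```ocaml
(* Toy stand-ins so the example compiles on its own. *)
type evar_map = int                    (* counts the holes created so far *)
type hole = Hole of int

let new_hole (sigma : evar_map) : evar_map * hole = (sigma + 1, Hole sigma)

(* Deprecated imperative style: the evar map lives in a ref that every
   helper mutates behind the caller's back. *)
let e_new_hole (sigma : evar_map ref) : hole =
  let sigma', h = new_hole !sigma in
  sigma := sigma';
  h

let build_pair_imperative (sigma : evar_map ref) =
  let a = e_new_hole sigma in
  let b = e_new_hole sigma in
  (a, b)

(* Preferred state-passing style: each step returns the updated map and the
   caller threads it explicitly to the next step. *)
let build_pair (sigma : evar_map) =
  let sigma, a = new_hole sigma in
  let sigma, b = new_hole sigma in
  (sigma, (a, b))
```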
|
|
We remove section paths from kernel names. This is a cleanup, as most of the time this information was unused. It implies a change in the kernel API and small user-visible changes with regard to tactic qualification. In particular, the removal of "global discharge" implies a large cleanup of code.
Additionally, the change implies that some machinery in `library` and `safe_typing` must now take an `~in_section` parameter, so as to indicate whether a section is open or not.
|
|
This shall eventually allow calling Himsg at any point of the
execution, independently of the exact current global environment.
|
|
The upper layers still need a mapping constant -> projection, which is
provided by Recordops.
|
|
(Universes and Evd)
|
|
Currently, if one of the inductives is non-recursive, it defaults to a
case analysis scheme taking fewer predicates and methods just for that
inductive. This irregularity prevents building a combined scheme afterwards
to gather all eliminators into one, as the combined scheme expects all the
eliminators to have the same predicates and methods. I have a use case
in building function graphs in Equations where some of the inductives
might not be recursive, but I expect many other use cases could exist.
|
|
We address the easy ones, but they should probably all be removed.
|
|
|
|
|
|
|
|
We do this up to `Term`, which is the main bulk of the changes.
|
|
This will allow merging `Names` back with `API.Names`.
|
|
|
|
Reminder of (some of) the reasons for removal:
- Despite the claim in sigma.mli, it does *not* prevent evar
leaks, something like:
    fun env evd ->
      let (evd', ev) = new_evar env evd in
      (evd, ev)
will typecheck even with Sigma-like type annotations (with a proof of
reflexivity)
- The API stayed embryonic. Even typing functions were not ported to
Sigma.
- Some unsafe combinators (Unsafe.tclEVARS) were replaced with slightly
less unsafe ones (e.g. s_enter), but those were not marked unsafe
at all (despite still being so).
- There was no good story for higher-order functions manipulating evar
maps. Without higher-order functions, one can most of the time get away with
reusing the same name for the updated evar map.
- Most of the code doing complex things with evar maps was using unsafe
casts to sigma. This code should be fixed, but this is an orthogonal
issue.
Of course, this was showing a nice and elegant use of GADTs, but the
cost/benefit ratio in practice did not seem good.
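A simplified phantom-typed toy (not the GADT-based `Sigma` API) that makes the first point concrete: the index only names the map a value came from, so returning the stale map typechecks just as well as returning the updated one.
```ocaml
(* Hypothetical phantom-indexed state, for illustration only. *)
type 'r evar_map = EvarMap of int
type ('a, 'r) sigma = Sigma of 'a * 'r evar_map

(* Pretend to create a fresh evar while advancing the map. *)
let new_evar (EvarMap n : 'r evar_map) : (int, 'r) sigma =
  Sigma (n, EvarMap (n + 1))

(* Correct: return the evar together with the updated map. *)
let correct (evd : 'r evar_map) : (int, 'r) sigma =
  let Sigma (ev, evd') = new_evar evd in
  Sigma (ev, evd')

(* Leaky: return the evar with the *old* map. Same type, still typechecks. *)
let leaky (evd : 'r evar_map) : (int, 'r) sigma =
  let Sigma (ev, _evd') = new_evar evd in
  Sigma (ev, evd)
```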
|
|
As per https://github.com/coq/coq/pull/716#issuecomment-305140839
Partially using
```bash
git grep --name-only 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*[^\.!]")' | xargs sed s'/\(anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*\s*[^\.! ]\)\s*")/\1.")/g' -i
```
and
```bash
git grep --name-only ' !"' | xargs sed s'/ !"/!"/g' -i
```
The rest were manually edited by looking at the results of
```bash
git grep anomaly | grep '\.ml' | grep -v 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp\.\)\?(\(\(Pp.\)\?str\)\?\s*".*\(\.\|!\)")' | grep 'anomaly\($\|[^_]\)' | less
```
|
|
Incidentally, this fixes a printing bug in output/inference.v where the
displayed name of an evar was the wrong one because its type was not
evar-expanded enough.
|
|
This allows decoupling the notion of a context containing kernel
terms from that of a context containing tactic-level terms.
|
|
|
|
|
|
|
|
Suggested by @ppedrot
|
|
As noted by @ppedrot, the first is redundant. The patch is basically a renaming.
We didn't make the component optional yet, but this could happen in a
future patch.
|
|
|
|
|
|
Now scheme will not try to build ill-typed dependent analyses
on recursive records with primitive projections, but will report
a proper error instead.
Minor change to the API (adding one error case to recursion_scheme_error).
|
|
module)
For the moment, there is an Error module in compilers-lib/ocamlbytecomp.cm(x)a
|
|
This is a reimplementation of Hugo's PR#117.
We are trying to address the problem that the names of some reduction functions
did not say what they were doing (e.g. whd_betadeltaiota was also doing let-in
reduction). Like PR#117, we are careful that no function changed semantics
without also changing its name. Porting existing ML code should be a matter of
renaming a few function calls.
Also, we introduce more precise reduction flags fMATCH, fFIX, fCOFIX,
collectively denominated iota (a toy sketch at the end of this entry
illustrates how the split flags compose).
We renamed the following functions:
Closure.betadeltaiota -> Closure.all
Closure.betadeltaiotanolet -> Closure.allnolet
Reductionops.beta -> Closure.beta
Reductionops.zeta -> Closure.zeta
Reductionops.betaiota -> Closure.betaiota
Reductionops.betaiotazeta -> Closure.betaiotazeta
Reductionops.delta -> Closure.delta
Reductionops.betalet -> Closure.betazeta
Reductionops.betadelta -> Closure.betadeltazeta
Reductionops.betadeltaiota -> Closure.all
Reductionops.betadeltaiotanolet -> Closure.allnolet
Closure.no_red -> Closure.nored
Reductionops.nored -> Closure.nored
Reductionops.nf_betadeltaiota -> Reductionops.nf_all
Reductionops.whd_betadelta -> Reductionops.whd_betadeltazeta
Reductionops.whd_betadeltaiota -> Reductionops.whd_all
Reductionops.whd_betadeltaiota_nolet -> Reductionops.whd_allnolet
Reductionops.whd_betadelta_stack -> Reductionops.whd_betadeltazeta_stack
Reductionops.whd_betadeltaiota_stack -> Reductionops.whd_all_stack
Reductionops.whd_betadeltaiota_nolet_stack -> Reductionops.whd_allnolet_stack
Reductionops.whd_betadelta_state -> Reductionops.whd_betadeltazeta_state
Reductionops.whd_betadeltaiota_state -> Reductionops.whd_all_state
Reductionops.whd_betadeltaiota_nolet_state -> Reductionops.whd_allnolet_state
Reductionops.whd_eta -> Reductionops.shrink_eta
Tacmach.pf_whd_betadeltaiota -> Tacmach.pf_whd_all
Tacmach.New.pf_whd_betadeltaiota -> Tacmach.New.pf_whd_all
And removed the following ones:
Reductionops.whd_betaetalet
Reductionops.whd_betaetalet_stack
Reductionops.whd_betaetalet_state
Reductionops.whd_betadeltaeta_stack
Reductionops.whd_betadeltaeta_state
Reductionops.whd_betadeltaeta
Reductionops.whd_betadeltaiotaeta_stack
Reductionops.whd_betadeltaiotaeta_state
Reductionops.whd_betadeltaiotaeta
They were unused, and having some reduction functions perform eta is confusing,
as whd_all and nf_all don't do it.
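The toy sketch referenced above: a self-contained model (not the actual `Closure` module) of how the split iota flags compose into the renamed flag sets.
```ocaml
type red_kind = FBETA | FDELTA | FZETA | FMATCH | FFIX | FCOFIX

module KindSet = Set.Make (struct type t = red_kind let compare = compare end)
type reds = KindSet.t

let mkflags : red_kind list -> reds = KindSet.of_list
let red_set (flags : reds) (k : red_kind) = KindSet.mem k flags

(* "iota" is no longer a primitive flag, just shorthand for the three
   case/fixpoint kinds. *)
let iota = [FMATCH; FFIX; FCOFIX]

let all      = mkflags ([FBETA; FDELTA; FZETA] @ iota)  (* ex betadeltaiota *)
let allnolet = mkflags ([FBETA; FDELTA] @ iota)         (* ex betadeltaiotanolet *)

let () =
  assert (red_set all FZETA);
  assert (not (red_set allnolet FZETA))
```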
|
|
On the user side, coqtop and coqc take a list of warning names or categories
after -w. No prefix means activate the warning, a "-" prefix means deactivate
it, and "+" means turn the warning into an error. Special categories include
"all", and "default" which contains the warnings enabled by default.
We also provide a vernacular Set Warnings which takes the same flags as argument.
Note that coqc now prints warnings.
The name and category of a warning are printed with the warning itself.
On the developer side, Feedback.msg_warning is still accessible, but the
recommended way to print a warning is in two steps:
1) create it by:
     let warn_my_warning =
       CWarnings.create ~name:"my-warning" ~category:"my-category"
         (fun args -> Pp.strbrk ...)
2) print it by:
     warn_my_warning args
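A concrete instance of the two-step pattern above, with made-up warning name, category, and message (only `CWarnings.create` and `Pp.strbrk` are taken from the text above):
```ocaml
(* Step 1: declare the warning once, at toplevel. *)
let warn_deprecated_feature =
  CWarnings.create ~name:"deprecated-feature" ~category:"deprecated"
    (fun id -> Pp.strbrk ("Feature " ^ id ^ " is deprecated."))

(* Step 2: emit it wherever the condition is detected. *)
let check_feature id =
  if id = "old_feature" then warn_deprecated_feature id
```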
|
|
This patch splits pretty printing representation from IO operations.
- `Pp` is kept in charge of the abstract pretty printing representation.
- The `Feedback` module provides the interface for doing printing IO.
The patch continues work initiated for 8.5 and has the following effects:
- The following functions in `Pp`: `pp`, `ppnl`, `pperr`, `pperrnl`,
`pperr_flush`, `pp_flush`, `flush_all`, `msg`, `msgnl`, `msgerr`,
`msgerrnl`, `message` are removed. `Feedback.msg_*` functions must be
used instead.
- Feedback provides different backends to handle output; currently,
`stdout`, `emacs`, and CoqIDE backends are provided.
- Clients cannot specify flush policy anymore, thus `pp_flush` et al are
gone.
- `Feedback.feedback` takes an `edit_or_state_id` instead of the old
mix.
Lightly tested: Test-suite passes, Proof General and CoqIDE seem to work.
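A small before/after sketch of the migration (the exact `Feedback.msg_*` signatures are assumed from memory; treat the names as illustrative):
```ocaml
(* Before: direct Pp IO, now removed.
     msgnl (str "All done");
     pp_flush ()
   After: go through Feedback, which routes the message to the active
   backend (stdout, emacs, CoqIDE, ...) and leaves flushing to it. *)
let report_done () =
  Feedback.msg_info Pp.(str "All" ++ spc () ++ str "done")
```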
|
|
|
|
Disclaimer: I have no idea what I am doing.
|
|
Originally, rel-context was represented as:
    Context.rel_context = Names.Name.t * Constr.t option * Constr.t
Now it is represented as:
    Context.Rel.t = LocalAssum of Names.Name.t * Constr.t
                  | LocalDef of Names.Name.t * Constr.t * Constr.t
Originally, named-context was represented as:
    Context.named_context = Names.Id.t * Constr.t option * Constr.t
Now it is represented as:
    Context.Named.t = LocalAssum of Names.Id.t * Constr.t
                    | LocalDef of Names.Id.t * Constr.t * Constr.t
Motivation:
(1) In "tactics/hipattern.ml4" file we define "test_strict_disjunction"
function which looked like this:
let test_strict_disjunction n lc =
Array.for_all_i (fun i c ->
match (prod_assum (snd (decompose_prod_n_assum n c))) with
| [_,None,c] -> isRel c && Int.equal (destRel c) (n - i)
| _ -> false) 0 lc
Suppose that you do not know about rel-context and named-context.
(that is the case of people who just started to read the source code)
Merlin would tell you that the type of the value you are destructing
by "match" is:
    'a * 'b option * Constr.t                    (* worst-case scenario *)
or
    Names.Name.t * Constr.t option * Constr.t    (* best-case scenario (?) *)
To me, this is akin to wearing an opaque veil.
It is hard to figure out the meaning of the values you are looking at.
In particular, it is hard to discover the connection between the value
we are destructing above and the datatypes and functions defined
in the "kernel/context.ml" file.
In this case, the connection is there, but it is not visible
(between the function above and the "Context" module).
------------------------------------------------------------------------
Now consider what happens when the reader sees the same function
presented in the following form:
    let test_strict_disjunction n lc =
      Array.for_all_i (fun i c ->
        match (prod_assum (snd (decompose_prod_n_assum n c))) with
        | [LocalAssum (_,c)] -> isRel c && Int.equal (destRel c) (n - i)
        | _ -> false) 0 lc
If the reader hasn't seen "LocalAssum" before, (s)he can use Merlin
to jump to the corresponding definition and learn more.
In this case, the connection is there, and it is directly visible
(between the function above and the "Context" module).
(2) Also, if we already have concepts such as:
- local declaration
- local assumption
- local definition
and we describe these notions meticulously in the Reference Manual,
then it is a real pity not to reinforce the connection
of the actual code with the abstract description we published.
|