| Age | Commit message | Author |
|
Add headers to a few files which were missing them.
|
|
|
|
The previous system was from before globref was in the kernel.
|
|
We also remove trailing whitespace.
Script used:
```bash
for i in `find . -name '*.ml' -or -name '*.mli' -or -name '*.mlg'`; do expand -i "$i" | sponge "$i"; sed -e's/[[:space:]]*$//' -i.bak "$i"; done
```
|
|
|
|
Replace `option comparison` with `float_comparison` (:= `FEq | FLt |
FGt | FNotComparable`) as suggested by Guillaume Melquiond to avoid
boxing and an extra match when using primitive float comparison.
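For reference, here is a minimal OCaml sketch of such a dedicated variant type, based only on the constructors named above (the exact kernel declaration may differ):
```ocaml
(* Minimal sketch of a dedicated comparison type, using the constructors
   named above; the exact kernel declaration may differ. *)
type float_comparison = FEq | FLt | FGt | FNotComparable

(* Unlike an optional comparison, this type is an immediate value
   (no boxing) and is matched in a single step. *)
let classify (x : float) (y : float) : float_comparison =
  if x < y then FLt
  else if x > y then FGt
  else if x = y then FEq
  else FNotComparable  (* at least one operand is a NaN *)
```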
|
|
Beware of 0. = -0. issue for primitive floats
IEEE 754 specifies that 0. and -0. compare equal, but this cannot be said
to hold for Leibniz equality.
Therefore we must patch the equality and the total comparison inside the
kernel to prevent inconsistency.
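As an illustration of the issue (not kernel code): OCaml's float equality identifies 0. and -0., yet the two values are observably different, so a total comparison used by the kernel must distinguish them, for example through the bit representation.
```ocaml
(* 0. and -0. are "equal" for IEEE/OCaml float comparison... *)
let () = assert (0. = -0.)

(* ...yet they are observably different values: *)
let () = assert (1. /. 0. = infinity)
let () = assert (1. /. (-0.) = neg_infinity)

(* A total comparison that separates them can go through the bit
   representation (illustrative sketch, not the kernel's code): *)
let total_compare (x : float) (y : float) : int =
  Int64.compare (Int64.bits_of_float x) (Int64.bits_of_float y)

let () = assert (total_compare 0. (-0.) <> 0)
```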
|
|
Rather than in typeops
|
|
|
|
This function seems unused.
|
|
Reviewed-by: ppedrot
|
|
Prevent errors when under-annotating binders.
|
|
The kernel should be mostly correct; higher levels do random stuff at
times.
|
|
Note that it is currently impossible to define inductives in SProp because
indtypes.ml and the pretyper aren't fully plugged in.
|
|
|
|
|
|
|
|
This work makes it possible to take advantage of a compact
representation for integers in the entire system, as opposed to only
in some reduction machines. It is useful for heavily computational
applications, where even constructing terms is not possible without such
a representation.
Concretely, it replaces part of the retroknowledge machinery with
a primitive construction for integers in terms, and introduces a kind of
FFI which maps constants to operators (on integers). Properties of these
operators are expressed as explicit axioms, whereas they were hidden in
the retroknowledge-based approach.
This has been presented at the Coq workshop and some Coq Working Groups,
and has been used by various groups for SMT trace checking,
computational analysis, etc.
Contributions by Guillaume Bertholon and Pierre Roux <Pierre.Roux@onera.fr>
Co-authored-by: Benjamin Grégoire <Benjamin.Gregoire@inria.fr>
Co-authored-by: Vincent Laporte <Vincent.Laporte@fondation-inria.fr>
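As a purely schematic illustration of the idea (hypothetical names, not the actual kernel interface): a table maps registered constants to machine operators on primitive integers, which are then interpreted directly in OCaml rather than by unfolding Coq definitions.
```ocaml
(* Schematic sketch only; names and types are stand-ins, not the kernel API. *)
type prim_int = Uint63 of int          (* stand-in for a 63-bit integer *)

type prim_op = Add | Sub | Mul | Land | Lor

(* The "FFI": each registered constant is interpreted by an OCaml
   operator instead of by unfolding a Coq definition. *)
let interp (op : prim_op) (Uint63 x) (Uint63 y) : prim_int =
  match op with
  | Add -> Uint63 (x + y)
  | Sub -> Uint63 (x - y)
  | Mul -> Uint63 (x * y)
  | Land -> Uint63 (x land y)
  | Lor -> Uint63 (x lor y)

(* Registration table; in the real kernel these are Coq constant names,
   here just illustrative strings.  Properties of the operators are
   stated as explicit axioms on the Coq side. *)
let primitive_table : (string * prim_op) list =
  [ "add", Add; "sub", Sub; "mul", Mul; "land", Land; "lor", Lor ]
```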
|
|
|
|
The use of a term is not needed for the fast typing algorithm of the
application case, so this tweak brings the best of both worlds.
|
|
|
|
|
|
entries
|
|
|
|
It is purely functional, so there is no need for it to be in Global now that
GlobRef.t is in the kernel.
|
|
It is purely functional, so there is no need for it to be in Global now that
GlobRef.t is in the kernel.
Also updated the comments.
|
|
It's basically an occur check, so it makes sense to put it in Vars.
|
|
This is a partial resurrection of #6423 but only for the kernel.
IMHO, we pay a bit of a price for this, but it is a good safety
measure.
Only warnings 4 ("fragile pattern matching") and 44 ("open hides a type")
are disabled.
We would like to enable 44 for sure once we do some alias cleanup.
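For illustration only (the actual change lives in the build flags): in OCaml, warnings 4 and 44 can be disabled while keeping the others enabled, either via the `-w` flag or a file-level attribute.
```ocaml
(* Illustrative only: enable all warnings except 4 ("fragile pattern
   matching") and 44 (shadowing via open) for a compilation unit.
   The equivalent command-line flag would be: -w +a-4-44 *)
[@@@ocaml.warning "+a-4-44"]
```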
|
|
Keep the universe_levels_of_constr function inside typeops, not
exported.
|
|
The upper layers still need a mapping constant -> projection, which is
provided by Recordops.
|
|
|
|
This is a first step towards the acceptance of mutual record types in the
kernel.
|
|
Instead of having the projection data in the constant data, we have it
independently in the environment.
|
|
|
|
|
|
New module introduced in OCaml 4.05, I think; it can create problems when
linking with the OCaml toplevel for `Drop`.
|
|
We do up to `Term`, which is the main bulk of the changes.
|
|
This will allow merging `Names` back with `API.Names`.
|
|
|
|
The use of template polymorphism in constants was quite limited, as it
only applied to definitions that were exactly inductive types without any
parameter whatsoever. Furthermore, it seems that following the introduction
of polymorphic definitions, the code path enforced regular polymorphism as
soon as the type of a definition was given, which was in practice almost
always the case.
Removing this feature had no observable effect on the test-suite, nor on any
development that we monitor on Travis. I believe it is safe to assume it is
nowadays useless.
|
|
|
|
|
|
As per https://github.com/coq/coq/pull/716#issuecomment-305140839
Partially using
```bash
git grep --name-only 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*[^\.!]")' | xargs sed s'/\(anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*\s*[^\.! ]\)\s*")/\1.")/g' -i
```
and
```bash
git grep --name-only ' !"' | xargs sed s'/ !"/!"/g' -i
```
The rest were manually edited by looking at the results of
```bash
git grep anomaly | grep '\.ml' | grep -v 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp\.\)\?(\(\(Pp.\)\?str\)\?\s*".*\(\.\|!\)")' | grep 'anomaly\($\|[^_]\)' | less
```
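For a concrete (hypothetical) call site, the rewrite performed by the scripts above looks like this; the stand-in definitions only make the snippet self-contained, the real functions being CErrors.anomaly and Pp.str:
```ocaml
(* Stand-ins so the snippet is self-contained; in Coq the real functions
   are CErrors.anomaly and Pp.str. *)
let str (s : string) : string = s
let anomaly ?(label = "") (msg : string) = failwith (label ^ ": " ^ msg)

(* Before the scripted rewrite: no final period, stray space before "!". *)
let _old () = anomaly ~label:"check_guard" (str "not enough abstractions !")

(* After: the message ends with "!" (or ".") with no space before it. *)
let _new () = anomaly ~label:"check_guard" (str "not enough abstractions!")
```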
|
|
Also remove obvious comments.
|
|
This is really [mv fast_typeops.ml{,i} typeops.ml{,i}] plus trivial
changes in the other files; the real changes are in the parent commit.
|
|
Mainly concerning references to the "Context.{Rel,Named}.get_{id,value,type}" functions.
If multiple modules define a function with the same name, e.g.
Context.{Rel,Named}.get_type,
those calls were qualified with the corresponding module prefix
to make it obvious which function is being called.
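A schematic example of this qualification policy, using the module paths as named above with stand-in types so the snippet is self-contained (illustrative, not actual Coq code):
```ocaml
(* Stand-in modules mirroring the paths named above. *)
module Context = struct
  module Rel = struct
    type declaration = { name : string; typ : string }
    let get_type (d : declaration) = d.typ
  end
  module Named = struct
    type declaration = { id : string; typ : string }
    let get_type (d : declaration) = d.typ
  end
end

(* Ambiguous: with both modules opened, it is unclear which [get_type]
   is meant, and one silently shadows the other.
     open Context.Rel
     open Context.Named
     let ty d = get_type d                                               *)

(* Qualified, as done in the commit: the intent is explicit. *)
let rel_ty d = Context.Rel.get_type d
let named_ty d = Context.Named.get_type d
```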
|
|
module)
For the moment, there is an Error module in compilers-lib/ocamlbytecomp.cm(x)a
|
|
This is a reimplementation of Hugo's PR#117.
We are trying to address the problem that the name of some reduction functions
was not saying what they were doing (e.g. whd_betadeltaiota was doing let-in
reduction). Like PR#117, we are careful that no function changed semantics
without changing its name. Porting existing ML code should be a matter of
renaming a few function calls.
Also, we introduce more precise reduction flags fMATCH, fFIX, fCOFIX
collectively denominated iota.
We renamed the following functions:
Closure.betadeltaiota -> Closure.all
Closure.betadeltaiotanolet -> Closure.allnolet
Reductionops.beta -> Closure.beta
Reductionops.zeta -> Closure.zeta
Reductionops.betaiota -> Closure.betaiota
Reductionops.betaiotazeta -> Closure.betaiotazeta
Reductionops.delta -> Closure.delta
Reductionops.betalet -> Closure.betazeta
Reductionops.betadelta -> Closure.betadeltazeta
Reductionops.betadeltaiota -> Closure.all
Reductionops.betadeltaiotanolet -> Closure.allnolet
Closure.no_red -> Closure.nored
Reductionops.nored -> Closure.nored
Reductionops.nf_betadeltaiota -> Reductionops.nf_all
Reductionops.whd_betadelta -> Reductionops.whd_betadeltazeta
Reductionops.whd_betadeltaiota -> Reductionops.whd_all
Reductionops.whd_betadeltaiota_nolet -> Reductionops.whd_allnolet
Reductionops.whd_betadelta_stack -> Reductionops.whd_betadeltazeta_stack
Reductionops.whd_betadeltaiota_stack -> Reductionops.whd_all_stack
Reductionops.whd_betadeltaiota_nolet_stack -> Reductionops.whd_allnolet_stack
Reductionops.whd_betadelta_state -> Reductionops.whd_betadeltazeta_state
Reductionops.whd_betadeltaiota_state -> Reductionops.whd_all_state
Reductionops.whd_betadeltaiota_nolet_state -> Reductionops.whd_allnolet_state
Reductionops.whd_eta -> Reductionops.shrink_eta
Tacmach.pf_whd_betadeltaiota -> Tacmach.pf_whd_all
Tacmach.New.pf_whd_betadeltaiota -> Tacmach.New.pf_whd_all
And removed the following ones:
Reductionops.whd_betaetalet
Reductionops.whd_betaetalet_stack
Reductionops.whd_betaetalet_state
Reductionops.whd_betadeltaeta_stack
Reductionops.whd_betadeltaeta_state
Reductionops.whd_betadeltaeta
Reductionops.whd_betadeltaiotaeta_stack
Reductionops.whd_betadeltaiotaeta_state
Reductionops.whd_betadeltaiotaeta
They were unused, and having some reduction functions perform eta is confusing,
as whd_all and nf_all don't do it.
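To illustrate the kind of porting involved, here is a hedged before/after sketch of a call site following the table above; the signature of the reduction function and the stand-in module exist only to make the snippet self-contained:
```ocaml
(* Stand-in so the snippet compiles; in Coq this lives in Reductionops,
   with an (assumed) type env -> evar_map -> constr -> constr. *)
module Reductionops = struct
  type env
  and evar_map
  and constr
  let whd_all (_ : env) (_ : evar_map) (c : constr) : constr = c
end

(* Before the renaming (the old name did not mention let-in/zeta):
     let hd = Reductionops.whd_betadeltaiota env sigma c
   After, with identical semantics per the table above: *)
let head_normalize env sigma c = Reductionops.whd_all env sigma c
```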
|
|
|
|
This allows a smooth addition of various unsafe flags without wreaking
havoc in the ML codebase.
|