| Age | Commit message | Author |
|
Add headers to a few files which were missing them.
|
|
It was virtually unused except in ssr, and there is no reason to clutter
the kernel with irrelevant code.
Fixes #9390: What is the purpose of the function "kind_of_type"?
|
|
We also remove trailing whitespace.
Script used:
```bash
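# convert leading tabs to spaces in place (via sponge), then strip trailing
# whitespace (sed -i.bak keeps a .bak backup of each file)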
for i in `find . -name '*.ml' -or -name '*.mli' -or -name '*.mlg'`; do expand -i "$i" | sponge "$i"; sed -e's/[[:space:]]*$//' -i.bak "$i"; done
```
|
|
Beware of 0. = -0. issue for primitive floats
IEEE 754 specifies that 0. and -0. compare equal, but this does not hold
for Leibniz equality.
Therefore we must patch the equality and the total comparison inside the
kernel to prevent an inconsistency.
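For illustration, a minimal OCaml sketch of the discrepancy (not part of the patch): IEEE comparison identifies the two zeros, while their bit-level representations differ, which is what a Leibniz-compatible comparison has to track.
```ocaml
(* Minimal sketch, not the kernel's actual code: IEEE equality identifies
   0. and -0., but their bit patterns differ, so the structural equality
   and the total comparison must keep the two zeros apart consistently. *)
let () =
  assert (0. = -0.);                               (* IEEE 754 equality *)
  assert (Int64.bits_of_float 0. <> Int64.bits_of_float (-0.));
  (* a total comparison distinguishing the two zeros via their bits *)
  let total_compare x y =
    Int64.compare (Int64.bits_of_float x) (Int64.bits_of_float y) in
  assert (total_compare (-0.) 0. <> 0)
```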
|
|
|
|
The kernel should be mostly correct; higher levels do random stuff at
times.
|
|
Note that it is currently impossible to define inductives in SProp because
indtypes.ml and the pretyper are not fully plugged in.
|
|
|
|
This work makes it possible to take advantage of a compact
representation for integers in the entire system, as opposed to only
in some reduction machines. It is useful for heavily computational
applications, where even constructing terms is not possible without such
a representation.
Concretely, it replaces part of the retroknowledge machinery with
a primitive construction for integers in terms, and introduces a kind of
FFI which maps constants to operators (on integers). Properties of these
operators are expressed as explicit axioms, whereas they were hidden in
the retroknowledge-based approach.
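As a rough illustration of the idea (a toy sketch with hypothetical names, not the actual kernel datatypes): primitive integers become a leaf of the term representation, and the FFI maps declared constants to machine operators.
```ocaml
(* Toy sketch only: terms carry machine integers directly, and a table
   maps constant names to the native operators they are bound to. *)
type term =
  | Const of string              (* a declared constant, e.g. "int63_add" *)
  | Int of int                   (* primitive integer stored unboxed       *)
  | App of term * term list

(* hypothetical FFI table: constant name -> machine operator *)
let ops = [ "int63_add", ( + ); "int63_mul", ( * ) ]

(* reduce a fully applied primitive operator in one step *)
let reduce_prim t =
  match t with
  | App (Const c, [Int x; Int y]) when List.mem_assoc c ops ->
      Int ((List.assoc c ops) x y)
  | _ -> t
```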
This has been presented at the Coq workshop and some Coq Working Groups,
and has been used by various groups for STM trace checking,
computational analysis, etc.
Contributions by Guillaume Bertholon and Pierre Roux <Pierre.Roux@onera.fr>
Co-authored-by: Benjamin Grégoire <Benjamin.Gregoire@inria.fr>
Co-authored-by: Vincent Laporte <Vincent.Laporte@fondation-inria.fr>
|
|
This is a partial resurrection of #6423 but only for the kernel.
IMHO, we pay a bit of a price for this, but it is a good safety
measure.
Only warnings "4: fragile pattern matching" and "44: open hides a type"
are disabled.
We would like to enable 44 for sure once we do some alias cleanup.
|
|
This shall eventually make it possible to use contexts of declarations in
the definition of the "Case" constructor.
Basically, this means that Constr now includes Context and that the
"t" types of Context which were specialized on constr are now defined
in Constr (unfortunately using a heavy boilerplate).
|
|
|
|
We remove most of what was deprecated in `Term`. Now, `intf` and
`kernel` are almost deprecation-free, though I am not very convinced
about the whole `Term -> Constr` renaming, but I'm afraid there is no
way back.
Inconsistencies with the constructor policy (see #6440) remain across
the code base, and I'm afraid I don't see a plan to reconcile them.
The `Sorts` deprecation is hard to finalize: opening `Sorts` is not a
good idea, as someone added a `List` module inside it.
|
|
|
|
|
|
|
|
We separate functions dealing with VM values (vmvalues.ml) and
interfaces of the bytecode interpreter (vm.ml). Only the former relies
on untyped constructions.
This also makes the VM architecture closer to that of native_compute;
another patch could probably try to share more code between the two for
conversion and reification (not trivial, though).
This is also preliminary work for integers and arrays.
|
|
Was introduced in e8c47b65.
|
|
They don't really bring anything; we also fix some minor nits in
the printing function.
|
|
We mirror the structure of EConstr and move the destructors from `Term`
to `Constr`.
This is a step towards having a single module for `Constr`.
|
|
We do this up to `Term`, which is the main bulk of the changes.
|
|
This will allow merging `Names` back with `API.Names`.
|
|
|
|
|
|
As per https://github.com/coq/coq/pull/716#issuecomment-305140839
Partially using
```bash
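# add a terminating "." to anomaly messages that do not already end with "." or "!"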
git grep --name-only 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*[^\.!]")' | xargs sed s'/\(anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*\s*[^\.! ]\)\s*")/\1.")/g' -i
```
and
```bash
git grep --name-only ' !"' | xargs sed s'/ !"/!"/g' -i
```
The rest were manually edited by looking at the results of
```bash
git grep anomaly | grep '\.ml' | grep -v 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp\.\)\?(\(\(Pp.\)\?str\)\?\s*".*\(\.\|!\)")' | grep 'anomaly\($\|[^_]\)' | less
```
|
|
This is the continuation of #244: we now deprecate `CErrors.error`;
the single entry point in Coq is `user_err`.
The rationale is to allow for easier grepping, and to ease a future
cleanup of error messages. In particular, we would like to
systematically classify all error messages raised by Coq and be sure
they are properly documented.
We restore the two functions removed in #244 to improve compatibility,
but mark them deprecated.
|
|
|
|
|
|
|
|
|
|
Strangely enough, the checker seems to rely on an outdated decompose_app
function which is not the same as the kernel's, the latter being sensitive
to casts. Cast-manipulating functions from the kernel are only used in
upper layers, and were thus moved there.
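A toy sketch of the difference (stand-in types, not the actual kernel or checker code): a cast-sensitive decompose_app stops at a Cast node at the head, while the other variant looks through casts before splitting the application.
```ocaml
(* Toy terms, for illustration only *)
type term =
  | Const of string
  | App of term * term list
  | Cast of term * term                     (* term, ascribed type *)

(* cast-sensitive: whatever the head is, including a Cast, is kept as-is *)
let decompose_app = function
  | App (f, args) -> (f, args)
  | t -> (t, [])

(* cast-insensitive: strip casts around the head before splitting *)
let rec strip_cast = function Cast (c, _) -> strip_cast c | t -> t
let decompose_app_through_casts t =
  match strip_cast t with
  | App (f, args) -> (strip_cast f, args)
  | t -> (t, [])
```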
|
|
Suggested by @ppedrot
|
|
As noted by @ppedrot, the first is redundant. The patch is basically a renaming.
We didn't make the component optional yet, but this could happen in a
future patch.
|
|
For the moment, there is an Error module in compilers-lib/ocamlbytecomp.cm(x)a
|
|
Originally, rel-context was represented as:
  Context.rel_context = Names.Name.t * Constr.t option * Constr.t
Now it is represented as:
  Context.Rel.t = LocalAssum of Names.Name.t * Constr.t
                | LocalDef of Names.Name.t * Constr.t * Constr.t
Originally, named-context was represented as:
  Context.named_context = Names.Id.t * Constr.t option * Constr.t
Now it is represented as:
  Context.Named.t = LocalAssum of Names.Id.t * Constr.t
                  | LocalDef of Names.Id.t * Constr.t * Constr.t
Motivation:
(1) In "tactics/hipattern.ml4" file we define "test_strict_disjunction"
function which looked like this:
  let test_strict_disjunction n lc =
    Array.for_all_i (fun i c ->
      match (prod_assum (snd (decompose_prod_n_assum n c))) with
      | [_,None,c] -> isRel c && Int.equal (destRel c) (n - i)
      | _ -> false) 0 lc
Suppose that you do not know about rel-context and named-context
(that is the case for people who have just started reading the source code).
Merlin would tell you that the type of the value you are destructing
by "match" is:
  'a * 'b option * Constr.t                      (* worst-case scenario *)
or
  Names.Name.t * Constr.t option * Constr.t      (* best-case scenario (?) *)
To me, this is akin to wearing an opaque veil.
It is hard to figure out the meaning of the values you are looking at.
In particular, it is hard to discover the connection between the value
we are destructing above and the datatypes and functions defined
in the "kernel/context.ml" file.
In this case, the connection is there, but it is not visible
(between the function above and the "Context" module).
------------------------------------------------------------------------
Now consider what happens when the reader sees the same function
presented in the following form:
  let test_strict_disjunction n lc =
    Array.for_all_i (fun i c ->
      match (prod_assum (snd (decompose_prod_n_assum n c))) with
      | [LocalAssum (_,c)] -> isRel c && Int.equal (destRel c) (n - i)
      | _ -> false) 0 lc
If the reader hasn't seen "LocalAssum" before, (s)he can use Merlin
to jump to the corresponding definition and learn more.
In this case, the connection is there, and it is directly visible
(between the function above and the "Context" module).
(2) Also, if we already have concepts such as:
- local declaration
- local assumption
- local definition
and we describe these notions meticulously in the Reference Manual,
then it is a real pity not to reinforce the connection
of the actual code with the abstract description we published.
|
|
|
|
|
|
|
|
The structure of the Context module was refined in such a way that:
- Types and functions related to rel-context declarations were put into the Context.Rel.Declaration module.
- Types and functions related to rel-context were put into the Context.Rel module.
- Types and functions related to named-context declarations were put into the Context.Named.Declaration module.
- Types and functions related to named-context were put into the Context.Named module.
- Types and functions related to named-list-context declarations were put into Context.NamedList.Declaration module.
- Types and functions related to named-list-context were put into Context.NamedList module.
Some missing comments were added to the *.mli file.
The output of ocamldoc was checked to see whether it looks reasonable.
"TODO: cleanup" was removed
The order in which exported functions are listed in the *.mli file was changed.
(as in mature modules, this order is usually not random)
The order of exported functions in Context.{Rel,Named} modules is now consistent.
(as there is no special reason why that order should be different)
The order in which functions are defined in the *.ml file is the same as the order in which they are listed in the *.mli file.
(as there is no special reason to define them in a different order)
The name of the original fold_{rel,named}_context{,_reverse} functions was changed to better indicate what those functions do.
(Now they are called Context.{Rel,Named}.fold_{inside,outside})
The comments originally attached to fold_{rel,named}_context{,_reverse} did not fully make sense, so they were updated.
Thrown exceptions are now documented.
Naming of formal parameters was made more consistent across different functions.
Comments of similar functions in different modules are now consistent.
Comments from *.mli files were copied to *.ml files.
(We need that information in *.mli files because that is where ocamldoc needs it.
It is nice to have it also in *.ml files because, when we are using Merlin and jump to the definition of a function,
we can see the comments there too and do not need to open a different file to see them.)
When we invoke ocamldoc, we instruct it to generate UTF-8 HTML instead of (default) ISO-8859-1.
(UTF-8 characters are used in our ocamldoc markup)
"open Context" was removed from all *.mli and *.ml files.
(Originally, it was OK to do that. Now it is not.)
An entry was added to the dev/doc/changes.txt file describing how the names of types and functions have changed.
|
|
Note: they do not even seem to have a debugging purpose, so better remove
them before they bitrot.
|
|
- prod_applist
- prod_applist_assum
- lambda_applist
- lambda_applist_assum
expect an instance matching the quantified context. They are now in
term.ml, with "list" being possibly "vect".
Names are a bit arbitrary. Better proposals are welcome. They are
put in term.ml since reduction is, after all, not needed: the
intent is not to do β or ι on the fly but rather to substitute a λΓ.c
or ∀Γ.c (seen as the internalization of a Γ⊢c) in one step,
independently of the idea of reducing.
On the other hand:
- beta_applist
- beta_appvect
are seen as optimizations of application, doing reduction on the fly
only when possible. They are thus kept as functions relevant to
reduction.ml.
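A toy de Bruijn sketch of the intended contract (not the actual term.ml code; the arguments and the body's free variables are assumed closed so no lifting is needed): the *_applist functions peel exactly as many binders as there are arguments and substitute the whole instance in one step.
```ocaml
(* Toy de Bruijn terms; arguments are assumed closed. *)
type t = Rel of int | Lam of t | App of t * t list | Const of string

(* substitute s.(0) for Rel (depth+1), s.(1) for Rel (depth+2), ... *)
let rec subst s depth = function
  | Rel i when i > depth -> List.nth s (i - depth - 1)
  | Rel i -> Rel i
  | Lam c -> Lam (subst s (depth + 1) c)
  | App (f, l) -> App (subst s depth f, List.map (subst s depth) l)
  | Const _ as c -> c

(* peel one Lam per argument, then substitute the instance in one step;
   fails if there are fewer lambdas than arguments *)
let lambda_applist c args =
  let rec strip n c =
    if n = 0 then c
    else match c with
      | Lam c -> strip (n - 1) c
      | _ -> failwith "not enough abstractions" in
  subst (List.rev args) 0 (strip (List.length args) c)
```
For instance, with a and b closed terms, lambda_applist (Lam (Lam (App (Const "pair", [Rel 2; Rel 1])))) [a; b] returns App (Const "pair", [a; b]) directly, without going through an intermediate β-redex.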
|
|
next to each other, waiting for possible integration into a more
uniform API.
|
|
to compensate for decompose_lam_n_assum, which does not count let-ins.
Any idea on a uniform and clear naming scheme for this kind of
decomposition function?
|
|
|
|
Such a printer is already in Termops.
This reverts commit 5d6106a075b79abbb92b03bbca7b13a517cf4925.
|
|
I find it very odd not to have a pretty printer for terms that can
be called from *everywhere*. This commit sticks in Term a long spaghetti
to let Printer install a printing function.
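The pattern being described is roughly the following hook (a self-contained sketch with toy types, not the actual Term/Printer code): the low-level module exposes a reference with a dummy default, and the higher-level printer installs the real function into it.
```ocaml
(* Toy types; the real code goes through Term and Printer. *)
type constr = Const of string            (* stand-in for kernel terms *)

(* low-level module: a printer hook with a harmless default *)
let term_printer : (constr -> string) ref =
  ref (fun _ -> "<printer not installed>")
let print_constr c = !term_printer c

(* higher-level printing module installs the real printer at init time *)
let () = term_printer := (fun (Const s) -> s)
```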
|
|
but the internal representation dropped let-in.
Ideally, the internal representation of the "match" should use
contexts for the predicate and the branches. This would however be a
rather significant change. In the meantime, just a hack.
To do: there is still an extra @ in the constructor name that does not
need to be there.
|
|
using whd_state_gen to handle unfolding. Add an isProj/destProj
in term. Use the proper environment everywhere in unification.ml.
|
|
so as to reproduce correctly the reduction behavior of existing
projections, i.e. delta + iota. Make [projection] an abstract datatype
in Names.ml; most of the patch is about using that abstraction.
Fix unification.ml which tried canonical projections too early in
presence of primitive projections.
|
|
Universes.
Needed to expose compare_head_gen(_leq) so that it could be reused in Universes.
Remove unused functions from univ as well and refactor a little bit.
Changed the syntax to Type@{} for explicit universe level specs, following the WG decision.
|