|
We also remove trailing whitespace.
Script used:
```bash
# Expand leading tabs to spaces in place (via moreutils' sponge),
# then strip trailing whitespace (sed keeps a .bak backup of each file).
for i in $(find . -name '*.ml' -or -name '*.mli' -or -name '*.mlg'); do
  expand -i "$i" | sponge "$i"
  sed -e 's/[[:space:]]*$//' -i.bak "$i"
done
```
|
|
This could be Prop (for compat with usual Coq), Set (for HoTT),
or actually an arbitrary "i".
Take lower bound of universes into account in pretyping/engine
Reinstate proper elaboration of SProp <= l constraints:
replacing is_small with equality with lbound is _not_ semantics preserving!
lbound = Set
Elaborate template polymorphic inductives with lower bound Prop
This will make more constraints explicit
Check univ constraints with Prop as lower bound for template inductives
Restrict template polymorphic universes to those not bounded from below
Fixes #9294
fix suggested by Matthieu
Try second fix suggested by Matthieu
Take care of modifying elaboration for record declarations as well.
Rebase and export functions for debug
Remove exported functions used while debugging
Add a new typing flag "check_template" and option "-no-template-check"
This parameterizes the new criterion on template polymorphic inductives
to allow bypassing it (necessary for backward compatibility).
Update checker to the new typing flags structure
Switch on the new template_check flag to allow old unsafe behavior in
indTyping.
This is the only change of code really impacting the kernel, together
with the commit implementing "unbounded from below" and the
parameterization by the lower bound on universes.
Add deprecated option `Unset Template Check`, allowing proof
scripts to work with both 8.9 and 8.10 for a while
Fix `Template Check` option name and test it
Add `Unset Template Check` to Coq89.v
Cooking of inductives and template-check tests
Cleanup test-suite file for template check / universes(template) flags
cooking tests
Move test of `Unset Template Check` to the failure/ dir, but comment it
for now
Template test-suite test explanation
Overlays for PR 9918
Overlay for paramcoq
Add overlay for fiat_parsers (-no-template-check)
Add overlay for fiat_crypto_legacy
Update fiat-crypto legacy overlay
Now it points at the version that I plan on merging. I am hoping that doing this will guard against mistakes, by adding an extra check that the target tested by Coq's CI on this branch works with the change I made.
Remove overlay that should no longer be necessary
The setting in the compat file should handle it
Remove now-merged fiat-crypto-legacy overlay
Update `Print Assumptions` to reflect the typing flag for template checking
Fix About and Print Assumptions for template poly, giving info on which
variables are actually polymorphic
Fix pretty printing to print global universe levels properly
Fix printing of template polymorphic universes
Fix pretty printing for template polymorphism on no universe
Fix interaction of template check and universes(template) flag
Fix indTyping to really check if there is any point in polymorphism: the
conclusion sort should be parameterized over at least one local universe
Indtyping fixes for template polymorphic Props
Allow explicit template polymorphism again
Adapt to new indTyping interface
Handle the case of template polymorphism over no universes
correctly (morally, the Type0m universe represented as Prop).
Fix check of meaningfulness of template polymorphism in the kernel.
It is now done w.r.t. min_univ, the minimal universe inferred for the
inductive/record type, independently of the user-written annotation,
which must only be larger than min_univ. This preserves compatibility
with UniMath and template polymorphism as it has been implemented up to
now.
Comment on identity non-template-polymorphism
Remove incorrect universes(template) attributes from ssr
simpl_fun can be meaningfully template-poly, as well as
pred_key (although the use is debatable: it could just
as well be in Prop).
Move `fun_of_simpl` coercion declaration out of section to respect
uniform inheritance
Remove incorrect uses of #[universes(template)] from the stdlib
Extraction of micromega changes due to moving an ind decl out of a section
Remove incorrect uses of #[universes(template)] from plugins
Fix test-suite files, removing incorrect #[universes(template)] attributes
Remove incorrect #[universes(template)] attributes in test-suite
Fix test-suite
Remove overlays as they have been merged upstream.
|
|
Kernel should be mostly correct; higher levels do random stuff at
times.
|
|
This is intended to be separate from the handling of implicit binders.
The remaining uses of declare_manual_implicits satisfy a lot of
assertions, which opens the possibility of simplifying the interface in
the future.
Two disabled warnings are added for things that currently pass silently.
Currently only Mtac passes non-maximal implicits to
declare_manual_implicits with the force-usage flag set. Once implicit
arguments no longer have to be named, Mtac should be moved over to
set_implicits.
|
|
This is a prerequisite for using automated formatting tools such as
`ocamlformat`. Also, there were quite a few places where the comments
had basically no effect, which was confusing for developers.
P.S.: Reading some comments was a lot of fun :)
|
|
In favor of a constr_of_monomorphic_global function. When people
move to the new Coqlib interface, they will also see this deprecation
message encouraging them to think about the best move.
This commit replaces a few references to constr_of_global with
constr_of_monomorphic_global, which makes it apparent that this is
not the function to call to globalize polymorphic references.
The remaining users of constr_of_monomorphic_global are easy to
identify: mainly omega, btauto, ring, funind and auto_ind_decl
(this fixes firstorder). What this means is that the symbols
registered for these tactics have to be monomorphic for now.
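For illustration, the call-site change looks roughly like this sketch
(the `UnivGen` module path is my assumption; only the function names
come from this commit):
```ocaml
(* Before (deprecated): the name gave no hint that universe-polymorphic
   references would be instantiated incorrectly. *)
(* let c = UnivGen.constr_of_global gref *)

(* After: the monomorphic restriction is visible at the call site.
   NOTE: the module path is assumed; gref is some global reference. *)
let constr_of gref = UnivGen.constr_of_monomorphic_global gref
```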
|
|
We refactor the `Coqlib` API to locate objects over a namespace
`module.object.property`.
This introduces the vernacular command `Register g as n` to expose the
Coq constant `g` under the name `n` (through the `register_ref`
function). The constant can then be dynamically located using the
`lib_ref` function.
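As a sketch of the two sides of this mechanism (the `num.nat.type`
name is illustrative):
```ocaml
(* On the Coq side, a constant is exposed under a namespaced name:
     Register Coq.Init.Datatypes.nat as num.nat.type.
   On the ML side, the registered constant can then be located
   dynamically, without a hardcoded library path: *)
let nat_gref () = Coqlib.lib_ref "num.nat.type"
```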
Co-authored-by: Emilio Jesús Gallego Arias <e+git@x80.org>
Co-authored-by: Maxime Dénès <mail@maximedenes.fr>
Co-authored-by: Vincent Laporte <Vincent.Laporte@fondation-inria.fr>
|
|
[Dune](https://github.com/ocaml/dune) is a compositional declarative
build system for OCaml. It provides automatic generation of
`version.ml`, `.merlin`, `META`, `opam`, API documentation; install
management; easy integration with external libraries, test runners,
and modular builds.
In particular, Dune uniformly handles components regardless of whether
they live in-tree or out of tree. This greatly simplifies cases where a
plugin [or CoqIDE] is checked out in the current working copy but then
distributed separately [and vice versa]. Dune can thus be used as a
more flexible `coq_makefile` replacement.
For now we provide experimental support for a Dune build. In order to
build Coq + the standard library with Dune, type:
```
$ make -f Makefile.dune world
```
This PR includes a preliminary, developer-only preview of Dune for
Coq. There is still ongoing work; see
https://github.com/coq/coq/issues/8052 for the status of progress
towards full support.
## Technical description.
Dune works out of the box with Coq, once we have fixed some modularity
issues. The main remaining challenge was to support `.vo` files.
As Dune doesn't support custom build rules yet, to properly build
`.vo` files we provide a small helper script, `tools/coq_dune.ml`. The
script scans the Coq library directories and generates the
corresponding rules for `.v -> .vo` and `.ml4 -> .ml` builds. The
script uses `coqdep` to correctly output the dependencies of
`.v` files. `coq_dune` is akin to `coq_makefile` and should eventually
be usable to build Coq projects as well.
Because of this limitation, the build process has to proceed in three
stages: 1) build `coqdep` and `coq_dune`; 2) generate `dune` files for
`theories` and `plugins`; 3) perform a regular build with all
targets in scope.
## FAQ
### Why Dune?
Coq has a moderately complex build system and it is not a secret that
many developer-hours have been spent fighting with `make`.
In particular, the current `make`-based system offers poor support
for verifying that the build rules and variables are coherent, and
requires significant manual, error-prone maintenance: many variables
must be passed by hand, duplicated, etc. Additionally, our make system
offers poor integration with now-standard OCaml ecosystem tools such
as `opam`, `ocamlfind` or `odoc`. Another critical point is build
compositionality. Coq is rich in 3rd-party contributions, and a big
shortcoming of the current make system is that it cannot be used to
build these projects, which requires us to maintain a custom tool,
`coq_makefile`, with the corresponding cost.
In the past, there have been some efforts to migrate Coq to more
specialized build systems; however, these stalled due to a variety of
reasons. Dune is a declarative, OCaml-specific build tool that is on
the path to becoming the standard build system for the OCaml ecosystem.
Dune seems to be a good fit for Coq: it is well-supported, fast,
compositional, and designed for large projects.
### Does Dune replace the make-based build system?
The current, make-based build system is unmodified by this PR and kept
as the default option. However, Dune has the potential to replace it as
the default in the future.
### Is this PR complete? What does it provide?
This PR is ready for developer preview and feedback. The build system
is functional, however, more work is necessary in order to make Dune
the default for Coq.
The main TODOs are tracked at https://github.com/coq/coq/issues/8052
This PR allows developers to use most of the features of Dune today:
- Modular organization of the codebase; each component is built only
against declared dependencies so components are checked for
containment more strictly.
- Hygienic builds; Dune places all artifacts under `_build`.
- Automatic generation of `.install` files, simplified OPAM workflow.
- `utop` support, `-opaque` in developer mode, etc...
- `ml4` files are handled using `coqp5`, a native-code customized
camlp5 executable which brings much faster `ml4 -> ml` processing.
### What dependencies does Dune require?
Dune doesn't depend on any 3rd party package other than the OCaml compiler.
### Some benchmarks
```
$ /usr/bin/time make DUNEOPT="-j 1000" -f Makefile.dune states
59.50user 18.81system 0:29.83elapsed 262%CPU (0avgtext+0avgdata 302996maxresident)k
0inputs+646632outputs (0major+4893811minor)pagefaults 0swaps
$ /usr/bin/time sh -c "./configure -local -native-compiler no && make -j states"
88.21user 23.65system 0:32.96elapsed 339%CPU (0avgtext+0avgdata 304992maxresident)k
0inputs+1051680outputs (0major+5300680minor)pagefaults 0swaps
```
|
|
longer use camlp4.
|
|
We do up to `Term`, which is the main bulk of the changes.
|
|
Unluckily, this forces replacing a lot of code in plugins, because the API
defined the type of goals and tactics in Proof_type and, by the no-alias rule,
that was the only place to get them from. But Proof_type was already implicitly
deprecated, so the API should have relied on Tacmach instead.
|
|
As per https://github.com/coq/coq/pull/716#issuecomment-305140839
Partially using
```bash
# Append the missing final period to anomaly messages ending without '.' or '!'
git grep --name-only 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*[^\.!]")' | xargs sed s'/\(anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*\s*[^\.! ]\)\s*")/\1.")/g' -i
```
and
```bash
# Remove the stray space before a final '!' in messages
git grep --name-only ' !"' | xargs sed s'/ !"/!"/g' -i
```
The rest were manually edited by looking at the results of
```bash
# List the remaining anomaly calls that still lack final punctuation
git grep anomaly | grep '\.ml' | grep -v 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp\.\)\?(\(\(Pp.\)\?str\)\?\s*".*\(\.\|!\)")' | grep 'anomaly\($\|[^_]\)' | less
```
|
|
We remove redundant functions `coq_constant`, `gen_reference`, and
`gen_constant`.
This is a first step towards a lazy binding of libraries references.
We have also chosen to untangle `constr` from `Coqlib`, as how to
instantiate the reference (in particular w.r.t. universes) is a
client-side issue. (The client may want to provide an `evar_map`?)
cf. #186
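For instance, a client that wants a universe-aware instantiation could
do something like the following sketch (assuming `Evd.fresh_global`,
which threads the `evar_map` explicitly):
```ocaml
(* Instantiating a located reference is now the client's decision:
   here we ask for a fresh universe instance and thread sigma ourselves. *)
let instantiate env sigma gref =
  let sigma, c = Evd.fresh_global env sigma gref in
  (sigma, c)
```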
|
|
Inspired by https://coq.inria.fr/bugs/show_bug.cgi?id=5229 , which
this PR solves, I propose to remove support for non-synchronous
options.
It seems the few uses of `optsync = false` were legacy and shouldn't
have any impact.
Moreover, non-synchronous options may create particularly tricky
situations; for instance, they won't be propagated to workers.
|
|
We replace the open/close box commands in favor of the box-creating ones.
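For instance, a minimal sketch with the box-creating combinators:
```ocaml
(* A box is now a value wrapping its contents, rather than an
   imperative open_box/close_box bracket around printing commands. *)
let doc = Pp.(hov 2 (str "match" ++ spc () ++ str "x" ++ spc () ++ str "with"))
```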
|
|
This is cumbersome, because now code may fail at link time if it's not
referring to the correct module name. Therefore, one has to add the
corresponding open statements at the top of every file depending on an
Ltac module. This includes seemingly unrelated files that use EXTEND
statements.
|
|
This removes quite a few unsafe casts. Unluckily, I had to reintroduce
the old non-module-based names for these data structures, because I
could not easily reproduce the same hierarchy in EConstr.
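The kind of pattern this enables, as an illustrative sketch (both
conversions are part of `EConstr`; this is not code from the commit):
```ocaml
(* Explicit, typed conversions replace the unsafe casts between the
   kernel view and the evar-aware view of terms. *)
let as_econstr (c : Constr.t) : EConstr.t = EConstr.of_constr c
let as_constr (c : EConstr.t) : Constr.t = EConstr.Unsafe.to_constr c
```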
|
|
Suggested by @ppedrot
|
|
As noted by @ppedrot, the first is redundant. The patch is basically a renaming.
We didn't make the component optional yet, but this could happen in a
future patch.
|
|
For the moment, there is a Closure module in compiler-libs/ocamloptcomp.cm(x)a
|
|
module)
For the moment, there is an Error module in compiler-libs/ocamlbytecomp.cm(x)a
|
|
This is a reimplementation of Hugo's PR#117.
We are trying to address the problem that the names of some reduction
functions did not say what they were doing (e.g. whd_betadeltaiota was
doing let-in reduction). Like PR#117, we are careful that no function
changed semantics without changing its name. Porting existing ML code
should be a matter of renaming a few function calls.
Also, we introduce more precise reduction flags fMATCH, fFIX, fCOFIX,
collectively denominated iota.
We renamed the following functions:
Closure.betadeltaiota -> Closure.all
Closure.betadeltaiotanolet -> Closure.allnolet
Reductionops.beta -> Closure.beta
Reductionops.zeta -> Closure.zeta
Reductionops.betaiota -> Closure.betaiota
Reductionops.betaiotazeta -> Closure.betaiotazeta
Reductionops.delta -> Closure.delta
Reductionops.betalet -> Closure.betazeta
Reductionops.betadelta -> Closure.betadeltazeta
Reductionops.betadeltaiota -> Closure.all
Reductionops.betadeltaiotanolet -> Closure.allnolet
Closure.no_red -> Closure.nored
Reductionops.nored -> Closure.nored
Reductionops.nf_betadeltaiota -> Reductionops.nf_all
Reductionops.whd_betadelta -> Reductionops.whd_betadeltazeta
Reductionops.whd_betadeltaiota -> Reductionops.whd_all
Reductionops.whd_betadeltaiota_nolet -> Reductionops.whd_allnolet
Reductionops.whd_betadelta_stack -> Reductionops.whd_betadeltazeta_stack
Reductionops.whd_betadeltaiota_stack -> Reductionops.whd_all_stack
Reductionops.whd_betadeltaiota_nolet_stack -> Reductionops.whd_allnolet_stack
Reductionops.whd_betadelta_state -> Reductionops.whd_betadeltazeta_state
Reductionops.whd_betadeltaiota_state -> Reductionops.whd_all_state
Reductionops.whd_betadeltaiota_nolet_state -> Reductionops.whd_allnolet_state
Reductionops.whd_eta -> Reductionops.shrink_eta
Tacmach.pf_whd_betadeltaiota -> Tacmach.pf_whd_all
Tacmach.New.pf_whd_betadeltaiota -> Tacmach.New.pf_whd_all
And removed the following ones:
Reductionops.whd_betaetalet
Reductionops.whd_betaetalet_stack
Reductionops.whd_betaetalet_state
Reductionops.whd_betadeltaeta_stack
Reductionops.whd_betadeltaeta_state
Reductionops.whd_betadeltaeta
Reductionops.whd_betadeltaiotaeta_stack
Reductionops.whd_betadeltaiotaeta_state
Reductionops.whd_betadeltaiotaeta
They were unused, and having some reduction functions perform eta is
confusing, as whd_all and nf_all don't do it.
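Porting is thus mostly mechanical; as a sketch (the flag computation is
my own illustration of the new fMATCH/fFIX/fCOFIX flags):
```ocaml
(* Before: let c' = Reductionops.whd_betadeltaiota env sigma c *)
let whd env sigma c = Reductionops.whd_all env sigma c

(* iota is now the union of the finer-grained flags, so one can build,
   e.g., a variant of `all` that performs no case analysis at all: *)
let all_but_iota =
  let open Closure.RedFlags in
  List.fold_left red_sub Closure.all [fMATCH; fFIX; fCOFIX]
```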
|
|
There were three versions of injection:
1. "injection term" without "as" clause:
was leaving hypotheses on the goal in reverse order
2. "injection term as ipat", first version:
   was introducing hypotheses using ipat in reverse order, without
   checking that the number of ipats matched the size of the injection
   (activated with "Unset Injection L2R Pattern Order")
3. "injection term as ipat", second version:
   was introducing hypotheses using ipat in left-to-right order,
   checking that the number of ipats matched the size of the injection,
   and clearing the injected term by default when it is a hypothesis
   (activated with "Set Injection L2R Pattern Order", the default from 8.5)
There is now:
4. "injection term" without "as" clause, new version:
   introducing the components of the injection in the context in
   left-to-right order using the default intro-pattern "?",
   and clearing the injected term by default when it is a hypothesis
   (activated with "Set Structural Injection")
The new versions 3. and 4. are the "expected" ones in the sense that
they have the following good properties:
- introduction in the context is in the natural left-to-right order
- "injection" behaves the same with and without "as", always
  introducing the hypotheses into the context, which corresponds to the
  natural expectation, as the changes I made in the proof scripts for
  adaptation confirm
- the injected term is cleared when it is a hypothesis, which is also
  the natural expectation, as the same proof-script changes confirm
The compatibility can be preserved by "Unset Structural Injection" or
by calling "simple injection".
The flag is currently off.
|
|
For now, the pack name reuses the previous .cma name of the plugin
(extraction_plugin, etc).
The earlier .mllib files in plugins are now named .mlpack.
They are also handled by bin/ocamllibdep, just as .mllib.
We've slightly modified ocamllibdep to help set the -for-pack
options: in *.mlpack.d files, there are some extra variables such as
foo/bar_FORPACK := -for-pack Baz
when foo/bar.ml is mentioned in baz.mlpack.
When a plugin calls a function from another plugin, the name
needs to be qualified (Foo_plugin.Bar.baz instead of Bar.baz).
Btw, we discard the generated files plugins/*/*_mod.ml; they are
obsolete now, replaced by DECLARE PLUGIN.
Nota: there's a potential problem in the micromega directory:
some .ml files are linked both in micromega_plugin and in csdpcert,
and we now compile these files with -for-pack, even if they are
not packed in the case of csdpcert. In practice, csdpcert seems
to work well, but we should verify with OCaml experts.
|
|
This patch splits pretty printing representation from IO operations.
- `Pp` is kept in charge of the abstract pretty printing representation.
- The `Feedback` module provides the interface for printing IO.
The patch continues work initiated for 8.5 and has the following effects:
- The following functions in `Pp`: `pp`, `ppnl`, `pperr`, `pperrnl`,
`pperr_flush`, `pp_flush`, `flush_all`, `msg`, `msgnl`, `msgerr`,
`msgerrnl`, `message` are removed. `Feedback.msg_*` functions must be
used instead.
- Feedback provides different backends to handle output; currently,
`stdout`, `emacs`, and CoqIDE backends are provided.
- Clients cannot specify the flush policy anymore; thus `pp_flush` et al.
are gone.
- `Feedback.feedback` takes an `edit_or_state_id` instead of the old
mix.
Lightly tested: the test suite passes, and Proof General and CoqIDE seem to work.
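A typical migration then looks like this sketch (message contents made
up; `msg_info`/`msg_warning` are part of the new `Feedback` interface):
```ocaml
(* Before: Pp.msgnl (Pp.str "done") -- removed by this patch. *)
let report () =
  Feedback.msg_info Pp.(str "done");
  (* Flushing is now the backend's responsibility: no pp_flush here. *)
  Feedback.msg_warning Pp.(str "something looks off")
```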
|
|
Originally, rel-context was represented as:
Context.rel_context = Names.Name.t * Constr.t option * Constr.t
Now it is represented as:
Context.Rel.t = LocalAssum of Names.Name.t * Constr.t
| LocalDef of Names.Name.t * Constr.t * Constr.t
Originally, named-context was represented as:
Context.named_context = Names.Id.t * Constr.t option * Constr.t
Now it is represented as:
Context.Named.t = LocalAssum of Names.Id.t * Constr.t
| LocalDef of Names.Id.t * Constr.t * Constr.t
Motivation:
(1) In "tactics/hipattern.ml4" file we define "test_strict_disjunction"
function which looked like this:
let test_strict_disjunction n lc =
Array.for_all_i (fun i c ->
match (prod_assum (snd (decompose_prod_n_assum n c))) with
| [_,None,c] -> isRel c && Int.equal (destRel c) (n - i)
| _ -> false) 0 lc
Suppose that you do not know about rel-context and named-context
(that is the case for people who have just started to read the source
code). Merlin would tell you that the type of the value you are
destructing by "match" is:
'a * 'b option * Constr.t (* worst-case scenario *)
or
Names.Name.t * Constr.t option * Constr.t (* best-case scenario (?) *)
To me, this is akin to wearing an opaque veil.
It is hard to figure out the meaning of the values you are looking at.
In particular, it is hard to discover the connection between the value
we are destructing above and the datatypes and functions defined
in the "kernel/context.ml" file.
In this case, the connection is there, but it is not visible
(between the function above and the "Context" module).
------------------------------------------------------------------------
Now consider what happens when the reader sees the same function
presented in the following form:
let test_strict_disjunction n lc =
Array.for_all_i (fun i c ->
match (prod_assum (snd (decompose_prod_n_assum n c))) with
| [LocalAssum (_,c)] -> isRel c && Int.equal (destRel c) (n - i)
| _ -> false) 0 lc
If the reader hasn't seen "LocalAssum" before, they can use Merlin
to jump to the corresponding definition and learn more.
In this case, the connection is there, and it is directly visible
(between the function above and the "Context" module).
(2) Also, if we already have concepts such as:
- local declaration
- local assumption
- local definition
and we describe these notions meticulously in the Reference Manual,
then it is a real pity not to reinforce the connection
of the actual code with the abstract description we published.
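As a small usage sketch of the new representation (assuming the
constructors shown above are in scope):
```ocaml
(* The constructor names now document whether a declaration is an
   assumption or a definition; the type is the last component of both. *)
let type_of_decl = function
  | LocalAssum (_name, typ) -> typ
  | LocalDef (_name, _body, typ) -> typ
```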
|