| Age | Commit message | Author |
|
|
|
Reviewed-by: Zimmi48
Reviewed-by: maximedenes
|
|
Ack-by: ejgallego
Reviewed-by: gares
|
|
As documented in the feedback API.
|
|
`Mlutil.simpl` and `Mlutil.atomic_eta_red` performed some unsound
eta-reductions of the form
(fun x0 ... xn => MLexn x0 ... xn) ->eta MLexn.
`MLexn` raises an exception and is therefore not a value in OCaml, so the
above simplification may change the behavior of extracted programs. This
patch restricts `atomic_eta_red` to eta-redexes whose core is both atomic
and a value.
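A plain OCaml analogue (hypothetical, not the extracted code itself) illustrates why this reduction is unsound in a strict language: the exception moves from application time to definition time.
```ocaml
(* Hypothetical OCaml analogue of the unsound eta-reduction. *)
exception Absurd

(* Eta-expanded form: a harmless closure, Absurd is only raised when the
   function is applied. *)
let expanded () =
  let f : int -> 'a = fun x -> ignore x; raise Absurd in
  ignore f;
  print_endline "expanded: no exception until f is applied"

(* Eta-reduced form: the raising expression is evaluated eagerly, so the
   exception escapes as soon as the "function" is defined. *)
let reduced () =
  match (let f : int -> 'a = raise Absurd in ignore f) with
  | () -> print_endline "reduced: unreachable"
  | exception Absurd -> print_endline "reduced: Absurd raised at definition time"

let () = expanded (); reduced ()
```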
Acknowledgement: This work is financially supported by Peano System Inc.
on-behalf-of: @peano-system <info@peano-system.jp>
|
|
- Inline record projections by default (except for Haskell extraction).
- Extend `pp_record_proj` for record projections involving `MLmagic`.
- Remove the special pretty-printing treatments for record projections
other than `pp_record_proj`.
- `micromega.ml` had to be adapted to this change in the extraction plugin.
Acknowledgement: This work is financially supported by Peano System Inc.
on-behalf-of: @peano-system <info@peano-system.jp>
|
|
Primitive operations addc, addcarryc, subc, subcarryc, and diveucl are now
implemented in the kernel so that they can be used by OCaml code (e.g.,
extracted code) like the other primitives.
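For illustration only (the kernel's actual implementation lives in the `Uint63` module and differs in detail), a carry-aware addition on 63-bit unsigned values, mirroring the shape of `addc` and Coq's `carry` type, could look like:
```ocaml
(* Hypothetical sketch of addc: 63-bit unsigned values stored in an Int64,
   C0 = no carry, C1 = carry out of the 63rd bit. *)
type 'a carry = C0 of 'a | C1 of 'a

let mask63 = Int64.sub (Int64.shift_left 1L 63) 1L   (* 2^63 - 1 *)

let addc_sketch (x : int64) (y : int64) : int64 carry =
  let sum = Int64.add x y in                            (* full 64-bit sum *)
  let result = Int64.logand sum mask63 in               (* result mod 2^63 *)
  if Int64.logand sum (Int64.shift_left 1L 63) <> 0L    (* bit 63 set? *)
  then C1 result
  else C0 result
```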
|
|
The ExtrOCamlInt63 module can be required to map primitives from the Int63
module to their OCaml implementation (module Uint63 from the kernel).
|
|
Reviewed-by: SkySkimmer
Ack-by: ppedrot
|
|
Not pretty, but it had to be done some day, as `Globnames` seems to be
on the way out.
I have taken the opportunity to reduce the number of `open`s in the
codebase.
The qualified style would indeed allow us to use slightly nicer names,
`GlobRef.Inductive` instead of `IndRef`, etc. Once we have the tooling
to do large-scale refactoring, that could be tried.
|
|
The changes are large due to the `Pervasives` deprecation:
- the `Pervasives` module has been deprecated in favor of `Stdlib`; we
have opted for introducing a few wrapping functions in `Util` (of the
flavour sketched after this list) and just unqualifying the remaining
occurrences. We avoid the shims used in the previous attempt.
- a bug regarding partial application has been fixed.
- some formatting functions have been deprecated, but previous
versions don't include a replacement, so the warning has been
disabled.
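A minimal sketch of such a wrapper, using hypothetical names rather than the exact functions added to `Util`:
```ocaml
(* Hypothetical wrappers: older code spelled these Pervasives.compare and
   Pervasives.(=); routing them through Util avoids the deprecated
   qualification while remaining compatible with pre-Stdlib OCaml. *)
let pervasives_compare (x : 'a) (y : 'a) : int = compare x y
let pervasives_equal (x : 'a) (y : 'a) : bool = x = y
```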
We may want to clean up things a bit more, in particular
w.r.t. modules once we can move to OCaml 4.07 as the minimum required
version.
Note that there is a clash between the 4.08.0 modules `Option` and `Int`
and Coq's own. It is not clear whether we should resolve that clash or
not; see PR #10469 for more discussion.
On the good side, OCaml 4.08.0 does provide a few interesting
functionalities, including nice new warnings useful for devs.
|
|
Reviewed-by: ejgallego
Reviewed-by: ppedrot
|
|
We make a few libobject constructions (Module, Module Type,
Include,...) first-class and rephrase their handling in direct style (removing
the inversion of control). This makes it easier to define iterators over
objects without hacks like inspecting the tags of dynamic objects.
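A minimal sketch of the direct-style idea (illustrative types only, not the actual `Libobject` interface): structured objects become ordinary constructors that an iterator can pattern-match on, instead of callbacks dispatched on dynamic tags.
```ocaml
(* Hypothetical direct-style representation of library objects. *)
type obj_sketch =
  | AtomicObject of string                 (* stands for the old Dyn.t payloads *)
  | ModuleObject of obj_sketch list
  | ModuleTypeObject of obj_sketch list
  | IncludeObject of obj_sketch list

(* Iterating no longer requires inspecting the tags of dynamic objects. *)
let rec iter_objects (f : string -> unit) : obj_sketch -> unit = function
  | AtomicObject payload -> f payload
  | ModuleObject objs | ModuleTypeObject objs | IncludeObject objs ->
      List.iter (iter_objects f) objs
```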
|
|
The old code was conditionally dumping and catching `Not_found`, as
noted by Gaëtan Gilbert on #10433:
> we could just remove the dump, the sibling functions
> (`full_extraction`, etc...) don't bother to dump for instance.
|
|
Reviewed-by: SkySkimmer
Reviewed-by: gares
|
|
|
|
We had to move the private opaque constraints out of the constant declaration
into the opaque table. The API is not very pretty yet due to a pervasive
confusion between monomorphic global constraints and polymorphic local ones,
but once we get rid of futures in the kernel this should be magically solved.
|
|
The main idea of this PR is to distinguish the type of "proof object",
`Proof_global.t`, from the type of "proof object associated to a
constant", the new `Lemmas.t`.
This way, we can move the terminator setup to the higher layer in
`vernac`, which is the one that really knows about constants, paving
the way for further simplification, and in particular for a unified
handling of constant saving by removing the inversion of control here.
Terminators are now internal to `Lemmas`, as it is the only part of
the code applying them.
As a consequence, proof nesting is now handled by `Lemmas`, and
`Proof_global.t` is just a single `Proof.t` plus some environmental
meta-data.
This also enables considerable simplification in a future PR, as this
patch makes `Proof.t` and `Proof_global.t` essentially the same, so we
should expect to handle them under a unified interface.
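The resulting layering can be pictured with a small sketch; the type and field names below are illustrative only, not the actual Coq API.
```ocaml
(* Hypothetical sketch of the split: Proof_global.t is a single proof plus
   environmental meta-data, while the Lemmas layer adds the terminator
   that knows how to save the proof as a constant. *)
type proof = unit                       (* stands for Proof.t *)

type proof_global_sketch = {
  proof : proof;
  endline_tactic : string option;       (* example of environmental meta-data *)
}

type lemmas_sketch = {
  pstate : proof_global_sketch;
  terminator : proof_global_sketch -> unit;  (* internal to Lemmas *)
}
```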
|
|
|
|
E.g. ![proof] becomes STATE proof.
This commit still supports the old ![] form,
so there is redundancy:
~~~
VERNAC EXTEND Foo STATE proof
| ...
VERNAC EXTEND Foo
| ![proof] ...
~~~
with the ![] form being local to the rule and the STATE form
applying to the whole EXTEND except for the rules with a ![].
|
|
![proof_stack] is equivalent to the old meaning of ![proof]: the body
has type `pstate:Proof_global.t option -> Proof_global.t option`
The other specifiers are for the following body types:
~~~
![open_proof] `is_ontop:bool -> pstate`
![maybe_open_proof] `is_ontop:bool -> pstate option`
![proof] `pstate:pstate -> pstate`
![proof_opt_query] `pstate:pstate option -> unit`
![proof_query] `pstate:pstate -> unit`
~~~
The `is_ontop` flag is only used for the warning message when declaring a
section variable inside a proof; we could also just stop warning.
The specifiers look closely related to stm classifiers, but currently
they're unconnected. Notably this means that a ![proof_query] doesn't
have to be classified QUERY.
![proof_stack] is only used by g_rewrite/rewrite, whose behaviour I
don't fully understand; maybe we can drop it in the future.
For compat we may want to consider keeping ![proof] with its old
meaning and using some new name for the new meaning. OTOH fixing
plugins to be stricter is easier if we change it as the errors tell us
where it's used.
|
|
Typically instead of [start_proof : ontop:Proof_global.t option -> bla ->
Proof_global.t] we have [start_proof : bla -> Proof_global.pstate] and
the pstate is pushed on the stack by a caller around the
vernacentries/mlg level.
Naming can be a bit awkward, hopefully it can be improved (maybe in a
followup PR).
We can see some patterns appear while waiting for nicer combinators; e.g.,
in mlg we often only want to work with the current proof, not the stack
(see the sketch below).
Behaviour should be similar modulo bugs; let's see what CI says.
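As a purely illustrative sketch of such a combinator (hypothetical names, with a `pstate` placeholder standing in for the real proof state):
```ocaml
(* Hypothetical combinator: apply an action to the current (topmost) proof
   while leaving the rest of the stack untouched. *)
type pstate = unit                        (* placeholder for the real proof state *)

let on_current_proof (f : pstate -> pstate) (stack : pstate list) : pstate list =
  match stack with
  | [] -> failwith "No focused proof"     (* the real code would raise a user error *)
  | current :: rest -> f current :: rest
```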
|
|
We simply pass them as arguments, now that they are not called by the
kernel anymore.
The checker definitely needs to access the opaque proofs. In order not to
touch the API at all, I added a hook there, but it could also be provided
as an additional argument, at the cost of changing all the upwards callers.
|
|
|
|
|
|
|
|
|
|
Prevent errors when under-annotating binders.
|
|
The kernel should be mostly correct; higher levels do random stuff at
times.
|
|
Note currently it's impossible to define inductives in SProp because
indtypes.ml and the pretyper aren't fully plugged.
|
|
It used to simply remember the normal form of the type of the constructor.
This is somewhat problematic, as it is ambiguous in the presence of
let-bindings. Rather, we store this data in a fully expanded way, relying
on rel_contexts.
This probably fixes a crapload of bugs with inductive types containing
let-bindings, but it seems that not many were reported in the bugtracker.
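To illustrate the ambiguity (with made-up types, not the kernel's actual `rel_context` definitions): a let-bound constructor argument is explicit in a context of declarations but invisible in the expanded normal-form type.
```ocaml
(* Hypothetical sketch of a context of declarations. *)
type rel_decl_sketch =
  | LocalAssum of string * string            (* (x : T)      *)
  | LocalDef   of string * string * string   (* (x := b : T) *)

(* Arguments of a constructor  C (x := 0 : nat) (y : P x)  kept as a context: *)
let ctx : rel_decl_sketch list =
  [ LocalDef ("x", "0", "nat"); LocalAssum ("y", "P x") ]

(* The normal form of the constructor type only shows  forall y : P 0, ...,
   losing the fact that x was a let-bound argument. *)
```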
|
|
Ack-by: SkySkimmer
Reviewed-by: Zimmi48
Reviewed-by: mattam82
Reviewed-by: maximedenes
Reviewed-by: ppedrot
|
|
This work makes it possible to take advantage of a compact
representation for integers in the entire system, as opposed to only
in some reduction machines. It is useful for heavily computational
applications, where even constructing terms is not possible without such
a representation.
Concretely, it replaces part of the retroknowledge machinery with
a primitive construction for integers in terms, and introduces a kind of
FFI which maps constants to operators (on integers). Properties of these
operators are expressed as explicit axioms, whereas they were hidden in
the retroknowledge-based approach.
This has been presented at the Coq workshop and some Coq Working Groups,
and has been used by various groups for SMT trace checking,
computational analysis, etc.
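Schematically (the constructor names below are illustrative, not Coq's actual `constr` definition), the change amounts to giving machine integers their own node in the term AST, rather than recognising special constants through retroknowledge:
```ocaml
(* Hypothetical term AST with a primitive integer node. *)
type term_sketch =
  | Rel of int                             (* bound variable *)
  | Const of string                        (* global constant *)
  | App of term_sketch * term_sketch array
  | Int of int64                           (* new: 63-bit machine integer literal *)

(* Constants such as an "add" operator are then mapped, FFI-style, to
   OCaml operations on these integers; their arithmetic properties are
   stated as explicit axioms on the Coq side. *)
```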
Contributions by Guillaume Bertholon and Pierre Roux <Pierre.Roux@onera.fr>
Co-authored-by: Benjamin Grégoire <Benjamin.Gregoire@inria.fr>
Co-authored-by: Vincent Laporte <Vincent.Laporte@fondation-inria.fr>
|
|
The code to generate the legacy bodies is moved to its only user in
extraction.
It almost seems like we could remove it (i.e. no special extraction code
for primitive projection constants), but then we run into issues with
automatic unboxing: e.g. `Record foo := { a : nat; b : a <= 5 }.` gets
extracted to `type foo = nat` and (if we remove the special code) `let
a = a`.
|
|
|
|
|
|
This is a prerequisite for using automated formatting tools such as
`ocamlformat`; also, there were quite a few places where the comments
had basically no effect, which was confusing for the developer.
P.S.: Reading some comments was a lot of fun :)
|
|
|
|
|
|
Removing a few `Global.env` calls along the way.
|
|
Almost all ml4 files were removed in the process. The only remaining files
are in the test-suite and probably need a bit of fiddling with coq_makefile,
and there are only two remaining ml4 files that actually contain code.
|
|
|
|
We remove section paths from kernel names. This is a cleanup, as most of the time this information was unused. This implies a change in the kernel API and small user-visible changes with regard to tactic qualification. In particular, the removal of "global discharge" implies a large cleanup of code.
Additionally, the change implies that some machinery in `library` and `safe_typing` must now take an `~in_section` parameter, so as to provide the information of whether a section is open or not.
|
|
|
|
|
|
[Dune](https://github.com/ocaml/dune) is a compositional declarative
build system for OCaml. It provides automatic generation of
`version.ml`, `.merlin`, `META`, `opam`, API documentation; install
management; easy integration with external libraries, test runners,
and modular builds.
In particular, Dune uniformly handles components regardless of whether
they live in-tree or out-of-tree. This greatly simplifies cases where a
plugin [or CoqIde] is checked out in the current working copy but then
distributed separately [and vice versa]. Dune can thus be used as a
more flexible `coq_makefile` replacement.
For now we provide experimental support for a Dune build. In order to
build Coq + the standard library with Dune, type:
```
$ make -f Makefile.dune world
```
This PR includes a preliminary, developer-only preview of Dune for
Coq. There is still ongoing work, see
https://github.com/coq/coq/issues/8052 for tracking status towards
full support.
## Technical description.
Dune works out of the box with Coq, once we have fixed some modularity
issues. The main remaining challenge was to support `.vo` files.
As Dune doesn't support custom build rules yet, to properly build
`.vo` files we provide a small helper script `tools/coq_dune.ml`. The
script will scan the Coq library directories and generate the
corresponding rules for `.v -> .vo` and `.ml4 -> .ml` builds. The
script uses `coqdep` to correctly output the dependencies of
`.v` files. `coq_dune` is akin to `coq_makefile` and should be usable
to build Coq projects in the future.
Due to this limitation, the build process has to proceed in three stages:
1) build `coqdep` and `coq_dune`; 2) generate `dune` files for
`theories` and `plugins`; 3) perform a regular build with all
targets in scope.
## FAQ
### Why Dune?
Coq has a moderately complex build system and it is not a secret that
many developer-hours have been spent fighting with `make`.
In particular, the current `make`-based system does offer poor support
to verify that the current build rules and variables are coherent, and
requires significant manual, error-prone. Many variables must be
passed by hand, duplicated, etc... Additionally, our make system
offers poor integration with now standard OCaml ecosystem tools such
as `opam`, `ocamlfind` or `odoc`. Another critical point is build
compositionality. Coq is rich in 3rd party contributions, and a big
shortcoming of the current make system is that it cannot be used to
build these projects; requiring us to maintain a custom tool,
`coq_makefile`, with the corresponding cost.
In the past, there have been some efforts to migrate Coq to more
specialized build systems; however, these stalled for a variety of
reasons. Dune is a declarative, OCaml-specific build tool that is on
the path to becoming the standard build system for the OCaml ecosystem.
Dune seems to be a good fit for Coq: it is well-supported, fast,
compositional, and designed for large projects.
### Does Dune replace the make-based build system?
The current, make-based build system is unmodified by this PR and kept
as the default option. However, Dune has the potential to replace it
once support is complete.
### Is this PR complete? What does it provide?
This PR is ready for developer preview and feedback. The build system
is functional, however, more work is necessary in order to make Dune
the default for Coq.
The main TODOs are tracked at https://github.com/coq/coq/issues/8052
This PR allows developers to use most of the features of Dune today:
- Modular organization of the codebase; each component is built only
against declared dependencies so components are checked for
containment more strictly.
- Hygienic builds; Dune places all artifacts under `_build`.
- Automatic generation of `.install` files, simplified OPAM workflow.
- `utop` support, `-opaque` in developer mode, etc...
- `ml4` files are handled using `coqp5`, a native-code customized
camlp5 executable which brings much faster `ml4 -> ml` processing.
### What dependencies does Dune require?
Dune doesn't depend on any 3rd party package other than the OCaml compiler.
### Some benchmarks:
```
$ /usr/bin/time make DUNEOPT="-j 1000" -f Makefile.dune states
59.50user 18.81system 0:29.83elapsed 262%CPU (0avgtext+0avgdata 302996maxresident)k
0inputs+646632outputs (0major+4893811minor)pagefaults 0swaps
$ /usr/bin/time sh -c "./configure -local -native-compiler no && make -j states"
88.21user 23.65system 0:32.96elapsed 339%CPU (0avgtext+0avgdata 304992maxresident)k
0inputs+1051680outputs (0major+5300680minor)pagefaults 0swaps
```
|
|
|
|
The upper layers still need a mapping constant -> projection, which is
provided by Recordops.
|
|
|
|
This brings more compatibility with handling of mutual primitive records
in the kernel.
|