| Age | Commit message | Author |
|
|
|
This allows the following to interpret "0" in the expected scope:
Notation "0" := true : bool_scope.
Axiom f : bool -> bool -> nat.
Record R := { p : bool -> nat }.
Check (@f 0) 0.
Check fun r => r.(@p) 0.
|
|
a keyword.
Ack-by: SkySkimmer
Reviewed-by: herbelin
Ack-by: ppedrot
|
|
Reviewed-by: ppedrot
|
|
Reviewed-by: ppedrot
|
|
This does not affect the rendering but gives better-structured
HTML/TeX files.
|
|
|
|
We clarify that there are two kinds of verbatim: inline and block.
We also add a test covering verbatim and other constructs.
Co-authored-by: Xia Li-yao <Lysxia@users.noreply.github.com>
|
|
Reviewed-by: jfehrle
|
|
|
|
|
|
The line count remains fragile, though... Any idea how to do it
automatically?
|
|
|
|
|
|
Fixes #13266 (see #12675, 8641cb7385).
|
|
|
|
In Evar := C[Meta] problems of unification.ml, when C[ ] contains
binders, Meta was wrongly considered by pose_all_metas_as_evars to be
under these binders (whereas Metas are always defined in the initial
context of the unification problem).
|
|
Co-authored-by: Gaëtan Gilbert <gaetan.gilbert@skyskimmer.net>
|
|
We re-expose `declare_projections` and `declare_structure_entry`, as they
are needed by coq-elpi.
Ideally we would provide a better way to interact with this in recordops;
in fact, `declare_structure_entry` is just a wrapper around the recordops +
libobject structure, so there is hope that it goes away entirely in the
future.
The need for Elpi to manually call `declare_projections` should
actually disappear in future refactorings.
|
|
|
|
In preparation for better handling of the regular record / class
codepath. This will also make it possible to pack record data better.
|
|
In preparation for reworking the record declaration path to use the
common infrastructure, we perform some refactoring.
The current choices are far from definitive; we will consolidate the
data types further as the work progresses.
|
|
In preparation for reworking the record declaration path to use the
common infrastructure, we perform some refactoring.
The current choices are far from definitive; we will consolidate the
data types further as the work progresses.
|
|
information constructors
Reviewed-by: SkySkimmer
|
|
Unfortunately, compilers are currently unable to optimize the nextafter
function, even in the degenerate case where the second argument is
explicitly infinite. So, this commit implements this case by hand.
On the following testcase, this gives a 15% speedup.
From Coq Require Import Int63 BinPos PrimFloat.
Definition foo n :=
  let eps := sub (next_up one) one in
  Pos.iter (fun x => next_down (add x eps)) two n.
Time Eval vm_compute in foo 100000000.
When looking at the cost of the allocation-free version of next_down
alone, the speedup is 1500%. In other words, the latency of next_down
is now on par with the measurement noise due to cache misses and the like.
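As a hedged illustration (not the actual code of this commit; the function
name is made up), handling the infinite-second-argument case by hand can be
done directly on the IEEE-754 bit pattern:
#include <stdint.h>
#include <string.h>

/* Hedged sketch, not the actual code of this commit: the next double
   toward +infinity, computed directly on the IEEE-754 bit pattern
   instead of calling nextafter(x, INFINITY). */
static double next_up_sketch(double x)
{
  uint64_t b;
  memcpy(&b, &x, sizeof b);                    /* reinterpret the bits */
  if ((b & UINT64_C(0x7ff0000000000000)) == UINT64_C(0x7ff0000000000000)) {
    if (b != UINT64_C(0xfff0000000000000))
      return x;                                /* NaN or +inf: unchanged */
    b = UINT64_C(0xffefffffffffffff);          /* -inf -> -DBL_MAX */
  } else if ((b & UINT64_C(0x7fffffffffffffff)) == 0) {
    b = 1;                                     /* +-0 -> smallest subnormal */
  } else if (b >> 63) {
    b--;                                       /* negative: step toward zero */
  } else {
    b++;                                       /* positive: step away from zero */
  }
  memcpy(&x, &b, sizeof x);
  return x;
}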
|
|
operations.
Floating-point values are boxed, which means that any operation causes an
allocation. While short-lived, these allocations nonetheless cause the
minor heap to fill, which in turn triggers the garbage collector.
To reduce the number of allocations, I initially went with a shadow stack
mechanism for storing floating-point values. But assuming the CoqInterval
library is representative, this was way too complicated in practice, as
most stack-located values ended up being passed to nextdown and nextup
before being stored in memory.
So, this commit implements a different mechanism. Variants of nextdown and
nextup are added, which reuse the allocation of their input argument.
Obviously, this is only correct if there is no other reference to this
argument. To ensure this property, the commit only uses these opcodes
during a peephole optimization. If two primitive operations follow one
another, then the second one can reuse the allocation of the first one,
since it never had time to even reach the stack.
For CoqInterval, this divides the number of allocations due to
floating-point operations by about two.
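As a hedged sketch of the mechanism (the names below are illustrative, not
the actual Coq primitives or opcodes), the in-place variant differs from the
ordinary one only in where it stores its result:
#include <math.h>
#include <caml/mlvalues.h>
#include <caml/alloc.h>

/* Ordinary primitive: allocates a fresh boxed float for its result. */
value prim_next_down(value x)
{
  return caml_copy_double(nextafter(Double_val(x), -INFINITY));
}

/* In-place variant as selected by the peephole pass: it overwrites the
   box of its argument instead of allocating.  Only legal when x has no
   other reference, i.e. when it is the freshly allocated result of the
   immediately preceding primitive operation. */
value prim_next_down_in_place(value x)
{
  Store_double_val(x, nextafter(Double_val(x), -INFINITY));
  return x;
}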
The following snippet is made 4% faster by this commit (and 13% faster if
we consider only the floating-point operations).
From Coq Require Import Int63 BinPos PrimFloat.
Definition foo n :=
  let eps := sub (next_up one) one in
  Pos.iter (fun x => next_down (add x eps)) two n.
Time Eval vm_compute in foo 100000000.
|
|
|
|
|
|
Since the code is compiled in -fPIC mode, the compiler cannot inline the
functions, due to the ABI mandating the ability to interpose visible
symbols. Hiding the symbols of coq_float64.h would work, except that
float64.ml needs to reference them. (See #13124 for more details.)
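For context, here is a hedged C sketch of the interposition issue; the names
are illustrative and this is not necessarily the workaround applied by this
commit:
/* With -fPIC and default symbol visibility, a call to this exported
   function cannot be inlined: the ELF ABI allows another shared object
   to interpose its own definition of coq_fsub_example at load time. */
double coq_fsub_example(double x, double y) { return x - y; }

/* A 'static inline' copy has internal linkage, so it is not subject to
   interposition and the compiler is free to inline calls to it. */
static inline double coq_fsub_inline(double x, double y) { return x - y; }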
This commit improves performance by 7% on the following code.
From Coq Require Import Int63 BinPos PrimFloat.
Definition foo n :=
  let two := of_int63 2 in
  Pos.iter (fun x => sub (mul x two) two) two n.
Time Eval vm_compute in foo 100000000.
If we consider only the floating-point operations (by ignoring the cost of
the loop), the speedup is actually 30%.
|
|
This does not make the code any slower, since
Is_coq_array(accu) && Is_uint63(sp[0])
and
!Is_accu(accu) && !Is_accu(sp[0])
take the exact same number of tests to pass in the concrete case.
In the accumulator case, it takes one more test to fail, but we do not
care about performance in that case.
|
|
|
|
|
|
|
|
|
|
Otherwise, these constructs would be followed by a spurious Kreturn
opcode, when in tail position.
|
|
|
|
|
|
1. There is no point in marking plain integers as GC roots.
2. There is no need to restore the stack pointer, as the stack is not
allocated on the OCaml heap (unlike coq_env).
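As a hedged illustration of point 1 (the function below is made up, not
Coq's VM code): an OCaml immediate such as a tagged integer is never moved
or scanned by the GC, so it needs no root registration even across an
allocation:
#include <caml/mlvalues.h>
#include <caml/alloc.h>

/* Hedged sketch, not Coq's VM code: n is an immediate (tagged integer),
   so it stays valid across the allocation below without being registered
   as a GC root; only heap pointers would need such registration. */
value succ_as_float(value n)
{
  long i = Long_val(n) + 1;             /* unbox the immediate */
  return caml_copy_double((double)i);   /* allocates, yet n needs no protection */
}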
|
|
Fix #13348
|
|
Fix #13354
This change is very specific to array, but should not be a significant
obstacle to generalizing the feature to, e.g., axioms if we want to later.
|
|
BTW it was incorrect (array needs an instance)
|
|
|
|
Reviewed-by: Blaisorblade
Ack-by: SkySkimmer
|
|
|
|
|
|
Reviewed-by: gares
Ack-by: SkySkimmer
Ack-by: ejgallego
|
|
Co-authored-by: Gaëtan Gilbert <gaetan.gilbert@skyskimmer.net>
|
|
second-order/contextual pattern abbreviations to only parsing
Reviewed-by: gares
|
|
Reviewed-by: ejgallego
|
|
|