| Age | Commit message (Collapse) | Author |
|
No term is needed by the fast typing algorithm in the application case,
so this tweak brings the best of both worlds.
|
|
There is little point in this, as the type depends on an open value and
is never computed further.
|
|
|
|
|
|
We know that the two live in a common type, so the comparison check is
useless. Note that we only use this fast-path when
the branches are only made of lambda-abstractions, but this covers
all actual cases.
|
|
They are only used once, so it is pointless to reallocate arrays
that are going to be destructed immediately.
|
|
No reason to split the code: this function is only used once.
|
|
|
|
Just like in the Sixth Sense, it turns out it was dead code all along.
|
|
This is a partial resurrection of #6423 but only for the kernel.
IMHO, we pay a bit of a price for this, but it is a good safety
measure.
Only warnings "4: fragile pattern matching" and "44: open hides a type"
are disabled.
We would like to enable 44 once we do some alias cleanup.
|
|
The upper layers still need a mapping constant -> projection, which is
provided by Recordops.
|
|
|
|
Not sure if it is worth using in other places.
|
|
|
|
We eta-expand cofixpoints when needed, so that their call-by-need
evaluation is correctly implemented by VM and native_compute.
|
|
|
|
|
|
|
|
We expected `nparams + nrealargs + consnrealargs` but the `nrealargs`
should not be there. This breaks cumulativity of constructors for any
inductive with indices (so records still work, explaining why the test
case in #6747 works).
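For illustration, the corrected count can be sketched as follows; the helper below is hypothetical, not the kernel's actual code. A constructor's type abstracts over the inductive's parameters and the constructor's own real arguments, while the inductive's indices appear only in its conclusion.

```ocaml
(* Hypothetical sketch: expected number of binders in a constructor's
   type. The inductive's indices ([nrealargs]) show up in the
   conclusion of the constructor type, not as extra binders, so they
   must not be counted. *)
let constructor_nargs ~nparams ~consnrealargs = nparams + consnrealargs
```

For a record (no indices, `nrealargs = 0`), the wrong and the corrected formulas agree, which is consistent with the test case still working.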
|
|
|
|
|
|
In Reductionops.infer_conv we did not have enough information to
properly try to unify irrelevant universes. This requires changing the
Reduction.universe_compare type a bit.
|
|
- Nothing to check in conversion as they have a common supertype
by typing.
- In inference, enforce that one is lower than the other.
|
|
|
|
Now that the cache is distinct, there should be no nasty side-effects changing
the value of one side while reducing the other.
|
|
|
|
|
|
|
|
|
|
information.
|
|
|
|
|
|
This ensures by construction that we never infer constraints outside
the variance model.
|
|
|
|
Since cumulativity of an inductive type is the universe constraints
which make a term convertible with its universe-renamed copy, the only
constraints we can get are between a universe and its copy.
As such we do not need to be able to represent arbitrary constraints
between universes and copied universes in a double-sized ucontext,
instead we can just keep around an array describing whether a bound
universe is covariant, invariant or irrelevant (CIC has no
contravariant conversion rule).
Printing is fairly obtuse and should be improved: when we print the
CumulativityInfo we add marks to the universes of the instance: = for
invariant, + for covariant and * for irrelevant. For instance,
Record Foo@{i j k} := { foo : Type@{i} -> Type@{j} }.
Print Foo.
gives
Cumulative Record Foo : Type@{max(i+1, j+1)} := Build_Foo
{ foo : Type@{i} -> Type@{j} }
(* =i +j *k |= *)
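The scheme above can be sketched in OCaml; the type and function names here are hypothetical, not the kernel's actual API:

```ocaml
(* Hypothetical sketch: one variance mark per bound universe, instead of
   a doubled context holding arbitrary constraints. *)
type variance = Irrelevant | Covariant | Invariant

(* Constraints generated between a universe [u] and its copy [u'] when
   checking a term against its universe-renamed copy. *)
let constraints_of_variance u u' = function
  | Irrelevant -> []                  (* no constraint at all *)
  | Covariant -> [ (u, "<=", u') ]    (* u <= u' *)
  | Invariant -> [ (u, "=", u') ]     (* u = u' *)
```

Since every constraint relates a universe to its own copy, an array of such marks over the instance carries the same information as the double-sized ucontext.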
|
|
|
|
The part in Reduction should be semantics-preserving, but Reductionops
only tried cumulativity if equality failed. This is probably wrong, so I
changed it.
|
|
|
|
This fixes the previous patch in rare corner-cases where unification code was
relying on both kernel conversion and specific transparent state.
|
|
This definitely qualifies as a micro-optimization, but it would not be
performed by Flambda. Indeed, it is unsound in general w.r.t. OCaml
semantics, as it involves a fixpoint and changes potential non-termination.
In our case it does not matter semantically, but it is indeed observable
on computation-intensive developments like UniMath.
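A minimal OCaml illustration of why this kind of hoisting is unsound in general; the names are made up for the example, not taken from the patch:

```ocaml
(* [diverge] never returns, so it must not be evaluated eagerly. *)
let rec diverge () : int = diverge ()

(* Original: [diverge ()] is only reached when [b] is true. *)
let orig b = if b then diverge () else 42

(* "Optimized" by hoisting the call out of the conditional: now even
   [hoisted false] loops forever, i.e. the transformation changed the
   termination behavior. This is why Flambda refuses to perform it. *)
let hoisted b = let x = diverge () in if b then x else 42
```

The manual optimization is safe only because, in the kernel's case, the hoisted computation is known to terminate.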
|
|
- use Redflags.red_projection
- share unfold_projection between CClosure and Reduction
|
|
This heuristic is justified by the fact that during a conversion check
between a flexible and a rigid term, the flexible one is eventually going
to be fully weak-head normalized. So in this case, instead of performing
many small reduction steps on the flexible term, we perform full
weak-head reduction, including delta.
It is slightly more efficient in actual developments, and it fixes a corner
case encountered by Jason Gross.
Fixes #6667: Kernel conversion is much, much slower than `Eval lazy`.
|
|
Adding a "let-in"-sensitive function hnf_prod_applist_assum to
instantiate parameters and using it for printing.
Thanks to PMP for reporting.
|
|
|
|
|
|
A new module introduced in OCaml 4.05, I think; it can create problems
when linking with the OCaml toplevel for `Drop`.
|
|
Extending terms is notoriously difficult. We try to get more help from
the compiler by making sure such an extension will trigger
non-exhaustive pattern-matching warnings.
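A small OCaml sketch of the idea; the type and function are illustrative stand-ins, not Coq's actual `constr`:

```ocaml
(* A toy term type standing in for the kernel's term representation. *)
type constr =
  | Rel of int
  | Lambda of constr
  | App of constr * constr

(* By naming every constructor and avoiding a catch-all [_] case, adding
   a new constructor to [constr] makes the compiler flag this match as
   non-exhaustive, pointing at every spot that must be extended. *)
let rec size = function
  | Rel _ -> 1
  | Lambda c -> 1 + size c
  | App (c1, c2) -> 1 + size c1 + size c2
```

A wildcard branch would silently absorb the new case instead, which is exactly what this change guards against.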
|
|
|
|
We do up to `Term`, which is the main bulk of the changes.
|
|
This will allow merging `Names` back with `API.Names`.
|