Now it is a private field, and locations are optional.
========================================================================
Also remove obvious comments.
========================================================================
The transition has been done somewhat bluntly. I think we can still save
a lot of useless normalizations here and there by providing the right
API in EConstr. Nonetheless, this is a first step.
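As an aside, here is a minimal sketch (a hypothetical helper, not part of
this patch) of the style the EConstr API enables: EConstr.kind expands
instantiated evars lazily under the evar map, so code can traverse a term
without normalizing it first or round-tripping through Constr.

    (* Hypothetical helper: find the head constant of an application,
       letting EConstr.kind look through defined evars on the fly. *)
    let rec head_constant sigma (c : EConstr.t) =
      match EConstr.kind sigma c with
      | Constr.App (f, _) -> head_constant sigma f
      | Constr.Const (cst, _) -> Some cst
      | _ -> None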
========================================================================
For now we only normalize sorts, and we leave instances for the next
commit.
========================================================================
Incidentally, this fixes a printing bug in output/inference.v where the
displayed name of an evar was the wrong one because its type was not
evar-expanded enough.
========================================================================
This removes quite a few unsafe casts. Unfortunately, I had to reintroduce
the old non-module-based names for these data structures, because I could
not easily reproduce the same hierarchy in EConstr.
========================================================================
This makes it possible to factorize code and avoids unnecessary
back-and-forth conversions between the various types of terms.
Note that functions from typing may now raise PretypeError rather
than TypeError, because they call the proper wrapper. I think they were
wrongly calling the kernel because of an oversight with opened modules.
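Concretely, callers that used to catch the kernel's TypeError should now
be prepared for PretypeError. A sketch of the updated pattern (hypothetical
caller; Typing.unsafe_type_of is assumed to have type
env -> evar_map -> constr -> types):

    (* Probe whether a term typechecks, catching the pretyper's error
       rather than the kernel's TypeError. *)
    let typechecks env sigma c =
      try ignore (Typing.unsafe_type_of env sigma c); true
      with Pretype_errors.PretypeError _ -> false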
========================================================================
Typing.type_of was using conversion for types of fixpoints while it
could have used unification.
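The distinction matters because plain conversion cannot instantiate evars,
whereas unification may refine the evar map. A sketch of the two entry
points (names assumed: Reductionops.is_conv, and Evarconv.the_conv_x with
its UnableToUnify exception):

    (* Conversion only tests convertibility under the current sigma: *)
    let convertible env sigma t1 t2 = Reductionops.is_conv env sigma t1 t2

    (* Unification may additionally instantiate evars, refining sigma: *)
    let unify env sigma t1 t2 =
      try Some (Evarconv.the_conv_x env t1 t2 sigma)
      with Evarconv.UnableToUnify _ -> None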
========================================================================
module)
For the moment, there is an Error module in compilers-lib/ocamlbytecomp.cm(x)a
========================================================================
This is a reimplementation of Hugo's PR#117.
We are trying to address the problem that the names of some reduction
functions did not say what the functions were doing (e.g. whd_betadeltaiota
was also performing let-in reduction). Like PR#117, we are careful that no
function changes semantics without a change of name. Porting existing ML
code should be a matter of renaming a few function calls.
Also, we introduce more precise reduction flags fMATCH, fFIX and fCOFIX,
collectively denominated iota (a usage sketch follows the lists below).
We renamed the following functions:
Closure.betadeltaiota -> Closure.all
Closure.betadeltaiotanolet -> Closure.allnolet
Reductionops.beta -> Closure.beta
Reductionops.zeta -> Closure.zeta
Reductionops.betaiota -> Closure.betaiota
Reductionops.betaiotazeta -> Closure.betaiotazeta
Reductionops.delta -> Closure.delta
Reductionops.betalet -> Closure.betazeta
Reductionops.betadelta -> Closure.betadeltazeta
Reductionops.betadeltaiota -> Closure.all
Reductionops.betadeltaiotanolet -> Closure.allnolet
Closure.no_red -> Closure.nored
Reductionops.nored -> Closure.nored
Reductionops.nf_betadeltaiota -> Reductionops.nf_all
Reductionops.whd_betadelta -> Reductionops.whd_betadeltazeta
Reductionops.whd_betadeltaiota -> Reductionops.whd_all
Reductionops.whd_betadeltaiota_nolet -> Reductionops.whd_allnolet
Reductionops.whd_betadelta_stack -> Reductionops.whd_betadeltazeta_stack
Reductionops.whd_betadeltaiota_stack -> Reductionops.whd_all_stack
Reductionops.whd_betadeltaiota_nolet_stack -> Reductionops.whd_allnolet_stack
Reductionops.whd_betadelta_state -> Reductionops.whd_betadeltazeta_state
Reductionops.whd_betadeltaiota_state -> Reductionops.whd_all_state
Reductionops.whd_betadeltaiota_nolet_state -> Reductionops.whd_allnolet_state
Reductionops.whd_eta -> Reductionops.shrink_eta
Tacmach.pf_whd_betadeltaiota -> Tacmach.pf_whd_all
Tacmach.New.pf_whd_betadeltaiota -> Tacmach.New.pf_whd_all
And removed the following ones:
Reductionops.whd_betaetalet
Reductionops.whd_betaetalet_stack
Reductionops.whd_betaetalet_state
Reductionops.whd_betadeltaeta_stack
Reductionops.whd_betadeltaeta_state
Reductionops.whd_betadeltaeta
Reductionops.whd_betadeltaiotaeta_stack
Reductionops.whd_betadeltaiotaeta_state
Reductionops.whd_betadeltaiotaeta
They were unused, and having some reduction functions perform eta is
confusing, as whd_all and nf_all do not.
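A sketch of what the finer flags make expressible (assuming, as usual,
that the red_kind constants and red_add live in Closure.RedFlags): one can
now allow match reduction while keeping fixpoint unfolding blocked, which
the monolithic iota flag could not express.

    (* Hypothetical flag set: beta plus match reduction, but neither
       fix nor cofix unfolding. *)
    let beta_match : Closure.RedFlags.reds =
      let open Closure.RedFlags in
      red_add (red_add Closure.nored fBETA) fMATCH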
========================================================================
This allows a smooth addition of various unsafe flags without wreaking
havoc in the ML codebase.
========================================================================
The rationale is that
1. further typing flags may be available in the future
2. it makes it easier to trace and document the argument
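A generic sketch of the design point (record and field names hypothetical,
not the actual kernel API, whose typing_flags record may differ):

    (* With a record, adding a field later leaves existing call sites
       untouched, and the labels document the argument at each call. *)
    type typing_flags = { check_guarded : bool; check_universes : bool }
    let default_flags = { check_guarded = true; check_universes = true }
    let flags_for_unguarded_fixpoints =
      { default_flags with check_guarded = false }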
========================================================================
Some parts of Evarutil were related to the management of evars under
constraints. We put them in the Evardefine file.
========================================================================
Originally, rel-context was represented as:

      Context.rel_context = Names.Name.t * Constr.t option * Constr.t

Now it is represented as:

      Context.Rel.t = LocalAssum of Names.Name.t * Constr.t
                    | LocalDef of Names.Name.t * Constr.t * Constr.t

Originally, named-context was represented as:

      Context.named_context = Names.Id.t * Constr.t option * Constr.t

Now it is represented as:

      Context.Named.t = LocalAssum of Names.Id.t * Constr.t
                      | LocalDef of Names.Id.t * Constr.t * Constr.t
Motivation:

(1) In the "tactics/hipattern.ml4" file we define the
    "test_strict_disjunction" function, which looked like this:

      let test_strict_disjunction n lc =
        Array.for_all_i (fun i c ->
          match (prod_assum (snd (decompose_prod_n_assum n c))) with
          | [_,None,c] -> isRel c && Int.equal (destRel c) (n - i)
          | _ -> false) 0 lc

    Suppose that you do not know about rel-context and named-context
    (which is the case for people who have just started to read the
    source code). Merlin would tell you that the type of the value you
    are destructing by "match" is:

      'a * 'b option * Constr.t                    (* worst-case scenario *)

    or

      Names.Name.t * Constr.t option * Constr.t    (* best-case scenario (?) *)

    To me, this is akin to wearing an opaque veil. It is hard to figure
    out the meaning of the values you are looking at. In particular, it
    is hard to discover the connection between the value we are
    destructing above and the datatypes and functions defined in the
    "kernel/context.ml" file. In this case, the connection is there,
    but it is not visible (between the function above and the "Context"
    module).
------------------------------------------------------------------------
    Now consider what happens when the reader sees the same function
    presented in the following form:

      let test_strict_disjunction n lc =
        Array.for_all_i (fun i c ->
          match (prod_assum (snd (decompose_prod_n_assum n c))) with
          | [LocalAssum (_,c)] -> isRel c && Int.equal (destRel c) (n - i)
          | _ -> false) 0 lc

    If the reader hasn't seen "LocalAssum" before, (s)he can use Merlin
    to jump to the corresponding definition and learn more. In this
    case, the connection is there, and it is directly visible (between
    the function above and the "Context" module).
(2) Also, if we already have concepts such as:
    - local declaration
    - local assumption
    - local definition
    and we describe these notions meticulously in the Reference Manual,
    then it is a real pity not to reinforce the connection between the
    actual code and the abstract description we published.
========================================================================
cleaning done in e8c47b652a0.
It had no serious consequences except having whd-reduction blocked on
a let-in when typing a return clause with let-ins in the arity (a
priori resulting in return types of the form e.g. "(let x:=t in fun y
=> T) u" instead of T[x:=t;y:=u], if I'm not mistaken).
This fixes 3210.v in the test-suite.
========================================================================
- prod_applist
- prod_applist_assum
- lambda_applist
- lambda_applist_assum
expect an instance matching the quantified context. They are now in
term.ml, with "list" possibly being "vect".
The names are a bit arbitrary; better propositions are welcome. They are
put in term.ml in that reduction is, after all, not needed: the intent is
not to do β or ι on the fly, but rather to substitute a λΓ.c or ∀Γ.c
(seen as the internalization of a Γ⊢c) in one step, independently of the
idea of reducing.
On the other side:
- beta_applist
- beta_appvect
are seen as optimizations of application, doing reduction on the fly only
when possible. They are therefore kept as functions relevant to
reduction.ml.
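An illustrative use (assuming, per the above, Term.prod_applist with type
constr -> constr list -> constr):

    (* Substitute the instance [a; b] for the outer quantifiers of
       "forall (x:A) (y:B), T", yielding T[x:=a;y:=b] in one step,
       without building and then reducing a beta-redex. *)
    let instantiate_codomain (ty : Constr.t) (args : Constr.t list) =
      Term.prod_applist ty args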
========================================================================
Some functions from pretyping/typing.ml and their derivatives were
potential sources of evarmap leaks, as they dropped their resulting
evarmap. This commit clarifies the situation by renaming them according
to an unsafe_* scheme. Their sound variants are likewise renamed to the
old names. The following renamings were made.
- Typing.type_of -> unsafe_type_of
- Typing.e_type_of -> type_of
- A new e_type_of function that matches the e_ prefix policy
- Tacmach.pf_type_of -> pf_unsafe_type_of
- A new safe pf_type_of function.
All uses of unsafe_* functions should eventually be eliminated.
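After the renaming, the sound pattern threads the evar map returned by
Typing.type_of (assumed signature: env -> evar_map -> constr ->
evar_map * types) instead of dropping it:

    (* Each call may refine sigma (instantiating evars, adding universe
       constraints), so pass the updated map along. *)
    let types_of_pair env sigma c1 c2 =
      let sigma, t1 = Typing.type_of env sigma c1 in
      let sigma, t2 = Typing.type_of env sigma c2 in
      (sigma, t1, t2)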
========================================================================
In order not to be too costly, there is an [unsafe] flag to be set when
the tactic does not need to check that the partial proof term is
well-typed (to be used with caution, though).
This patch breaks one [fix]-based example in the refine test-suite, but a
huge development like CompCert still goes through.
========================================================================
- realargs: refers either to the indices of an inductive, or to the proper args
of a constructor
- params: refers to parameters (which are common to inductive and constructors)
- allargs = params + realargs
- realdecls: refers to the defining context of indices or proper args
of a constructor (it includes letins)
- paramdecls: refers to the defining context of params (it includes letins)
- alldecls = paramdecls + realdecls
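As a concrete anchor for the vocabulary: for an inductive with one
parameter A and one index n (say, a type of length-indexed vectors),
params = [A] and realargs = [n], so allargs counts both. A sketch of the
resulting identity (accessor names assumed to follow this scheme in
Inductiveops):

    (* With no let-ins, the *decls counts coincide with the *args ones. *)
    let check_vocabulary (ind : Names.inductive) =
      let nparams = Inductiveops.inductive_nparams ind in
      let nrealargs = Inductiveops.inductive_nrealargs ind in
      assert (Inductiveops.inductive_nallargs ind = nparams + nrealargs)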
========================================================================
a global reference that the current (goal) env contains all the
section variables that the global reference expects to be present.
Note that the test for inclusion might be costly: every time a
conversion happens in a section variable copied into a goal, this
conversion has to be redone when referring to a constant dependent on
this section variable.
It is unclear to me whether we should not instead give global names to
section variables, so that they exist even if they are not listed in
the context of the current goal.
Here are two examples which are still problematic:
  Section A.
  Let B := True : Type.
  Definition C := eq_refl : B = True.
  Theorem D : Type.
  clearbody B.
  set (x := C).
  unfold C in x.
  (* inconsistent context *)

or

  Section A.
  Let B : Type.
  exact True.
  Qed.
  Definition C := eq_refl : B = True. (* Note that this violated the Qed. *)
  Theorem D : Type.
  set (x := C).
  unfold C in x.
  (* inconsistent context *)
========================================================================
the computed direction argument. In case pbty is conv, no refreshing is
done, as the evar body must be convertible with the given term; however,
refreshing of template application evar arguments can still happen.
(Re)-Closing bug #2966.