author    Jim Fehrle  2019-08-05 15:10:32 -0700
committer Jim Fehrle  2019-11-20 08:53:00 -0800
commit    b4eca882b6692b6374dfff8517f9f5a5cc4970f5 (patch)
tree      ed72a4b0a4cc67c4a988349fb28e0600e7f03ea7 /doc/sphinx
parent    4aa756934eb37c6b6d70eddf2b46871bb8ff0956 (diff)
Update grammar in the Terms section of Gallina chapter
Update doc_grammar tool

The grammar in the doc is generated semi-automatically with doc_grammar:

- the grammar is automatically extracted from the mlg files
- developer-prepared editing scripts *.mlg_edit modify the extracted
  grammar for clarity, simplicity and ordering of productions
- chunks of the resulting grammar are automatically inserted into the
  rsts using instructions embedded in the rsts

Running doc_grammar is currently a manual step. The grammar updates in
the rst files have been manually reviewed.
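The third step of the pipeline described above, splicing extracted grammar chunks into the rst files at embedded instruction markers, can be sketched as follows. This is an illustrative sketch only, not the actual doc_grammar implementation; the ``splice_grammar`` helper and its treatment of the ``.. insertgram`` marker are assumptions for the example.

```python
def splice_grammar(lines, marker, productions):
    """Replace the indented block that follows an '.. insertgram <marker> ...'
    instruction line with freshly extracted productions (hypothetical sketch
    of doc_grammar's insertion step, not its real code)."""
    out, i = [], 0
    while i < len(lines):
        out.append(lines[i])
        if lines[i].startswith(".. insertgram " + marker):
            i += 1
            # Drop the stale block: indented production lines and blank lines.
            while i < len(lines) and (lines[i].startswith("   ") or lines[i] == ""):
                i += 1
            # Insert the regenerated productions, indented rst-style.
            out.extend("   " + p for p in productions)
            out.append("")  # keep a blank line before the following text
        else:
            i += 1
    return out
```

Run against a small rst fragment, it swaps the old block for the new one while leaving the surrounding prose untouched.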
Diffstat (limited to 'doc/sphinx')
-rw-r--r--  doc/sphinx/addendum/extended-pattern-matching.rst       |   2
-rw-r--r--  doc/sphinx/addendum/extraction.rst                      |   2
-rw-r--r--  doc/sphinx/addendum/generalized-rewriting.rst           |   2
-rwxr-xr-x  doc/sphinx/conf.py                                      |  17
-rw-r--r--  doc/sphinx/language/gallina-extensions.rst              |  40
-rw-r--r--  doc/sphinx/language/gallina-specification-language.rst  | 310
-rw-r--r--  doc/sphinx/proof-engine/ltac.rst                        |   6
-rw-r--r--  doc/sphinx/proof-engine/vernacular-commands.rst         |   6
-rw-r--r--  doc/sphinx/user-extensions/syntax-extensions.rst        |   2
9 files changed, 269 insertions, 118 deletions
diff --git a/doc/sphinx/addendum/extended-pattern-matching.rst b/doc/sphinx/addendum/extended-pattern-matching.rst
index b568160356..45b3f6f161 100644
--- a/doc/sphinx/addendum/extended-pattern-matching.rst
+++ b/doc/sphinx/addendum/extended-pattern-matching.rst
@@ -192,7 +192,7 @@ Disjunctive patterns
--------------------
Multiple patterns that share the same right-hand-side can be
-factorized using the notation :n:`{+| @mult_pattern}`. For
+factorized using the notation :n:`{+| @patterns_comma}`. For
instance, :g:`max` can be rewritten as follows:
.. coqtop:: in reset
diff --git a/doc/sphinx/addendum/extraction.rst b/doc/sphinx/addendum/extraction.rst
index 0754145b39..7136cc28d1 100644
--- a/doc/sphinx/addendum/extraction.rst
+++ b/doc/sphinx/addendum/extraction.rst
@@ -283,7 +283,7 @@ Notice that in the case of type scheme axiom (i.e. whose type is an
arity, that is a sequence of product finished by a sort), then some type
variables have to be given (as quoted strings). The syntax is then:
-.. cmdv:: Extract Constant @qualid @string ... @string => @string
+.. cmdv:: Extract Constant @qualid {+ @string } => @string
:undocumented:
The number of type variables is checked by the system. For example:
diff --git a/doc/sphinx/addendum/generalized-rewriting.rst b/doc/sphinx/addendum/generalized-rewriting.rst
index 4feda5e875..ca5b5e54a7 100644
--- a/doc/sphinx/addendum/generalized-rewriting.rst
+++ b/doc/sphinx/addendum/generalized-rewriting.rst
@@ -741,7 +741,7 @@ following grammar:
: topdown `strategy` (top-down)
: hints `ident` (apply hints from hint database)
: terms `term` ... `term` (any of the terms)
- : eval `redexpr` (apply reduction)
+ : eval `red_expr` (apply reduction)
: fold `term` (unify)
: ( `strategy` )
diff --git a/doc/sphinx/conf.py b/doc/sphinx/conf.py
index 867a19efe5..f1dd7479c5 100755
--- a/doc/sphinx/conf.py
+++ b/doc/sphinx/conf.py
@@ -183,18 +183,17 @@ todo_include_todos = False
nitpicky = True
nitpick_ignore = [ ('token', token) for token in [
- 'tactic',
- # 142 occurrences currently sort of defined in the ltac chapter,
- # but is it the right place?
- 'module',
- 'redexpr',
- 'modpath',
- 'dirpath',
'collection',
+ 'command',
+ 'dirpath',
+ 'modpath',
+ 'module',
+ 'red_expr',
+ 'symbol',
+ 'tactic',
'term_pattern',
'term_pattern_string',
- 'command',
- 'symbol' ]]
+]]
# -- Options for HTML output ----------------------------------------------
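The `conf.py` hunk above alphabetizes the tokens that `nitpicky` mode should not warn about. A minimal sketch of the same pattern, sorting at definition time so the list stays diff-friendly (the token names are taken from the hunk; the `undocumented_tokens` variable name is an assumption for the example):

```python
# Sphinx's nitpicky mode flags every unresolved cross-reference; listing
# grammar tokens that are intentionally left undocumented suppresses
# those warnings.  Sorting keeps future additions from reshuffling lines.
undocumented_tokens = [
    'tactic', 'module', 'red_expr', 'modpath', 'dirpath',
    'collection', 'command', 'symbol', 'term_pattern',
    'term_pattern_string',
]
nitpick_ignore = [('token', t) for t in sorted(undocumented_tokens)]
```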
diff --git a/doc/sphinx/language/gallina-extensions.rst b/doc/sphinx/language/gallina-extensions.rst
index a047bab421..ae0afc7819 100644
--- a/doc/sphinx/language/gallina-extensions.rst
+++ b/doc/sphinx/language/gallina-extensions.rst
@@ -79,14 +79,6 @@ To build an object of type :token:`ident`, one should provide the constructor
Definition half := mkRat true 1 2 (O_S 1) one_two_irred.
Check half.
-.. FIXME: move this to the main grammar in the spec chapter
-
-.. _record-named-fields-grammar:
-
- .. productionlist::
- record_term : {| [`field_def` ; … ; `field_def`] |}
- field_def : `ident` [`binders`] := `term`
-
Alternatively, the following syntax allows creating objects by using named fields, as
shown in this grammar. The fields do not have to be in any particular order, nor do they have
to be all present if the missing ones can be inferred or prompted for
@@ -163,10 +155,11 @@ available:
.. _record_projections_grammar:
- .. productionlist:: terms
- projection : `term` `.` ( `qualid` )
- : `term` `.` ( `qualid` `arg` … `arg` )
- : `term` `.` ( @`qualid` `term` … `term` )
+ .. insertgram term_projection term_projection
+
+ .. productionlist:: coq
+ term_projection : `term0` .( `qualid` `args_opt` )
+ : `term0` .( @ `qualid` `term1_list_opt` )
Syntax of Record projections
@@ -591,7 +584,7 @@ Advanced recursive functions
The following experimental command is available when the ``FunInd`` library has been loaded via ``Require Import FunInd``:
-.. cmd:: Function @ident {* @binder} { @decrease_annot } : @type := @term
+.. cmd:: Function @ident {* @binder} { @fixannot } : @type := @term
This command can be seen as a generalization of ``Fixpoint``. It is actually a wrapper
for several ways of defining a function *and other useful related
@@ -600,16 +593,11 @@ The following experimental command is available when the ``FunInd`` library has
The meaning of this declaration is to define a function ident,
similarly to ``Fixpoint``. Like in ``Fixpoint``, the decreasing argument must
be given (unless the function is not recursive), but it might not
- necessarily be *structurally* decreasing. The point of the :n:`{ @decrease_annot }` annotation
+ necessarily be *structurally* decreasing. The point of the :n:`{ @fixannot }` annotation
is to name the decreasing argument *and* to describe which kind of
decreasing criteria must be used to ensure termination of recursive
calls.
- .. productionlist::
- decrease_annot : struct `ident`
- : measure `term` `ident`
- : wf `term` `ident`
-
The ``Function`` construction also enjoys the ``with`` extension to define
mutually recursive definitions. However, this feature does not work
for non structurally recursive functions.
@@ -874,7 +862,7 @@ Sections create local contexts which can be shared across multiple definitions.
:name: Let Fixpoint
:undocumented:
- .. cmdv:: Let CoFixpoint @ident @cofix_body {* with @cofix_body}
+ .. cmdv:: Let CoFixpoint @ident @fix_body {* with @fix_body}
:name: Let CoFixpoint
:undocumented:
@@ -2323,6 +2311,18 @@ Printing universes
Existential variables
---------------------
+.. insertgram term_evar evar_binding
+
+.. productionlist:: coq
+ term_evar : ?[ `ident` ]
+ : ?[ ?`ident` ]
+ : ?`ident` `evar_bindings_opt`
+ evar_bindings_opt : @{ `evar_bindings_semi` }
+ : `empty`
+ evar_bindings_semi : `evar_bindings_semi` ; `evar_binding`
+ : `evar_binding`
+ evar_binding : `ident` := `term`
+
|Coq| terms can include existential variables which represent unknown
subterms to eventually be replaced by actual subterms.
diff --git a/doc/sphinx/language/gallina-specification-language.rst b/doc/sphinx/language/gallina-specification-language.rst
index 94e9ccf0d4..3cc101d06b 100644
--- a/doc/sphinx/language/gallina-specification-language.rst
+++ b/doc/sphinx/language/gallina-specification-language.rst
@@ -52,7 +52,7 @@ Comments
They can contain any character. However, embedded :token:`string` literals must be
correctly closed. Comments are treated as blanks.
-Identifiers and field identifiers
+Identifiers
Identifiers, written :token:`ident`, are sequences of letters, digits, ``_`` and
``'``, that do not start with a digit or ``'``. That is, they are
recognized by the following grammar (except that the string ``_`` is reserved;
@@ -60,7 +60,6 @@ Identifiers and field identifiers
.. productionlist:: coq
ident : `first_letter`[`subsequent_letter`…`subsequent_letter`]
- field : .`ident`
first_letter : a..z ∣ A..Z ∣ _ ∣ `unicode_letter`
subsequent_letter : `first_letter` ∣ 0..9 ∣ ' ∣ `unicode_id_part`
@@ -71,10 +70,6 @@ Identifiers and field identifiers
symbols and non-breaking space. :production:`unicode_id_part`
non-exhaustively includes symbols for prime letters and subscripts.
- Field identifiers, written :token:`field`, are identifiers prefixed by
- `.` (dot) with no blank between the dot and the identifier. They are used in
- the syntax of qualified identifiers.
-
Numerals
Numerals are sequences of digits with an optional fractional part
and exponent, optionally preceded by a minus sign. :token:`int` is an integer;
@@ -144,62 +139,50 @@ presentation of Cic is given in Chapter :ref:`calculusofinductiveconstructions`.
are given in Chapter :ref:`extensionsofgallina`. How to customize the syntax
is described in Chapter :ref:`syntaxextensionsandinterpretationscopes`.
-.. productionlist:: coq
- term : forall `binders` , `term`
- : fun `binders` => `term`
- : fix `fix_bodies`
- : cofix `cofix_bodies`
- : let `ident` [`binders`] [: `term`] := `term` in `term`
- : let fix `fix_body` in `term`
- : let cofix `cofix_body` in `term`
- : let ( [`name` , … , `name`] ) [`dep_ret_type`] := `term` in `term`
- : let ' `pattern` [in `term`] := `term` [`return_type`] in `term`
- : if `term` [`dep_ret_type`] then `term` else `term`
- : `term` : `term`
- : `term` <: `term`
- : `term` :>
- : `term` -> `term`
- : `term` `arg` … `arg`
- : @ `qualid` [`term` … `term`]
- : `term` % `ident`
- : match `match_item` , … , `match_item` [`return_type`] with
- : [[|] `equation` | … | `equation`] end
- : `qualid`
- : `sort`
- : `num`
- : _
- : ( `term` )
- arg : `term`
- : ( `ident` := `term` )
- binders : `binder` … `binder`
- binder : `name`
- : ( `name` … `name` : `term` )
- : ( `name` [: `term`] := `term` )
- : ' `pattern`
- name : `ident` | _
- qualid : `ident` | `qualid` `field`
- sort : SProp | Prop | Set | Type
- fix_bodies : `fix_body`
- : `fix_body` with `fix_body` with … with `fix_body` for `ident`
- cofix_bodies : `cofix_body`
- : `cofix_body` with `cofix_body` with … with `cofix_body` for `ident`
- fix_body : `ident` `binders` [`annotation`] [: `term`] := `term`
- cofix_body : `ident` [`binders`] [: `term`] := `term`
- annotation : { struct `ident` }
- match_item : `term` [as `name`] [in `qualid` [`pattern` … `pattern`]]
- dep_ret_type : [as `name`] `return_type`
- return_type : return `term`
- equation : `mult_pattern` | … | `mult_pattern` => `term`
- mult_pattern : `pattern` , … , `pattern`
- pattern : `qualid` `pattern` … `pattern`
- : @ `qualid` `pattern` … `pattern`
- : `pattern` as `ident`
- : `pattern` % `ident`
- : `qualid`
- : _
- : `num`
- : ( `pattern` | … | `pattern` )
+.. insertgram term binders_opt
+.. productionlist:: coq
+ term : forall `open_binders` , `term`
+ : fun `open_binders` => `term`
+ : `term_let`
+ : if `term` `as_return_type_opt` then `term` else `term`
+ : `term_fix`
+ : `term100`
+ term100 : `term_cast`
+ : `term10`
+ term10 : `term1` `args`
+ : @ `qualid` `universe_annot_opt` `term1_list_opt`
+ : `term1`
+ args : `args` `arg`
+ : `arg`
+ arg : ( `ident` := `term` )
+ : `term1`
+ term1_list_opt : `term1_list_opt` `term1`
+ : `empty`
+ empty :
+ term1 : `term_projection`
+ : `term0` % `ident`
+ : `term0`
+ args_opt : `args`
+ : `empty`
+ term0 : `qualid` `universe_annot_opt`
+ : `sort`
+ : `numeral`
+ : `string`
+ : _
+ : `term_evar`
+ : `term_match`
+ : ( `term` )
+ : {| `fields_def` |}
+ : `{ `term` }
+ : `( `term` )
+ : ltac : ( `ltac_expr` )
+ fields_def : `field_def` ; `fields_def`
+ : `field_def`
+ : `empty`
+ field_def : `qualid` `binders_opt` := `term`
+ binders_opt : `binders`
+ : `empty`
Types
-----
@@ -213,6 +196,13 @@ of types inside the syntactic class :token:`term`.
Qualified identifiers and simple identifiers
--------------------------------------------
+.. insertgram qualid field
+
+.. productionlist:: coq
+ qualid : `qualid` `field`
+ : `ident`
+ field : .`ident`
+
*Qualified identifiers* (:token:`qualid`) denote *global constants*
(definitions, lemmas, theorems, remarks or facts), *global variables*
(parameters or axioms), *inductive types* or *constructors of inductive
@@ -220,11 +210,15 @@ types*. *Simple identifiers* (or shortly :token:`ident`) are a syntactic subset
of qualified identifiers. Identifiers may also denote *local variables*,
while qualified identifiers do not.
-Numerals
---------
+Field identifiers, written :token:`field`, are identifiers prefixed by
+`.` (dot) with no blank between the dot and the identifier.
+
-Numerals have no definite semantics in the calculus. They are mere
-notations that can be bound to objects through the notation mechanism
+Numerals and strings
+--------------------
+
+Numerals and strings have no predefined semantics in the calculus. They are
+merely notations that can be bound to objects through the notation mechanism
(see Chapter :ref:`syntaxextensionsandinterpretationscopes` for details).
Initially, numerals are bound to Peano’s representation of natural
numbers (see :ref:`datatypes`).
@@ -243,6 +237,35 @@ numbers (see :ref:`datatypes`).
Sorts
-----
+.. insertgram sort universe_level
+
+.. productionlist:: coq
+ sort : Set
+ : Prop
+ : SProp
+ : Type
+ : Type @{ _ }
+ : Type @{ `universe` }
+ universe : max ( `universe_exprs_comma` )
+ : `universe_expr`
+ universe_exprs_comma : `universe_exprs_comma` , `universe_expr`
+ : `universe_expr`
+ universe_expr : `universe_name` `universe_increment_opt`
+ universe_name : `qualid`
+ : Set
+ : Prop
+ universe_increment_opt : + `num`
+ : `empty`
+ universe_annot_opt : @{ `universe_levels_opt` }
+ : `empty`
+ universe_levels_opt : `universe_levels_opt` `universe_level`
+ : `empty`
+ universe_level : Set
+ : Prop
+ : Type
+ : _
+ : `qualid`
+
There are four sorts :g:`SProp`, :g:`Prop`, :g:`Set` and :g:`Type`.
- :g:`SProp` is the universe of *definitionally irrelevant
@@ -266,6 +289,35 @@ More on sorts can be found in Section :ref:`sorts`.
Binders
-------
+.. insertgram open_binders exclam_opt
+
+.. productionlist:: coq
+ open_binders : `names` : `term`
+ : `binders`
+ names : `names` `name`
+ : `name`
+ name : _
+ : `ident`
+ binders : `binders` `binder`
+ : `binder`
+ binder : `name`
+ : ( `names` : `term` )
+ : ( `name` `colon_term_opt` := `term` )
+ : { `name` }
+ : { `names` `colon_term_opt` }
+ : `( `typeclass_constraints_comma` )
+ : `{ `typeclass_constraints_comma` }
+ : ' `pattern0`
+ : ( `name` : `term` | `term` )
+ typeclass_constraints_comma : `typeclass_constraints_comma` , `typeclass_constraint`
+ : `typeclass_constraint`
+ typeclass_constraint : `exclam_opt` `term`
+ : { `name` } : `exclam_opt` `term`
+ : `name` : `exclam_opt` `term`
+ exclam_opt : !
+ : `empty`
+
+
Various constructions such as :g:`fun`, :g:`forall`, :g:`fix` and :g:`cofix`
*bind* variables. A binding is represented by an identifier. If the binding
variable is not used in the expression, the identifier can be replaced by the
@@ -291,8 +343,8 @@ the case of a single sequence of bindings sharing the same type (e.g.:
.. index:: fun ... => ...
-Abstractions
-------------
+Abstractions: fun
+-----------------
The expression :n:`fun @ident : @type => @term` defines the
*abstraction* of the variable :token:`ident`, of type :token:`type`, over the term
@@ -313,8 +365,8 @@ Section :ref:`let-in`).
.. index:: forall
-Products
---------
+Products: forall
+----------------
The expression :n:`forall @ident : @type, @term` denotes the
*product* of the variable :token:`ident` of type :token:`type`, over the term :token:`term`.
@@ -359,6 +411,14 @@ Section :ref:`explicit-applications`).
Type cast
---------
+.. insertgram term_cast term_cast
+
+.. productionlist:: coq
+ term_cast : `term10` <: `term`
+ : `term10` <<: `term`
+ : `term10` : `term`
+ : `term10` :>
+
The expression :n:`@term : @type` is a type cast expression. It enforces
the type of :token:`term` to be :token:`type`.
@@ -384,6 +444,22 @@ guess the missing piece of information.
Let-in definitions
------------------
+.. insertgram term_let names_comma
+
+.. productionlist:: coq
+ term_let : let `name` `colon_term_opt` := `term` in `term`
+ : let `name` `binders` `colon_term_opt` := `term` in `term`
+ : let `single_fix` in `term`
+ : let `names_tuple` `as_return_type_opt` := `term` in `term`
+ : let ' `pattern` := `term` `return_type_opt` in `term`
+ : let ' `pattern` in `pattern` := `term` `return_type` in `term`
+ colon_term_opt : : `term`
+ : `empty`
+ names_tuple : ( `names_comma` )
+ : ()
+ names_comma : `names_comma` , `name`
+ : `name`
+
:n:`let @ident := @term in @term’`
denotes the local binding of :token:`term` to the variable
:token:`ident` in :token:`term`’. There is a syntactic sugar for let-in
@@ -392,8 +468,60 @@ stands for :n:`let @ident := fun {+ @binder} => @term in @term’`.
.. index:: match ... with ...
-Definition by case analysis
----------------------------
+Definition by cases: match
+--------------------------
+
+.. insertgram term_match record_pattern
+
+.. productionlist:: coq
+ term_match : match `case_items_comma` `return_type_opt` with `or_opt` `eqns_or_opt` end
+ case_items_comma : `case_items_comma` , `case_item`
+ : `case_item`
+ return_type_opt : `return_type`
+ : `empty`
+ as_return_type_opt : `as_name_opt` `return_type`
+ : `empty`
+ return_type : return `term100`
+ case_item : `term100` `as_name_opt` `in_opt`
+ as_name_opt : as `name`
+ : `empty`
+ in_opt : in `pattern`
+ : `empty`
+ or_opt : |
+ : `empty`
+ eqns_or_opt : `eqns_or`
+ : `empty`
+ eqns_or : `eqns_or` | `eqn`
+ : `eqn`
+ eqn : `patterns_comma_list_or` => `term`
+ patterns_comma_list_or : `patterns_comma_list_or` | `patterns_comma`
+ : `patterns_comma`
+ patterns_comma : `patterns_comma` , `pattern`
+ : `pattern`
+ pattern : `pattern10` : `term`
+ : `pattern10`
+ pattern10 : `pattern1` as `name`
+ : `pattern1_list`
+ : @ `qualid` `pattern1_list_opt`
+ : `pattern1`
+ pattern1_list : `pattern1_list` `pattern1`
+ : `pattern1`
+ pattern1_list_opt : `pattern1_list`
+ : `empty`
+ pattern1 : `pattern0` % `ident`
+ : `pattern0`
+ pattern0 : `qualid`
+ : {| `record_patterns_opt` |}
+ : _
+ : ( `patterns_or` )
+ : `numeral`
+ : `string`
+ patterns_or : `patterns_or` | `pattern`
+ : `pattern`
+ record_patterns_opt : `record_patterns_opt` ; `record_pattern`
+ : `record_pattern`
+ : `empty`
+ record_pattern : `qualid` := `pattern`
Objects of inductive types can be destructured by a case-analysis
construction called *pattern matching* expression. A pattern matching
@@ -403,11 +531,11 @@ to apply specific treatments accordingly.
This paragraph describes the basic form of pattern matching. See
Section :ref:`Mult-match` and Chapter :ref:`extendedpatternmatching` for the description
of the general form. The basic form of pattern matching is characterized
-by a single :token:`match_item` expression, a :token:`mult_pattern` restricted to a
+by a single :token:`case_item` expression, a :token:`patterns_comma` restricted to a
single :token:`pattern` and :token:`pattern` restricted to the form
:n:`@qualid {* @ident}`.
-The expression match ":token:`term`:math:`_0` :token:`return_type` with
+The expression match ":token:`term`:math:`_0` :token:`return_type_opt` with
:token:`pattern`:math:`_1` => :token:`term`:math:`_1` :math:`|` … :math:`|`
:token:`pattern`:math:`_n` => :token:`term`:math:`_n` end" denotes a
*pattern matching* over the term :token:`term`:math:`_0` (expected to be
@@ -417,10 +545,10 @@ expression. Each of :token:`pattern`:math:`_i` has a form :token:`qualid`
:token:`ident` where :token:`qualid` must denote a constructor. There should be
exactly one branch for every constructor of :math:`I`.
-The :token:`return_type` expresses the type returned by the whole match
+The :token:`return_type_opt` expresses the type returned by the whole match
expression. There are several cases. In the *non dependent* case, all
-branches have the same type, and the :token:`return_type` is the common type of
-branches. In this case, :token:`return_type` can usually be omitted as it can be
+branches have the same type, and the :token:`return_type_opt` is the common type of
+branches. In this case, :token:`return_type_opt` can usually be omitted as it can be
inferred from the type of the branches [2]_.
In the *dependent* case, there are three subcases. In the first subcase,
@@ -520,8 +648,30 @@ Sections :ref:`if-then-else` and :ref:`irrefutable-patterns`).
single: fix
single: cofix
-Recursive functions
--------------------
+Recursive and co-recursive functions: fix and cofix
+---------------------------------------------------
+
+.. insertgram term_fix term1_extended_opt
+
+.. productionlist:: coq
+ term_fix : `single_fix`
+ : `single_fix` with `fix_bodies` for `ident`
+ single_fix : fix `fix_body`
+ : cofix `fix_body`
+ fix_bodies : `fix_bodies` with `fix_body`
+ : `fix_body`
+ fix_body : `ident` `binders_opt` `fixannot_opt` `colon_term_opt` := `term`
+ fixannot_opt : `fixannot`
+ : `empty`
+ fixannot : { struct `ident` }
+ : { wf `term1_extended` `ident` }
+ : { measure `term1_extended` `ident_opt` `term1_extended_opt` }
+ term1_extended : `term1`
+ : @ `qualid` `universe_annot_opt`
+ ident_opt : `ident`
+ : `empty`
+ term1_extended_opt : `term1_extended`
+ : `empty`
The expression “``fix`` :token:`ident`:math:`_1` :token:`binder`:math:`_1` ``:``
:token:`type`:math:`_1` ``:=`` :token:`term`:math:`_1` ``with … with``
@@ -547,6 +697,8 @@ syntax: :n:`let fix @ident @binders := @term in` stands for
The Vernacular
==============
+.. insertgramXX vernac ident_opt2
+
.. productionlist:: coq
decorated-sentence : [ `decoration` … `decoration` ] `sentence`
sentence : `assumption`
@@ -568,7 +720,7 @@ The Vernacular
ind_body : `ident` [`binders`] : `term` :=
: [[|] `ident` [`binders`] [:`term`] | … | `ident` [`binders`] [:`term`]]
fixpoint : Fixpoint `fix_body` with … with `fix_body` .
- : CoFixpoint `cofix_body` with … with `cofix_body` .
+ : CoFixpoint `fix_body` with … with `fix_body` .
assertion : `assertion_keyword` `ident` [`binders`] : `term` .
assertion_keyword : Theorem | Lemma
: Remark | Fact
diff --git a/doc/sphinx/proof-engine/ltac.rst b/doc/sphinx/proof-engine/ltac.rst
index e37f300915..b2b426ada5 100644
--- a/doc/sphinx/proof-engine/ltac.rst
+++ b/doc/sphinx/proof-engine/ltac.rst
@@ -127,7 +127,7 @@ mode but it can also be used in toplevel definitions as shown below.
: gfail [`natural`] [`message_token` ... `message_token`]
: fresh [ `component` … `component` ]
: context `ident` [`term`]
- : eval `redexpr` in `term`
+ : eval `red_expr` in `term`
: type of `term`
: constr : `term`
: uconstr : `term`
@@ -986,9 +986,9 @@ Computing in a constr
Evaluation of a term can be performed with:
-.. tacn:: eval @redexpr in @term
+.. tacn:: eval @red_expr in @term
- where :n:`@redexpr` is a reduction tactic among :tacn:`red`, :tacn:`hnf`,
+ where :n:`@red_expr` is a reduction tactic among :tacn:`red`, :tacn:`hnf`,
:tacn:`compute`, :tacn:`simpl`, :tacn:`cbv`, :tacn:`lazy`, :tacn:`unfold`,
:tacn:`fold`, :tacn:`pattern`.
diff --git a/doc/sphinx/proof-engine/vernacular-commands.rst b/doc/sphinx/proof-engine/vernacular-commands.rst
index e87b76b4ab..89b24ea8a3 100644
--- a/doc/sphinx/proof-engine/vernacular-commands.rst
+++ b/doc/sphinx/proof-engine/vernacular-commands.rst
@@ -195,7 +195,7 @@ Requests to the environment
(see Section :ref:`invocation-of-tactics`).
-.. cmd:: Eval @redexpr in @term
+.. cmd:: Eval @red_expr in @term
This command performs the specified reduction on :n:`@term`, and displays
the resulting term with its type. The term to be reduced may depend on
@@ -1146,7 +1146,7 @@ described first.
Print all the currently non-transparent strategies.
-.. cmd:: Declare Reduction @ident := @redexpr
+.. cmd:: Declare Reduction @ident := @red_expr
This command allows giving a short name to a reduction expression, for
instance ``lazy beta delta [foo bar]``. This short name can then be used
@@ -1158,7 +1158,7 @@ described first.
functor applications will be rejected if these declarations are not
local. The name :n:`@ident` cannot be used directly as an Ltac tactic, but
nothing prevents the user from also performing a
- :n:`Ltac @ident := @redexpr`.
+ :n:`Ltac @ident := @red_expr`.
.. seealso:: :ref:`performingcomputations`
diff --git a/doc/sphinx/user-extensions/syntax-extensions.rst b/doc/sphinx/user-extensions/syntax-extensions.rst
index 02910e603a..dbe714c388 100644
--- a/doc/sphinx/user-extensions/syntax-extensions.rst
+++ b/doc/sphinx/user-extensions/syntax-extensions.rst
@@ -871,7 +871,7 @@ notations are given below. The optional :production:`scope` is described in
: Inductive `ind_body` [`decl_notation`] with … with `ind_body` [`decl_notation`].
: CoInductive `ind_body` [`decl_notation`] with … with `ind_body` [`decl_notation`].
: Fixpoint `fix_body` [`decl_notation`] with … with `fix_body` [`decl_notation`].
- : CoFixpoint `cofix_body` [`decl_notation`] with … with `cofix_body` [`decl_notation`].
+ : CoFixpoint `fix_body` [`decl_notation`] with … with `fix_body` [`decl_notation`].
: [Local] Declare Custom Entry `ident`.
decl_notation : [where `string` := `term` [: `scope`] and … and `string` := `term` [: `scope`]].
modifiers : `modifier`, … , `modifier`