| Age | Commit message | Author |
|
|
|
Reviewed-by: Zimmi48
Ack-by: ejgallego
Reviewed-by: maximedenes
|
|
|
|
|
|
|
|
The current `perf` CI target is quite heavy, sometimes failing due to
out-of-memory errors. We use the target suggested by Jason Gross
(thanks!) in https://github.com/coq/coq/pull/12577#issuecomment-651970064
|
|
|
|
|
|
|
|
|
|
The previous refactoring in `Declare` to add `CInfo.t` makes this a
good moment to clean up the overlays w.r.t. deprecation.
All cases but one are a matter of simple renaming; in the remaining
case, the use of an internal API is replaced by the newer API.
|
|
Reviewed-by: ejgallego
|
|
This partially reverts commit 35a1cc4f5c708b745a2810a64d220f49eff4beca.
(I've not added back the nix things, since I'm not sure what purpose
they serve, and I've adjusted the targets slightly.)
The CI build should no longer take an enormously long time to start, and
fiat-crypto-legacy code is actively being used to track down memory
issues in #12487. Additionally, f-c-l revealed a genuine bug in #7825,
and so I'd like to keep f-c-l in the CI at least until #7825 is
finished.
I've been maintaining compatibility of f-c-l with master on the side
anyway, in part to continue some performance experiments with it, and I
expect to continue to do so at least until the end of this calendar
year; it'd be nice for me to get advance warning when a PR is going to
break it on master. (It seems reasonable to me to take it off the CI
again once I'm no longer maintaining it / collecting data from it,
and/or once #7825 is finished.)
|
|
Having two different modules led to internal API being exposed in the
.mli.
|
|
It's tested on the bench, so we might as well test it on the CI.
Hopefully it's not too memory-heavy. (It should only take a couple of
minutes, time-wise.)
|
|
Reviewed-by: ejgallego
|
|
|
|
|
|
Reviewed-by: Zimmi48
|
|
|
|
Reviewed-by: vbgl
|
|
Reviewed-by: maximedenes
|
|
Fixes #12496
|
|
|
|
|
|
|
|
Adapted from 747936a9d9a7402f537e1e1a857c7591d8e88d2a
|
|
|
|
|
|
|
|
Following upstream advice.
|
|
1. Fix casing of `build_prep_overlay` argument.
   Follow-up of 6cc6b87f997d7a5e848203b49bfedfaa96c53bb2
2. Call autoconf directly.
   Adapted from a9996619e2d2352e0e60faf4dbde78fa1549b2af
|
|
some machines.
Reviewed-by: SkySkimmer
Reviewed-by: cpitclaudel
|
|
|
|
This will make it possible to put a VsCoq toplevel in `ide/vscoq`.
|
|
Note that this should reduce the overall build time of
fiat-crypto-related targets by about 10--20 minutes, as I've removed
the heaviest jobs (about 25--30 minutes in serial) from the OCaml target.
I'd like to keep the OCaml target around just to make sure that Coq
doesn't introduce a change to extraction that breaks compilation of
extracted OCaml code. See https://github.com/ocaml/ocaml/issues/7826
for the issue tracking performance of compiling the extracted OCaml
code (and perhaps there should be another issue opened on the OCaml bug
tracker about flambda on the fiat-crypto extracted files?)
Alternative to #12405
Closes #12405
Fixes #12400
|
|
|
|
Fixes #12386
|
|
|
|
|
|
Reviewed-by: SkySkimmer
Reviewed-by: vbgl
|
|
h/t SkySkimmer at
https://github.com/coq/coq/pull/12316#issuecomment-630952659
|
|
As per PR review request
|
|
Fixes #12300
Note that I currently only paginate the API call for the number of
reviews, not the main API call, because (a) the main API call doesn't
seem subject to pagination (it returns a dict, not an array), and (b)
fetching the total number of pages incurs an extra API call for each
call that we want to paginate, even if there is only one page. We
could work around (b) with a significantly more complicated
`curl_paginate` function which heuristically recognizes the end of the
header / beginning of the body, such as the following:
```bash
curl_paginate() {
    # As per https://developer.github.com/v3/guides/traversing-with-pagination/#changing-the-number-of-items-received,
    # GitHub will never give us more than 100 items per page.
    url="$1?per_page=100"
    # We need to process the header to get the pagination. We have two
    # options:
    #
    # 1. We can make an extra API call at the beginning to get the total
    #    number of pages, search for a rel="last" link, and then loop
    #    over all the pages.
    #
    # 2. We can ask for the header info with every single curl request,
    #    search for a rel="next" link to follow to the next page, and
    #    then parse out the body from the header.
    #
    # Although (1) is simpler, we choose to do (2) to save an extra API
    # call per invocation of curl.
    while [ -n "${url}" ]; do
        response="$(curl -si "${url}")"
        # We search for something like
        # 'link: <https://api.github.com/repositories/1377159/pulls/12129/reviews?page=2>; rel="next", <https://api.github.com/repositories/1377159/pulls/12129/reviews?page=2>; rel="last"'
        # and take the first 'next' url.
        url="$(echo "${response}" | grep -m 1 -io '^link: .*>; rel="next"' | grep -o '<[^>]*>; rel="next"' | grep -o '<[^>]*>' | sed 's/[<>]//g')"
        echo "Response: ${response}" >&2  # debug output
        echo "${response}" |
            {
                is_header="yes"
                while read -r line; do
                    if [ "${is_header}" == "yes" ]; then
                        # We treat lines beginning with [ or { as the
                        # beginning of the response body.
                        if echo "${line}" | grep -q '^\s*[\[{]'; then
                            is_header="no"
                            echo "${line}"
                        fi
                    else
                        echo "${line}"
                    fi
                done
            }
    done
}
```
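For reference, a hypothetical invocation of the sketch above (the PR
number here is made up for illustration): the function takes just the
base endpoint URL, follows rel="next" links until none remain, and
emits the body of each page on stdout.
```bash
# Hypothetical usage of the curl_paginate sketch: fetch every page of
# review objects for a PR (PR number is made up for illustration).
reviews="$(curl_paginate "https://api.github.com/repos/coq/coq/pulls/12300/reviews")"
printf '%s\n' "${reviews}"
```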
|
|
Reviewed-by: SkySkimmer
|
|
Reviewed-by: SkySkimmer
Ack-by: jfehrle
|
|
|
|
This is a new development where I'm aggregating a specific set of
benchmarks. It's intended to be relatively lightweight, taking only a
handful of minutes. It's probably one of the few developments currently
testing Ltac2.
|
|
cc: #12350
|
|
Reviewed-by: Zimmi48
Reviewed-by: cpitclaudel
|