Commit Graph

98 Commits

83c8f76235 Merge branch 'ps/ci-meson'
The meson-build procedure is integrated into CI to catch and
prevent bitrotting.

* ps/ci-meson:
  ci: wire up Meson builds
  t: introduce compatibility options to clar-based tests
  t: fix out-of-tree tests for some git-p4 tests
  Makefile: detect missing Meson tests
  meson: detect missing tests at configure time
  t/unit-tests: rename clar-based unit tests to have a common prefix
  Makefile: drop -DSUPPRESS_ANNOTATED_LEAKS
  ci/lib: support custom output directories when creating test artifacts
2024-12-23 09:32:25 -08:00
19fbad7918 Merge branch 'ps/ci-gitlab-update'
GitLab CI updates.

* ps/ci-gitlab-update:
  ci/lib: fix "CI setup" sections with GitLab CI
  ci/lib: do not interpret escape sequences in `group ()` arguments
  ci/lib: remove duplicate trap to end "CI setup" group
  gitlab-ci: update macOS images to Sonoma
2024-12-23 09:32:07 -08:00
eab5dbab92 ci: wire up Meson builds
Wire up CI builds for both GitLab and GitHub that use the Meson build
system.

While the setup is mostly trivial, one gotcha is that the test output
directory used to be in "t/", whereas now it is contained in the build
directory. To unify the logic across Makefile- and Meson-based builds we
explicitly set up the `TEST_OUTPUT_DIRECTORY` variable so that it is the
same for both build systems.
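
A minimal sketch of what such a unification might look like in
"ci/lib.sh"; the fallback location and the workspace variable are
assumptions made for illustration:

    # Point both build systems at one well-known output directory,
    # regardless of whether the Makefile or Meson drives the build.
    if test -z "$TEST_OUTPUT_DIRECTORY"
    then
        TEST_OUTPUT_DIRECTORY="$CI_PROJECT_DIR/t"
    fi
    export TEST_OUTPUT_DIRECTORY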

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-13 06:48:47 -08:00
714c134dd6 ci/lib: support custom output directories when creating test artifacts
Update `create_failed_test_artifacts ()` so that it can handle arbitrary
test output directories. This fixes creation of these artifacts for
macOS on GitLab CI, which uses a separate output directory already. This
will also be used by our out-of-tree builds with Meson.
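
A sketch of the reworked helper, assuming it now receives the output
directory as an argument; the file layout shown is illustrative:

    create_failed_test_artifacts () {
        test_output_dir="$1"
        mkdir -p "$test_output_dir/failed-test-artifacts"
        for exit_file in "$test_output_dir"/test-results/*.exit
        do
            test -e "$exit_file" || continue
            # Only tests with a non-zero exit code have failed.
            test 0 != "$(cat "$exit_file")" || continue
            test_name="$(basename "$exit_file" .exit)"
            cp "$test_output_dir/test-results/$test_name.out" \
                "$test_output_dir/failed-test-artifacts/"
        done
    }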

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-13 06:48:45 -08:00
c6b43f663e ci/lib: fix "CI setup" sections with GitLab CI
Whenever we source "ci/lib.sh" we wrap the directives in a separate
group so that they can easily be collapsed in the web UI. And as we
source the script multiple times during a single CI run we thus end up
with the same section name reused multiple times, as well.

This is broken on GitLab CI though, where reusing the same group name is
not supported. The consequence is that only the last of these sections
can be collapsed.

Fix this issue by including the name of the sourcing script in the
group's name.
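
A minimal sketch of the idea, assuming `begin_group ()` takes the
section name as its argument:

    # Derive the group name from the script sourcing ci/lib.sh so
    # that every section name is unique within a CI run.
    begin_group "CI setup ($(basename "$0"))"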

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-12 16:57:21 +09:00
d2ca12020f ci/lib: do not interpret escape sequences in group () arguments
We set up sections with GitLab CI by printing a bunch of escape
sequences via printf. The group name is controlled by the user and is
expanded directly into the format string, which may cause problems when
the argument contains escape sequences or formatting directives.

Fix this potential issue by using formatting directives to pass variable
data.
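
As an illustration (the section markers follow GitLab's documented
syntax; the exact strings in ci/lib.sh may differ):

    # Fragile: the group name is expanded into the format string, so
    # any "%" or "\" contained in it is interpreted by printf.
    printf "\e[0Ksection_start:$(date +%s):$group\r\e[0K$group\n"

    # Robust: pass all variable data via %s formatting directives.
    printf '\e[0Ksection_start:%s:%s\r\e[0K%s\n' "$(date +%s)" "$group" "$group"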

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-12 16:57:21 +09:00
33b06fa603 ci/lib: remove duplicate trap to end "CI setup" group
We explicitly trap on EXIT in order to end the "CI setup" group. This
isn't necessary though given that `begin_group ()` already sets up the
trap for us.

Remove the duplicate trap.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-12 16:57:21 +09:00
1fc7ddf35b test-lib: unconditionally enable leak checking
Over the last two releases we have plugged a couple hundred memory
leaks exposed by the Git test suite. With the preceding commits we have
finally fixed the last leak exposed by our test suite, which means that
we are now basically leak free wherever we have branch coverage.

From hereon, the Git test suite should ideally stay free of memory
leaks. Most importantly, any test suite that is being added should
automatically be subject to the leak checker, and if that test does not
pass it is a strong signal that the added code introduced new memory
leaks and should not be accepted without further changes.

Drop the infrastructure around TEST_PASSES_SANITIZE_LEAK to reflect this
new requirement. Like this, all test suites will be subject to the leak
checker by default.

This is being intentionally strict, but we still have an escape hatch:
the SANITIZE_LEAK prerequisite. There is one known case in t5601 where
the leak sanitizer itself is buggy, so adding this prereq in such cases
is acceptable. Another acceptable situation is when a newly added test
uncovers preexisting memory leaks: when fixing that memory leak would be
sufficiently complicated it is fine to annotate and document the leak
accordingly. But in any case, the burden is now on the patch author to
explain why exactly they have to add the SANITIZE_LEAK prerequisite.
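
For illustration, a hypothetical test using that escape hatch might
look like this (test name and body are made up):

    # Known preexisting leak, documented where the test was added;
    # the prereq skips the test only when the leak checker is active.
    test_expect_success !SANITIZE_LEAK 'clone with known leak' '
        git clone --no-local . target
    '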

The TEST_PASSES_SANITIZE_LEAK annotations will be dropped in the next
patch.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-21 08:23:47 +09:00
7b1f01f02e Merge branch 'ss/duplicate-typos'
Typofixes.

* ss/duplicate-typos:
  global: Fix duplicate word typos
2024-11-01 12:53:23 -04:00
c32d4a8cfe global: Fix duplicate word typos
Used the following regex to find these typos:

    (?<!struct )(?<=\s)([a-z]{1,}) \1(?=\s)
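
One way to run it against the tree, assuming git grep was built with
PCRE support:

    git grep -P '(?<!struct )(?<=\s)([a-z]{1,}) \1(?=\s)'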

Signed-off-by: Sven Strickroth <email@cs-ware.de>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2024-10-21 16:05:04 -04:00
b7a08e947e ci: handle Windows-based CI jobs in GitLab CI
We try to abstract away any differences between different CI platforms
in "ci/lib.sh", such that knowledge specific to e.g. GitHub Actions or
GitLab CI is neatly encapsulated in a single place. Next to some generic
variables, we also set up some variables that are specific to the actual
platform that the CI operates on, e.g. Linux or macOS.

We do not yet support Windows runners on GitLab CI. Unfortunately, those
systems do not use the same "CI_JOB_IMAGE" environment variable as both
Linux and macOS do. Instead, we can use the "OS" variable, which should
have a value of "Windows_NT" on Windows platforms.

Handle the combination of "$OS,$CI_JOB_IMAGE" and introduce support for
Windows.
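
A minimal sketch of that handling; the derived variable name is an
assumption:

    case "$OS,$CI_JOB_IMAGE" in
    Windows_NT,*)
        CI_OS_NAME=windows
        ;;
    *,macos-*)
        CI_OS_NAME=osx
        ;;
    *)
        CI_OS_NAME=linux
        ;;
    esac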

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-10-09 11:33:04 -07:00
7cd8f1cc6e ci: add Ubuntu 16.04 job to GitLab CI
In the preceding commits we had to convert the linux32 job to be based
on Ubuntu 20.04 instead of Ubuntu 16.04 due to a limitation in GitHub
Workflows. This was the only job left that still tested against this old
but supported Ubuntu version, and we have no other jobs that test with a
comparatively old Linux distribution.

Add a new job to GitLab CI that tests with Ubuntu 16.04 to cover the
resulting test gap. GitLab doesn't modify Docker images in the same way
GitHub does and thus doesn't fall prey to the same issue. There are two
compatibility issues uncovered by this:

  - Ubuntu 16.04 does not support HTTP/2 in Apache. We thus cannot set
    `GIT_TEST_HTTPD=true`, which would otherwise cause us to fail when
    Apache fails to start.

  - Ubuntu 16.04 cannot use recent JGit versions as they depend on a
    more recent Java runtime than we have available. We thus disable
    installing any kind of optional dependencies that do not come from
    the package manager.

These two restrictions are fine though, as we only really care about
whether Git compiles and runs on such old distributions in the first
place.
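
Sketched in terms of the "distro" variable our CI scripts use (see the
rename further down this log); the concrete values and structure are
assumptions:

    case "$distro" in
    ubuntu16.04)
        # No HTTP/2 in Apache and no recent Java runtime for JGit:
        # leave GIT_TEST_HTTPD unset and skip optional dependencies.
        ;;
    *)
        export GIT_TEST_HTTPD=true
        ;;
    esac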

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-09-13 09:02:30 -07:00
6da44da936 Merge branch 'rj/test-sanitize-leak-log-fix'
Tests that use the GIT_TEST_SANITIZE_LEAK_LOG feature got their exit
status inverted, which has been corrected.

* rj/test-sanitize-leak-log-fix:
  test-lib: GIT_TEST_SANITIZE_LEAK_LOG enabled by default
  test-lib: fix GIT_TEST_SANITIZE_LEAK_LOG
2024-07-17 10:47:24 -07:00
8c1d6691bc test-lib: GIT_TEST_SANITIZE_LEAK_LOG enabled by default
As we currently describe in t/README, it can happen that:

    Some tests run "git" (or "test-tool" etc.) without properly checking
    the exit code, or git will invoke itself and fail to ferry the
    abort() exit code to the original caller.

Therefore, GIT_TEST_SANITIZE_LEAK_LOG=true needs to be set to capture
all memory leaks triggered by our tests.

It seems unnecessary to force users to remember this option, as
forgetting it could lead to missed memory leaks.

We could solve the problem by making it "true" by default, but that
might suggest we think "false" makes sense, which isn't the case.

Therefore, the best approach is to remove the option entirely while
maintaining the capability to detect memory leaks in blind spots of our
tests.

Signed-off-by: Rubén Justo <rjusto@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-07-11 08:37:44 -07:00
4551858c18 Merge branch 'ps/ci-fix-detection-of-ubuntu-20'
Fix for an embarrassing typo that prevented Python2 tests from running
anywhere.

* ps/ci-fix-detection-of-ubuntu-20:
  ci: fix check for Ubuntu 20.04
2024-06-17 15:55:53 -07:00
df651330ab ci: fix check for Ubuntu 20.04
In 5ca0c455f1 (ci: fix Python dependency on Ubuntu 24.04, 2024-05-06),
we made the use of Python 2 conditional on whether or not the CI job
runs Ubuntu 20.04. There was a brown-paper-bag-style bug though, where
the condition forgot to invoke the `test` builtin. The result is
that the check always fails, and thus all of our jobs run with Python 3
by accident.

Fix this.
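
The shape of the bug and the fix, reconstructed for illustration
(variable and package names are assumptions):

    # Broken: the shell tries to execute the value of $distro as a
    # command, so the condition always fails and Python 3 wins.
    if "$distro" = ubuntu-20.04
    then
        PYTHON_PACKAGE=python2
    fi

    # Fixed: actually invoke the test builtin.
    if test "$distro" = ubuntu-20.04
    then
        PYTHON_PACKAGE=python2
    fi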

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-06-06 09:59:27 -07:00
3fc99d037f Merge branch 'jt/port-ci-whitespace-check-to-gitlab'
The "whitespace check" task that was enabled for GitHub Actions CI
has been ported to GitLab CI.

* jt/port-ci-whitespace-check-to-gitlab:
  gitlab-ci: add whitespace error check
  ci: make the whitespace report optional
  ci: separate whitespace check script
  github-ci: fix link to whitespace error
  ci: pre-collapse GitLab CI sections
2024-05-15 09:52:54 -07:00
dddddea4b5 Merge branch 'ps/ci-python-2-deprecation'
Unbreak CI jobs so that we do not attempt to use Python 2, which has
been removed from the platform.

* ps/ci-python-2-deprecation:
  ci: fix Python dependency on Ubuntu 24.04
2024-05-13 10:19:46 -07:00
5ca0c455f1 ci: fix Python dependency on Ubuntu 24.04
Newer versions of Ubuntu have dropped Python 2 starting with Ubuntu
23.04. By default though, our CI setups will try to use Python 2 on
all Ubuntu-based jobs except for the "linux-gcc" one.

We didn't notice this issue due to two reasons:

  - The "ubuntu:latest" tag always points to the latest LTS release.
    Until a few weeks ago this was Ubuntu 22.04, which still had Python
    2.

  - Our Docker-based CI jobs had their own script to install
    dependencies until 9cdeb34b96 (ci: merge scripts which install
    dependencies, 2024-04-12), where we didn't even try to install
    Python at all for many of them.

Since the CI refactorings were originally implemented, Ubuntu 24.04 has
been released, and it being an LTS version means that the "latest" tag
now points to that Python-2-less version. Consequently, those jobs
that use "ubuntu:latest" broke.

Address this by using Python 2 on Ubuntu 20.04 only, whereas we use
Python 3 on all other Ubuntu jobs. Eventually, we should think about
dropping support for Python 2 completely.

Reported-by: Justin Tobler <jltobler@gmail.com>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-05-06 12:26:46 -07:00
7789ea5842 ci: pre-collapse GitLab CI sections
Sections of CI output defined by `begin_group()` and `end_group()` are
expanded in GitLab pipelines by default. This can make CI job output
rather noisy and harder to navigate. Update the behavior for GitLab
pipelines to now collapse sections by default.
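
A minimal sketch of the change, following GitLab's documented section
syntax:

    # The "[collapsed=true]" flag makes the section start out
    # collapsed in the job log.
    printf '\e[0Ksection_start:%s:%s[collapsed=true]\r\e[0K%s\n' \
        "$(date +%s)" "$section" "$title"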

Signed-off-by: Justin Tobler <jltobler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-05-03 12:11:49 -07:00
2c5c7639e5 ci: fix setup of custom path for GitLab CI
Part of "install-dependencies.sh" is to install some binaries required
for tests into a custom directory that gets added to the PATH. This
directory is located at "$HOME/path" and thus depends on the current
user that the script executes as.

This creates problems for GitLab CI, which installs dependencies as the
root user, but runs tests as a separate, unprivileged user. As their
respective home directories are different, we will end up using two
different custom path directories. Consequently, the unprivileged user
will not be able to find the binaries that were set up as root user.

Fix this issue by allowing CI to override the custom path, which allows
GitLab to set up a constant value that isn't derived from "$HOME".
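
A minimal sketch of the override in ci/lib.sh, assuming the variable
is called CUSTOM_PATH:

    # Default to the old per-user location, but let the CI platform
    # override it with a constant path.
    CUSTOM_PATH="${CUSTOM_PATH:-$HOME/path}"
    export PATH="$CUSTOM_PATH:$PATH"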

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:50 -07:00
d1ef3d3b1d ci: merge custom PATH directories
We're downloading various executables required by our tests. Each of
these executables goes into its own directory, which is then appended to
the PATH variable. Consequently, whenever we add a new dependency and
thus a new directory, we would have to adapt to this change in several
places.

Refactor this to instead put all binaries into a single directory.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:50 -07:00
2d65e5b6a6 ci: rename "runs_on_pool" to "distro"
The "runs_on_pool" environment variable is used by our CI scripts to
distinguish the different kinds of operating systems. It is quite
specific to GitHub Actions though and not really a descriptive name.

Rename the variable to "distro" to clarify its intent.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-12 08:47:49 -07:00
c0350cb964 ci: add jobs to test with the reftable backend
Add CI jobs for both GitHub Workflows and GitLab CI to run Git with the
new reftable backend.
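
A minimal sketch of what such a job sets up, assuming the
GIT_TEST_DEFAULT_REF_FORMAT knob is what selects the backend:

    # Run the whole test suite with reftable as the default backend.
    export GIT_TEST_DEFAULT_REF_FORMAT=reftable
    make test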

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-02-07 08:28:37 -08:00
56090a35ab ci: add macOS jobs to GitLab CI
Add a job to GitLab CI which runs tests on macOS, which matches the
equivalent "osx-clang" job that we have for GitHub Workflows. One
significant difference though is that this new job runs on Apple M1
machines and thus uses the "arm64" architecture. As GCC does not yet
support this comparatively new architecture we cannot easily include an
equivalent for the "osx-gcc" job that exists in GitHub Workflows.

Note that one test script has tests marked with `test_expect_failure`
that are surprisingly passing:

  t7815-grep-binary.sh                             (Wstat: 0 Tests: 22 Failed: 0)
    TODO passed:   12

This seems to boil down to an unexpected difference in how regcomp(3P)
works when matching NUL bytes. Cross-checking with the respective GitHub
job shows that this is not an issue unique to the GitLab CI job as it
passes in the same way there.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-18 11:53:17 -08:00
c4b84b137a ci: make p4 setup on macOS more robust
When setting up Perforce on macOS we put both `p4` and `p4d` into
"$HOME/bin". On GitHub CI this directory is indeed contained in the PATH
environment variable and thus there is no need for additional setup
other than to put the binaries there. But GitLab CI does not do this,
and thus our
Perforce-based tests would be skipped there even though we download the
binaries.

Refactor the setup code to become more robust by downloading binaries
into a separate directory which we then manually append to our PATH.
This matches what we do on Linux-based jobs.

Note that it may seem like we already did append "$HOME/bin" to PATH
because we're actually removing the lines that adapt PATH. But we only
ever adapted the PATH variable in "ci/install-dependencies.sh", and
didn't adapt it when running "ci/run-build-and-test.sh". Consequently,
the required binaries wouldn't be found during the test run unless the
CI platform already had the "$HOME/bin" in PATH right from the start.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-18 11:53:17 -08:00
b4e6618fdf Merge branch 'js/ci-discard-prove-state'
The way CI testing used "prove" could lead to running the test
suite twice needlessly, which has been corrected.

* js/ci-discard-prove-state:
  ci: avoid running the test suite _twice_
2023-12-09 16:37:48 -08:00
14a4445d18 Merge branch 'ps/ci-gitlab'
Add support for GitLab CI.

* ps/ci-gitlab:
  ci: add support for GitLab CI
  ci: install test dependencies for linux-musl
  ci: squelch warnings when testing with unusable Git repo
  ci: unify setup of some environment variables
  ci: split out logic to set up failed test artifacts
  ci: group installation of Docker dependencies
  ci: make grouping setup more generic
  ci: reorder definitions for grouping functions
2023-12-09 16:37:48 -08:00
e7e03ef995 ci: avoid running the test suite _twice_
This is a late amendment of 4a6e4b9602 (CI: remove Travis CI support,
2021-11-23), whereby the `.prove` file (being written by the `prove`
command that is used to run the test suite) is no longer retained
between CI builds: this feature was only ever used in the Travis CI
builds; we tried for a while to do the same in Azure Pipelines CI runs
(but I gave up on it after a while), and we never used that feature in
GitHub Actions (nor does the new GitLab CI code use it).

Retaining the Prove cache has been fragile from the start, even though
the idea seemed good at the time: the `.prove` file caches information
about previous `prove` runs (`save`) and uses it (`slow`) to run the
tests in the order from longer-running to shorter ones, making optimal
use of the parallelism implied by `--jobs=<N>`.

However, using a Prove cache can cause some surprising behavior: when
`prove` has cached information about a test script it has run,
subsequent `prove` runs (with `--state=slow`) will run the same test
script again even if said script is not specified on the `prove`
command-line!

For a long time, this bug did not matter, right up until d8f416bbb8
(ci: run unit tests in CI, 2023-11-09).

But starting with that commit, we invoke `prove` _twice_ in CI, once to
run the regular test suite of regression test scripts, and once to run
the unit tests. Due to the bug, the second invocation re-runs all of the
tests that were already run as part of the first invocation. This not
only wastes build minutes, it also frequently causes the `osx-*` jobs to
fail because they already take a long time and now are likely to run
into a timeout.

The worst part about it is that there is actually no benefit to keep
running with `--state=slow,save`, ever since we decided no longer to
try to reuse the Prove cache between CI runs.

So let's just drop that Prove option and live happily ever after.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-14 09:24:23 +09:00
0e3b67e2aa ci: add support for GitLab CI
We already support Azure Pipelines and GitHub Workflows in the Git
project, but until now we do not have support for GitLab CI. While it is
arguably not in the interest of the Git project to maintain a ton of
different CI platforms, GitLab has recently ramped up its efforts and
tries to contribute to the Git project more regularly.

Part of the problem we hit at GitLab rather frequently is that our own
custom CI setup is so different from the setup that the Git project
has. More esoteric jobs like "linux-TEST-vars" that also set a
couple of environment variables do not exist in GitLab's custom CI
setup, and maintaining them to keep up with what Git does feels like
wasted time. The result is that we regularly send patch series upstream
that fail to compile or pass tests in GitHub Workflows. We would thus
like to integrate the GitLab CI configuration into the Git project to
help us send better patch series upstream and thus reduce overhead for
the maintainer. Results of these pipeline runs will be made available
(at least) in GitLab's mirror of the Git project at [1].

This commit introduces the integration into our regular CI scripts so
that most of the setup continues to be shared across all of the CI
solutions. Note that as the builds on GitLab CI run as an unprivileged
user, we need to pull in both the sudo and shadow packages to our
Alpine-based job to set this up.

[1]: https://gitlab.com/gitlab-org/git

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:10 +09:00
dd02c3b68c ci: squelch warnings when testing with unusable Git repo
Our CI jobs that run on Docker also mostly use the same architecture to
build and test Git via the "ci/run-build-and-tests.sh" script. These
scripts also provide some functionality to massage the Git repository
we're supposedly operating in.

In our Docker-based infrastructure we may not even have a Git repository
available though, which leads to warnings when those functions execute.
Make the helpers exit gracefully in case either there is no Git in our
PATH, or when not running in a Git repository.
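
The guard pattern this boils down to, as a sketch; the function name
and body are illustrative:

    adjust_git_repo () {
        # Exit gracefully when there is no usable Git or repository.
        command -v git >/dev/null 2>&1 || return 0
        git rev-parse --git-dir >/dev/null 2>&1 || return 0
        git config gc.auto 0
    }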

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:10 +09:00
9f17bef9a6 ci: unify setup of some environment variables
Both GitHub Actions and Azure Pipelines set up the environment variables
GIT_TEST_OPTS, GIT_PROVE_OPTS and MAKEFLAGS. And while most values are
actually the same, the setup is completely duplicate. With the upcoming
support for GitLab CI this duplication would only extend even further.

Unify the setup of those environment variables so that only the uncommon
parts are separated. While at it, we also perform some additional small
improvements:

    - We now always pass `--state=failed,slow,save` via GIT_PROVE_OPTS.
      It doesn't hurt on platforms where we don't persist the state, so
      this further reduces boilerplate.

    - When running on Windows systems we set `--no-chain-lint` and
      `--no-bin-wrappers`. Interestingly though, we did so _after_
      already having exported the respective environment variables.
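
A rough sketch of the resulting shape; the common values come from
this message, while the variable names used for the platform dispatch
are assumptions:

    GIT_PROVE_OPTS="--timer --jobs $NPROC --state=failed,slow,save"
    GIT_TEST_OPTS="--verbose-log -x"
    MAKEFLAGS="--jobs=$NPROC"
    case "$CI_OS_NAME" in
    windows*)
        # Adjust the options _before_ exporting them.
        GIT_TEST_OPTS="--no-chain-lint --no-bin-wrappers $GIT_TEST_OPTS"
        ;;
    esac
    export GIT_PROVE_OPTS GIT_TEST_OPTS MAKEFLAGS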

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:09 +09:00
e624f206bc ci: split out logic to set up failed test artifacts
We have some logic in place to create a directory with the output from
failed tests, which will subsequently be uploaded as CI artifacts.
We're about to add support for GitLab CI, which will want to reuse the
logic.

Split the logic into a separate function so that it is reusable.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:09 +09:00
a7d499cb93 ci: make grouping setup more generic
Make the grouping setup more generic by always calling `begin_group ()`
and `end_group ()` regardless of whether we have stubbed those functions
or not. This ensures we can more readily add support for additional CI
platforms.

Furthermore, the `group ()` function is made generic so that it is the
same for both GitHub Actions and for other platforms. There is a
semantic conflict here though: GitHub Actions used to call `set +x` in
`group ()` whereas the non-GitHub case unconditionally uses `set -x`.
The latter would get overridden if we kept the `set +x` in the generic
version of `group ()`. To resolve this conflict, we simply drop the `set
+x` in the generic variant of this function. As `begin_group ()` calls
`set -x` anyway this is not much of a change though, as the only
commands that aren't printed anymore now are the ones between the
beginning of `group ()` and the end of `begin_group ()`.

Last, this commit changes `end_group ()` to also accept a parameter that
indicates _which_ group should end. This will be required by a later
commit that introduces support for GitLab CI.
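
A simplified sketch of the resulting functions; GitHub-style markers
are shown for concreteness, the real helpers emit platform-specific
ones:

    begin_group () {
        need_to_end_group=t
        printf '::group::%s\n' "$1" >&2
        trap end_group EXIT
        set -x
    }

    end_group () {
        test -n "$need_to_end_group" || return 0
        set +x
        need_to_end_group=
        printf '::endgroup::\n' >&2    # GitLab would use "$1" here
        trap - EXIT
    }

    group () {
        group="$1"
        shift
        begin_group "$group"
        "$@"
        res=$?
        end_group "$group"
        return $res
    }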

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:09 +09:00
a4761b605c ci: reorder definitions for grouping functions
We define a set of grouping functions that are used to group together
output in our CI, where these groups then end up as collapsible sections
in the respective pipeline platform. The way these functions are defined
is not easily extensible though as we have an up front check for the CI
_not_ being GitHub Actions, where we define the non-stub logic in the
else branch.

Reorder the conditional branches such that we explicitly handle GitHub
Actions.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:08 +09:00
682a868f67 ci: upgrade to using macos-13
In April, GitHub announced that the `macos-13` pool is available:
https://github.blog/changelog/2023-04-24-github-actions-macos-13-is-now-available/.
It is only a matter of time until the `macos-12` pool is going away,
therefore we should switch now, without pressure of a looming deadline.

Since the `macos-13` runners no longer include Python2, we also drop
specifically testing with Python2 and switch uniformly to Python3, see
https://github.com/actions/runner-images/blob/HEAD/images/macos/macos-13-Readme.md
for details about the software available on the `macos-13` pool's
runners.

Also, on macOS 13, Homebrew seems to install a `gcc@9` package that no
longer comes with a regular `unistd.h` (there seems only to be a
`ssp/unistd.h`), and hence builds would fail with:

    In file included from base85.c:1:
    git-compat-util.h:223:10: fatal error: unistd.h: No such file or directory
      223 | #include <unistd.h>
          |          ^~~~~~~~~~
    compilation terminated.

The reason why we install GCC v9.x explicitly is historical, and back in
the day it was because it was the _newest_ version available via
Homebrew: 176441bfb5 (ci: build Git with GCC 9 in the 'osx-gcc' build
job, 2019-11-27).

To reinstate the spirit of that commit _and_ to fix that build failure,
let's switch to the now-newest GCC version: v13.x.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-03 18:52:02 +09:00
6ba913629f ci(linux-asan-ubsan): let's save some time
Every once in a while, the `git-p4` tests flake for reasons outside of
our control. It typically fails with "Connection refused" e.g. here:
https://github.com/git/git/actions/runs/5969707156/job/16196057724

	[...]
	+ git p4 clone --dest=/home/runner/work/git/git/t/trash directory.t9807-git-p4-submit/git //depot
	  Initialized empty Git repository in /home/runner/work/git/git/t/trash directory.t9807-git-p4-submit/git/.git/
	  Perforce client error:
		Connect to server failed; check $P4PORT.
		TCP connect to localhost:9807 failed.
		connect: 127.0.0.1:9807: Connection refused
	  failure accessing depot: could not run p4
	  Importing from //depot into /home/runner/work/git/git/t/trash directory.t9807-git-p4-submit/git
	 [...]

This happens in other jobs, too, but in the `linux-asan-ubsan` job it
hurts the most because that job often takes over a full hour to run,
therefore re-running a failed `linux-asan-ubsan` job is _very_ costly.

The purpose of the `linux-asan-ubsan` job is to exercise the C code of
Git, and any part of Git's source code that the `git-p4` tests run and
that would benefit from the attention of ASan/UBSan is exercised better
in other tests anyway, as debugging C code run via Python scripts can
get a bit hairy.

In fact, it is not even just `git-p4` that is the problem (even if it
flakes often enough to be problematic in the CI builds), but really the
part about Python scripts. So let's just prevent any Python parts of
the tests from being run in that job.

For good measure, also skip the Subversion tests because debugging C
code run via Perl scripts is as much fun as debugging C code run via
Python scripts. And it will reduce the time this very expensive job
takes, which is a big benefit.
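
One way to express this with build/test knobs Git already has; whether
the actual change used exactly these knobs is an assumption:

    case "$jobname" in
    linux-asan-ubsan)
        export NO_SVN_TESTS=YesPlease  # skip Perl-driven svn tests
        export NO_PYTHON=YesPlease     # skip git-p4 and other Python bits
        ;;
    esac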

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-08-29 13:54:55 -07:00
ec6915265a ci: run ASan/UBSan in a single job
When we started running sanitizers in CI via 1c0962c0c4 (ci: add address
and undefined sanitizer tasks, 2022-10-20), we ran them as two separate
CI jobs, since as that commit notes, the combination "seems to take
forever".

And indeed, it does with gcc. However, since the previous commit
switched to using clang, the situation is different, and we can save
some CPU by using a single job for both. Comparing before/after CI runs,
this saved about 14 minutes (the single combined job took 54m, versus
44m plus 24m for ASan and UBSan jobs, respectively). That's wall-clock
and not CPU, but since our jobs are mostly CPU-bound, the two should be
closely proportional.

This does increase the end-to-end time of a CI run, though, since before
this patch the two jobs could run in parallel, and the sanitizer job is
our longest single job. It also means that we won't get a separate
result for "this passed with UBSan but not with ASan" or vice versa).
But as 1c0962c0c4 noted, that is not a very useful signal in practice.
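
For reference, the combined configuration measured below boils down to
building and testing with both sanitizers enabled at once:

    make SANITIZE=address,undefined test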

Below are some more detailed timings of gcc vs clang that I measured by
running the test suite on my local workstation. Each measurement counts
only the time to run the test suite with each compiler (not the compile
time itself). We'll focus on the wall-clock times for simplicity, though
the CPU times follow roughly similar trends.

Here's a run with CC=gcc as a baseline:

  real	1m12.931s
  user	9m30.566s
  sys	8m9.538s

Running with SANITIZE=address increases the time by a factor of ~4.7x:

  real	5m40.352s
  user	49m37.044s
  sys	36m42.950s

Running with SANITIZE=undefined increases the time by a factor of ~1.7x:

  real	2m5.956s
  user	12m42.847s
  sys	19m27.067s

So let's call that 6.4 time units to run them separately (where a unit
is the time it takes to run the test suite with no sanitizers). As a
simplistic model, we might imagine that running them together would take
5.4 units (we save 1 unit because we are no longer running the test
suite twice, but just paying the sanitizer overhead on top of a single
run).

But that's not what happens. Running with SANITIZE=address,undefined
results in a factor of 9.3x:

  real	11m9.817s
  user	77m31.284s
  sys	96m40.454s

So not only did we not get faster when doing them together, we actually
spent 1.5x as much CPU as doing them separately! And while those
wall-clock numbers might not look too terrible, keep in mind that this
is on an unloaded 8-core machine. In the CI environment, wall-clock
times will be much closer to CPU times. So not only are we wasting CPU,
but we risk hitting timeouts.

Now let's try the same thing with clang. Here's our no-sanitizer
baseline run, which is almost identical to the gcc one (which is quite
convenient, because we can keep using the same "time units" to get an
apples-to-apples comparison):

  real	1m11.844s
  user	9m28.313s
  sys	8m8.240s

And now again with SANITIZE=address, we get a 5x factor (so slightly
worse than gcc's 4.7x, though I wouldn't read too much into it; there is
a fair bit of run-to-run noise):

  real	6m7.662s
  user	49m24.330s
  sys	44m13.846s

And with SANITIZE=undefined, we are at 1.5x, slightly outperforming gcc
(though again, that's probably mostly noise):

  real	1m50.028s
  user	11m0.973s
  sys	16m42.731s

So running them separately, our total cost is 6.5x. But if we combine
them in a single run (SANITIZE=address,undefined), we get:

  real	6m51.804s
  user	52m32.049s
  sys	51m46.711s

which is a factor of 5.7x. That's along the lines we'd hoped for!
Running them together saves us almost a whole time unit. And that's not
counting any time spent outside the test suite itself (starting the job,
setting up the environment, compiling) that we're no longer duplicating
by having two jobs.

So clang behaves like we'd hope: the overhead to run the sanitizers is
additive as you add more sanitizers. Whereas gcc's numbers seem very
close to multiplicative, almost as if the sanitizers were enforcing
their overheads on each other (though that is purely a guess on what is
going on; ultimately what matters to us is the amount of time it takes).

And that roughly matches the CI improvement I saw. A "time unit" there
is more like 12 minutes, and the observed time savings was 14 minutes
(with the extra presumably coming from avoiding duplicated setup, etc).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-06-03 10:35:13 +09:00
35898ad24d Makefile: use sha1collisiondetection by default on OSX and Darwin
When the sha1collisiondetection library was added and made the default
in [1], the interaction with APPLE_COMMON_CRYPTO added in [2] and [3]
seems to have been missed. On modern OSX and Darwin we are able to use
Apple's CommonCrypto both for SHA-1, and as a generic (but partial)
OpenSSL replacement.

This left OSX and Darwin without protection against the SHAttered
attack when building Git in its default configuration.

Let's also use sha1collisiondetection on OSX. To do so, we'll need to
split up the "APPLE_COMMON_CRYPTO" flag into that flag and a new
"APPLE_COMMON_CRYPTO_SHA1".

Because of this we can stop conflating whether we want to use Apple's
CommonCrypto at all, and whether we want to use it for SHA-1.  This
makes the CI recipe added in [4] simpler.
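
Sketched as build invocations; the flag names come from this message,
and the default value handling is simplified:

    # CommonCrypto stays the generic OpenSSL replacement while SHA-1
    # comes from sha1collisiondetection by default:
    make APPLE_COMMON_CRYPTO=YesPlease
    # Explicitly opting back into CommonCrypto's SHA-1:
    make APPLE_COMMON_CRYPTO=YesPlease APPLE_COMMON_CRYPTO_SHA1=YesPlease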

1. e6b07da278 (Makefile: make DC_SHA1 the default, 2017-03-17)
2. 4dcd7732db (Makefile: add support for Apple CommonCrypto facility, 2013-05-19)
3. 61067954ce (cache.h: eliminate SHA-1 deprecation warnings on Mac OS X, 2013-05-19)
4. 1ad5c3df35 (ci: use DC_SHA1=YesPlease on osx-clang job for CI,
   2022-10-20)

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-12-16 06:06:56 +09:00
0b32d1aea2 Merge branch 'ab/ci-use-macos-12'
CI fix.

* ab/ci-use-macos-12:
  CI: upgrade to macos-12, and pin OSX version
2022-12-10 14:01:06 +09:00
d11192255d CI: upgrade to macos-12, and pin OSX version
Per [1] and the warnings our CI is emitting, GitHub is phasing in
"macos-12" as their "macos-latest".

As with [2], let's pin our image to a specific version so that it is
not swept out from under us, and our upgrade cycle can be more
predictable than whenever GitHub changes their images.

1. https://github.com/actions/runner-images/issues/6384
2. 0178420b9c (github-actions: run gcc-8 on ubuntu-20.04 image,
   2022-11-25)

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-12-07 13:36:22 +09:00
515ffabccf Merge branch 'jx/ci-ubuntu-fix'
Adjust the GitHub CI to newer ubuntu release.

* jx/ci-ubuntu-fix:
  ci: install python on ubuntu
  ci: use the same version of p4 on both Linux and macOS
  ci: remove the pipe after "p4 -V" to catch errors
  github-actions: run gcc-8 on ubuntu-20.04 image
2022-11-29 10:41:06 +09:00
0d3507f3e7 ci: install python on ubuntu
Python is missing from the default ubuntu-22.04 runner image, which
prevents git-p4 from working. To install Python on Ubuntu, we need
to provide the correct package names:

 * On Ubuntu 18.04 (bionic), "/usr/bin/python2" is provided by the
   "python" package, and "/usr/bin/python3" is provided by the "python3"
   package.

 * On Ubuntu 20.04 (focal) and above, "/usr/bin/python2" is provided by
   the "python2" package which has a different name from bionic, and
   "/usr/bin/python3" is provided by "python3".

Since the "ubuntu-latest" runner image has a higher version, its
safe to use "python2" or "python3" package name.

Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-11-27 09:33:43 +09:00
31a1952bbd ci: use the same version of p4 on both Linux and macOS
There is a segmentation fault when running p4 v16.2 on Ubuntu 22.04,
which is the latest version of the Ubuntu runner image for GitHub
Actions.

By checking each version from [1], p4d version 21.1 and above can work
properly on ubuntu 22.04. But version 22.x will break some p4 test
cases. So p4 version 21.x is exactly the version we can use.

With this update, the versions of p4 for Linux and macOS happen to be
the same. So we can add the version number directly into the "P4WHENCE"
variable, and reuse it in p4 installation for macOS.

By removing the "LINUX_P4_VERSION" variable from "ci/lib.sh", the
comment left above has nothing to do with p4, but still applies to
git-lfs. Since we have a fixed version of git-lfs installed on Linux,
we may have a different version on macOS.

[1]: https://cdist2.perforce.com/perforce/

Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-11-27 09:32:56 +09:00
0178420b9c github-actions: run gcc-8 on ubuntu-20.04 image
GitHub has started to upgrade its runner image "ubuntu-latest" from
version "ubuntu-20.04" to version "ubuntu-22.04". CI will fail to find
and install the "gcc-8" package on the new runner image.

Change some of the runner images from "ubuntu-latest" to "ubuntu-20.04"
in order to install "gcc-8" as a dependency.

The first revision of this patch tried to replace "$runs_on_pool" in
"ci/*.sh" with a new "$runs_on_os" environment variable based on the
"os" field in the matrix strategy. But these "os" fields in matrix
strategies are obsolete legacies from commit [1] and commit [2], and
are no longer useful. So remove these unused "os" fields.

[1]: c08bb26010 (CI: rename the "Linux32" job to lower-case "linux32",
                 2021-11-23)
[2]: 25715419bf (CI: don't run "make test" twice in one job, 2021-11-23)

Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-11-27 09:31:12 +09:00
dc1cf3580e Makefile & test-tool: replace "DC_SHA1" variable with a "define"
Address the root cause of technical debt we've been carrying since
sha1collisiondetection was made the default in [1]. In a preceding
commit we narrowly fixed a bug where the "DC_SHA1" variable would be
unset (in combination with "NO_APPLE_COMMON_CRYPTO=" on OSX), even
though we had the sha1collisiondetection library enabled.

But the only reason we needed to have such a user-exposed knob went
away with [1], and it's been doing nothing useful since then. We don't
care if you define DC_SHA1=*; we only care that you don't ask for any
other SHA-1 implementation. If it turns out that you didn't, we'll use
sha1collisiondetection, whether you had "DC_SHA1" set or not.

As a result of this being confusing we had e.g. [2] for cmake and the
recent [3] for ci/lib.sh setting "DC_SHA1" explicitly, even though
this was always a NOOP.

A much simpler way to do this is to stop having the Makefile and
CMakeLists.txt set "DC_SHA1" to be picked up by test-lib.sh; let's
instead add a trivial "test-tool sha1-is-sha1dc". It returns zero if
we're using sha1collisiondetection, non-zero otherwise.
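
Usage then becomes a plain exit-code check, e.g. as a lazy
prerequisite in the test suite (the prerequisite name is illustrative):

    test_lazy_prereq SHA1_IS_SHA1DC 'test-tool sha1-is-sha1dc'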

1. e6b07da278 (Makefile: make DC_SHA1 the default, 2017-03-17)
2. c4b2f41b5f (cmake: support for testing git with ctest, 2020-06-26)
3. 1ad5c3df35 (ci: use DC_SHA1=YesPlease on osx-clang job for CI,
   2022-10-20)

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-07 22:11:51 -05:00
4039b8f112 Merge branch 'jc/more-sanitizer-at-ci'
Enable address and undefined sanitizer tasks at GitHub Actions CI.

* jc/more-sanitizer-at-ci:
  ci: add address and undefined sanitizer tasks
2022-10-25 17:11:44 -07:00
1ad5c3df35 ci: use DC_SHA1=YesPlease on osx-clang job for CI
7b8cfe34 (Merge branch 'ed/fsmonitor-on-networked-macos',
2022-10-17) broke the build on macOS with sha1dc by bypassing our
hash abstraction (git_SHA_CTX etc.), but it wasn't caught before the
problematic topic was merged down to the 'master' branch.  Nobody
was even compile testing with DC_SHA1 set, although it is the
recommended choice these days for folks who use SHA-1.

This was because the default for macOS uses Apple Common Crypto, and
neither of the two CI jobs overrode that default.  Tweak one of
them to use DC_SHA1 to improve the coverage.

We may want to give similar diversity for Linux jobs so that some of
them build with other implementations of SHA-1; they currently all
build and test with DC_SHA1 as that is the default everywhere
other than macOS.

But let's start small to fill only the immediate need.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-10-20 10:01:37 -07:00
1c0962c0c4 ci: add address and undefined sanitizer tasks
The current code is clean with these two sanitizers, and we would
like to keep it that way by running the checks for any new code.

The signal of "passed with asan, but not ubsan" (or vice versa) is
not that useful in practice, so it is tempting to run both santizers
in a single task, but it seems to take forever, so tentatively let's
try having two separate ones.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-10-20 09:20:59 -07:00
c24feabcfb CI: use "GIT_TEST_SANITIZE_LEAK_LOG=true" in linux-leaks
As noted in a preceding commit, the leak checking done by
"GIT_TEST_PASSING_SANITIZE_LEAK=true" (added in [1]) is incomplete
without combining it with "GIT_TEST_SANITIZE_LEAK_LOG=true".

Let's run our CI with that, to ensure that we catch cases where our
tests are missing the abort() exit code resulting from a leak for
whatever reason. The reasons for that are discussed in detail in a
preceding commit.

1. 956d2e4639 (tests: add a test mode for SANITIZE=leak, run it in
   CI, 2021-09-23)

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-07-27 16:35:40 -07:00