Compare commits

117 Commits

Author SHA1 Message Date
42ce4c7930 Git 2.25.5
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2021-02-12 15:49:55 +01:00
97d1dcb1ef Sync with 2.24.4
* maint-2.24:
  Git 2.24.4
  Git 2.23.4
  Git 2.22.5
  Git 2.21.4
  Git 2.20.5
  Git 2.19.6
  Git 2.18.5
  Git 2.17.6
  unpack_trees(): start with a fresh lstat cache
  run-command: invalidate lstat cache after a command finished
  checkout: fix bug that makes checkout follow symlinks in leading path
2021-02-12 15:49:55 +01:00
06214d171b Git 2.24.4
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2021-02-12 15:49:50 +01:00
92ac04b8ee Sync with 2.23.4
* maint-2.23:
  Git 2.23.4
  Git 2.22.5
  Git 2.21.4
  Git 2.20.5
  Git 2.19.6
  Git 2.18.5
  Git 2.17.6
  unpack_trees(): start with a fresh lstat cache
  run-command: invalidate lstat cache after a command finished
  checkout: fix bug that makes checkout follow symlinks in leading path
2021-02-12 15:49:50 +01:00
d60b6a96f0 Git 2.23.4
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2021-02-12 15:49:46 +01:00
4bd06fd490 Sync with 2.22.5
* maint-2.22:
  Git 2.22.5
  Git 2.21.4
  Git 2.20.5
  Git 2.19.6
  Git 2.18.5
  Git 2.17.6
  unpack_trees(): start with a fresh lstat cache
  run-command: invalidate lstat cache after a command finished
  checkout: fix bug that makes checkout follow symlinks in leading path
2021-02-12 15:49:45 +01:00
c753e2a7a8 Git 2.22.5
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2021-02-12 15:49:41 +01:00
bcf08f33d8 Sync with 2.21.4
* maint-2.21:
  Git 2.21.4
  Git 2.20.5
  Git 2.19.6
  Git 2.18.5
  Git 2.17.6
  unpack_trees(): start with a fresh lstat cache
  run-command: invalidate lstat cache after a command finished
  checkout: fix bug that makes checkout follow symlinks in leading path
2021-02-12 15:49:41 +01:00
c735d7470e Git 2.21.4
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2021-02-12 15:49:36 +01:00
b1726b1a38 Sync with 2.20.5
* maint-2.20:
  Git 2.20.5
  Git 2.19.6
  Git 2.18.5
  Git 2.17.6
  unpack_trees(): start with a fresh lstat cache
  run-command: invalidate lstat cache after a command finished
  checkout: fix bug that makes checkout follow symlinks in leading path
2021-02-12 15:49:35 +01:00
8b1a5f33d3 Git 2.20.5
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2021-02-12 15:49:17 +01:00
804963848e Sync with 2.19.6
* maint-2.19:
  Git 2.19.6
  Git 2.18.5
  Git 2.17.6
  unpack_trees(): start with a fresh lstat cache
  run-command: invalidate lstat cache after a command finished
  checkout: fix bug that makes checkout follow symlinks in leading path
2021-02-12 15:49:17 +01:00
9fb2a1fb08 Git 2.19.6
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2021-02-12 15:47:48 +01:00
fb049fd85b Sync with 2.18.5
* maint-2.18:
  Git 2.18.5
  Git 2.17.6
  unpack_trees(): start with a fresh lstat cache
  run-command: invalidate lstat cache after a command finished
  checkout: fix bug that makes checkout follow symlinks in leading path
2021-02-12 15:47:47 +01:00
6eed462c8f Git 2.18.5
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2021-02-12 15:47:43 +01:00
9b77cec89b Sync with 2.17.6
* maint-2.17:
  Git 2.17.6
  unpack_trees(): start with a fresh lstat cache
  run-command: invalidate lstat cache after a command finished
  checkout: fix bug that makes checkout follow symlinks in leading path
2021-02-12 15:47:42 +01:00
6b82d3eea6 Git 2.17.6
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2021-02-12 15:47:02 +01:00
22539ec3b5 unpack_trees(): start with a fresh lstat cache
We really want to avoid relying on stale information.

Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2021-02-12 15:47:02 +01:00
0d58fef58a run-command: invalidate lstat cache after a command finished
In the previous commit, we intercepted calls to `rmdir()` to invalidate
the lstat cache in the successful case, so that the lstat cache could
not falsely claim that a directory exists where there is none.

The same situation can arise, of course, when a separate process is
spawned (most notably, this is the case in `submodule_move_head()`).
Obviously, we cannot know whether a directory was removed in that
process, therefore we must invalidate the lstat cache afterwards.

Note: in contrast to `lstat_cache_aware_rmdir()`, we invalidate the
lstat cache even in case of an error: the process might have removed a
directory and still have failed afterwards.
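
The gist, as a minimal self-contained sketch (the real change lives in
run-command.c and symlinks.c; the one-entry cache and helper below are
simplified stand-ins):

	#include <sys/types.h>
	#include <sys/wait.h>

	/* one-entry cache: "this leading path is known to be a directory" */
	static char cached_dir_path[4096];

	static void invalidate_lstat_cache(void)
	{
		cached_dir_path[0] = '\0';	/* forget what we knew */
	}

	static int wait_for_child(pid_t pid)
	{
		int status = -1;

		waitpid(pid, &status, 0);
		/*
		 * The child may have removed or replaced directories, so
		 * the cache can no longer be trusted -- even on failure.
		 */
		invalidate_lstat_cache();
		return status;
	}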

Co-authored-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2021-02-12 15:47:02 +01:00
684dd4c2b4 checkout: fix bug that makes checkout follow symlinks in leading path
Before checking out a file, we have to confirm that all of its leading
components are real existing directories. And to reduce the number of
lstat() calls in this process, we cache the last leading path known to
contain only directories. However, when a path collision occurs (e.g.
when checking out case-sensitive files in case-insensitive file
systems), a cached path might have its file type changed on disk,
leaving the cache in an invalid state. Normally, this doesn't bring
any bad consequences as we usually check out files in index order, and
therefore, by the time the cached path becomes outdated, we no longer
need it anyway (because all files in that directory would have already
been written).

But, there are some users of the checkout machinery that do not always
follow the index order. In particular: checkout-index writes the paths
in the same order that they appear on the CLI (or stdin); and the
delayed checkout feature -- used when a long-running filter process
replies with "status=delayed" -- postpones the checkout of some entries,
thus modifying the checkout order.

When we have to check out an out-of-order entry and the lstat() cache is
invalid (due to a previous path collision), checkout_entry() may end up
using the invalid data and trusting that the leading components are
real directories when, in reality, they are not. In the best case
scenario, where the directory was replaced by a regular file, the user
will get an error: "fatal: unable to create file 'foo/bar': Not a
directory". But if the directory was replaced by a symlink, checkout
could actually end up following the symlink and writing the file at a
wrong place, even outside the repository. Since delayed checkout is
affected by this bug, it could be used by an attacker to write
arbitrary files during the clone of a maliciously crafted repository.

Some candidate solutions considered were to disable the lstat() cache
during unordered checkouts or sort the entries before passing them to
the checkout machinery. But both ideas include some performance penalty
and they don't future-proof the code against new unordered use cases.

Instead, we now manually reset the lstat cache whenever we successfully
remove a directory. Note: we do not even check whether the removed
directory is the one the lstat cache points to, because we might face a
scenario where the paths refer to the same location but differ due to
case folding, precomposed UTF-8 issues, or the presence of `..`
components in the path. Two regression tests, with case-collisions and
utf8-collisions, are also added for both checkout-index and delayed
checkout.

Note: to make the previously mentioned clone attack unfeasible, it would
be sufficient to reset the lstat cache only after the remove_subtree()
call inside checkout_entry(). This is the place where we would remove a
directory whose path collides with the path of another entry that we are
currently trying to check out (possibly a symlink). However, in the
interest of a thorough fix that does not leave Git open to
similar-but-not-identical attack vectors, we decided to intercept
all `rmdir()` calls in one fell swoop.
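
A minimal sketch of that interception (`lstat_cache_aware_rmdir()` is
the wrapper named in the commit above; wiring every rmdir() call to it,
e.g. via a macro, is what "one fell swoop" refers to):

	#include <unistd.h>

	int lstat_cache_aware_rmdir(const char *path)
	{
		/* any successfully removed directory invalidates the cache */
		int ret = rmdir(path);

		if (!ret)
			invalidate_lstat_cache();	/* see the sketch above */
		return ret;
	}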

This addresses CVE-2021-21300.

Co-authored-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
2021-02-12 15:47:02 +01:00
7397ca3373 Git 2.25.4
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:31:07 -07:00
b86a4be245 Git 2.24.3
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:30:34 -07:00
f2771efd07 Git 2.23.3
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:30:27 -07:00
c9808fa014 Git 2.22.4
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:30:19 -07:00
9206d27eb5 Git 2.21.3
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:30:08 -07:00
041bc65923 Git 2.20.4
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:28:57 -07:00
76b54ee9b9 Git 2.19.5
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:26:41 -07:00
ba6f0905fd Git 2.18.4
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:24:14 -07:00
df5be6dc3f Git 2.17.5
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:10:58 -07:00
1a3609e402 fsck: reject URL with empty host in .gitmodules
Git's URL parser interprets

	https:///example.com/repo.git

to have no host and a path of "example.com/repo.git".  Curl, on the
other hand, internally redirects it to https://example.com/repo.git.  As
a result, until "credential: parse URL without host as empty host, not
unset", tricking a user into fetching from such a URL would cause Git to
send credentials for another host to example.com.

Teach fsck to block and detect .gitmodules files using such a URL to
prevent sharing them with Git versions that are not yet protected.

A relative URL in a .gitmodules file could also be used to trigger this.
The relative URL resolver used for .gitmodules does not normalize
sequences of slashes and can follow ".." components out of the path part
and to the host part of a URL, meaning that such a relative URL can be
used to traverse from a https://foo.example.com/innocent superproject to
a https:///attacker.example.com/exploit submodule. Fortunately,
redundant extra slashes in .gitmodules are rare, so we can catch this by
detecting one after a leading sequence of "./" and "../" components.
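
A self-contained sketch of that detection (`url_has_empty_host` is a
hypothetical name; the real check lives in Git's fsck/credential code):

	#include <string.h>

	static int url_has_empty_host(const char *url)
	{
		const char *p = strstr(url, "://");

		if (p)
			return p[3] == '/';	/* "https:///..." => empty host */
		/* relative URL: skip the leading "./" and "../" components */
		while (!strncmp(url, "./", 2) || !strncmp(url, "../", 3))
			url += (url[1] == '/') ? 2 : 3;
		/* a redundant slash here can traverse into the host part */
		return url[0] == '/';
	}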

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
2020-04-19 16:10:58 -07:00
e7fab62b73 credential: treat URL with empty scheme as invalid
Until "credential: refuse to operate when missing host or protocol",
Git's credential handling code interpreted URLs with an empty scheme to
mean "give me credentials matching this host for any protocol".

Luckily libcurl does not recognize such URLs (it tries to look for a
protocol named "" and fails). Just in case that changes, let's reject
them within Git as well. This way, credential_from_url is guaranteed to
always produce a "struct credential" with protocol and host set.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:10:58 -07:00
c44088ecc4 credential: treat URL without scheme as invalid
libcurl permits making requests without a URL scheme specified.  In
this case, it guesses the protocol from the hostname, so I can run

	git ls-remote http::ftp.example.com/path/to/repo

and it would make an FTP request.

Any user intentionally using such a URL is likely to have made a typo.
Unfortunately, credential_from_url is not able to determine the host and
protocol in order to determine appropriate credentials to send, and
until "credential: refuse to operate when missing host or protocol",
this resulted in another host's credentials being leaked to the named
host.

Teach credential_from_url_gently to consider such a URL to be invalid
so that fsck can detect and block gitmodules files with such URLs,
allowing server operators to avoid serving them to downstream users
running older versions of Git.

This also means that when such URLs are passed on the command line, Git
will print a clearer error so affected users can switch to the simpler
URL that explicitly specifies the host and protocol they intend.

One subtlety: .gitmodules files can contain relative URLs, representing
a URL relative to the URL they were cloned from.  The relative URL
resolver used for .gitmodules can follow ".." components out of the path
part and past the host part of a URL, meaning that such a relative URL
can be used to traverse from a https://foo.example.com/innocent
superproject to a https::attacker.example.com/exploit submodule.
Fortunately a leading ':' in the first path component after a series of
leading './' and '../' components is unlikely to show up in other
contexts, so we can catch this by detecting that pattern.

Reported-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
2020-04-19 16:10:58 -07:00
fe29a9b7b0 credential: die() when parsing invalid urls
When we try to initialize credential loading by URL and find that the
URL is invalid, we set all fields to NULL in order to avoid acting on
malicious input. Later, when we request credentials, we diagnose the
erroneous input:

	fatal: refusing to work with credential missing host field

This is problematic in two ways:

- The message doesn't tell the user *why* we are missing the host
  field, so they can't tell from this message alone how to recover.
  There can be intervening messages after the original warning of
  bad input, so the user may not have the context to put two and two
  together.

- The error only occurs when we actually need to get a credential.  If
  the URL permits anonymous access, the only encouragement the user gets
  to correct their bogus URL is a quiet warning.

  This is inconsistent with the check we perform in fsck, where any use
  of such a URL as a submodule is an error.

When we see such a bogus URL, let's not try to be nice and continue
without helpers. Instead, die() immediately. This is simpler and
obviously safe. And there's very little chance of disrupting a normal
workflow.
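
Roughly, the non-gentle entry point becomes a thin wrapper (this sketch
assumes the `credential_from_url_gently()` API introduced elsewhere in
this series):

	void credential_from_url(struct credential *c, const char *url)
	{
		if (credential_from_url_gently(c, url, 0) < 0)
			die(_("credential url cannot be parsed: %s"), url);
	}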

It's _possible_ that somebody has a legitimate URL with a raw newline in
it. It already wouldn't work with credential helpers, so this patch
steps that up from an inconvenience to "we will refuse to work with it
at all". If such a case does exist, we should figure out a way to work
with it (especially if the newline is only in the path component, which
we normally don't even pass to helpers). But until we see a real report,
we're better off being defensive.

Reported-by: Carlo Arenas <carenas@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:10:58 -07:00
a2b26ffb1a fsck: convert gitmodules url to URL passed to curl
In 07259e74ec (fsck: detect gitmodules URLs with embedded newlines,
2020-03-11), git fsck learned to check whether URLs in .gitmodules could
be understood by the credential machinery when they are handled by
git-remote-curl.

However, the check is overbroad: it checks all URLs instead of only
URLs that would be passed to git-remote-curl. In principle a git:// or
file:/// URL does not need to follow the same conventions as an http://
URL; in particular, git:// and file:// protocols are not susceptible to
issues in the credential API because they do not support attaching
credentials.

In the HTTP case, the URL in .gitmodules does not always match the URL
that would be passed to git-remote-curl and the credential machinery:
Git's URL syntax allows specifying a remote helper followed by a "::"
delimiter and a URL to be passed to it, so that

	git ls-remote http::https://example.com/repo.git

invokes git-remote-http with https://example.com/repo.git as its URL
argument. With today's checks, that distinction does not make a
difference, but for a check we are about to introduce (for empty URL
schemes) it will matter.

.gitmodules files also support relative URLs. To ensure coverage for the
https-based embedded-newline attack, URL-decode them and check directly
for embedded newlines.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
2020-04-19 16:10:58 -07:00
8ba8ed568e credential: refuse to operate when missing host or protocol
The credential helper protocol was designed to be very flexible: the
fields it takes as input are treated as a pattern, and any missing
fields are taken as wildcards. This allows unusual things like:

  echo protocol=https | git credential reject

to delete all stored https credentials (assuming the helpers themselves
treat the input that way). But when helpers are invoked automatically by
Git, this flexibility works against us. If for whatever reason we don't
have a "host" field, then we'd match _any_ host. When you're filling a
credential to send to a remote server, this is almost certainly not what
you want.

Prevent this at the layer that writes to the credential helper. Add a
check to the credential API that the host and protocol are always passed
in, and add an assertion to the credential_write function that speaks
credential helper protocol to be doubly sure.
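
The API-layer check might look roughly like this (the function name is
illustrative; the error text matches the message quoted elsewhere in
this list):

	static void credential_check_context(const struct credential *c)
	{
		if (!c->protocol)
			die(_("refusing to work with credential missing protocol field"));
		if (!c->host)
			die(_("refusing to work with credential missing host field"));
	}

The writing side then gains a matching assertion that the required keys
are actually present before anything is sent to a helper.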

There are a few ways this can be triggered in practice:

  - the "git credential" command passes along arbitrary credential
    parameters it reads from stdin.

  - until the previous patch, when the host field of a URL is empty, we
    would leave it unset (rather than setting it to the empty string)

  - a URL like "example.com/foo.git" is treated by curl as if "http://"
    was present, but our parser sees it as a non-URL and leaves all
    fields unset

  - the recent fix for URLs with embedded newlines blanks the URL but
    otherwise continues. Rather than having the desired effect of
    looking up no credential at all, many helpers will return _any_
    credential

Our earlier test for an embedded newline didn't catch this because it
only checked that the credential was cleared, but didn't configure an
actual helper. Configuring the "verbatim" helper in the test would show
that it is invoked (it's obviously a silly helper which doesn't look at
its input, but the point is that it shouldn't be run at all). Since
we're switching this case to die(), we don't need to bother with a
helper. We can see the new behavior just by checking that the operation
fails.

We'll add new tests covering partial input as well (these can be
triggered through various means with url-parsing, but it's simpler to
just check them directly, as we know we are covered even if the url
parser changes behavior in the future).

[jn: changed to die() instead of logging and showing a manual
 username/password prompt]

Reported-by: Carlo Arenas <carenas@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:10:58 -07:00
24036686c4 credential: parse URL without host as empty host, not unset
We may feed a URL like "cert:///path/to/cert.pem" into the credential
machinery to get the key for a client-side certificate. That
credential has no hostname field, which is about to be disallowed (to
avoid confusion with protocols where a helper _would_ expect a
hostname).

This means as of the next patch, credential helpers won't work for
unlocking certs. Let's fix that by doing two things:

  - when we parse a url with an empty host, set the host field to the
    empty string (asking only to match stored entries with an empty
    host) rather than NULL (asking to match _any_ host).

  - when we build a cert:// credential by hand, similarly assign an
    empty string

It's the latter that is more likely to impact real users in practice,
since it's what's used for http connections. But we don't have good
infrastructure to test it.

The url-parsing version will help anybody using git-credential in a
script, and is easy to test.
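
The distinction is essentially one line in the parser; a sketch
(assuming a parsed host span `host`/`hostlen`, and Git's xmemdupz(),
which returns an allocated NUL-terminated copy):

	/*
	 * hostlen may be 0 (e.g. for "cert:///path/to/cert.pem");
	 * xmemdupz() then yields "", which matches only stored entries
	 * with an empty host. Leaving c->host NULL would instead match
	 * _any_ host.
	 */
	c->host = xmemdupz(host, hostlen);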

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:10:57 -07:00
73aafe9bc2 t0300: use more realistic inputs
Many of the tests in t0300 give partial inputs to git-credential,
omitting a protocol or hostname. We're checking only high-level things
like whether and how helpers are invoked at all, and we don't care about
specific hosts. However, in preparation for tightening up the rules
about when we're willing to run a helper, let's start using input that's
a bit more realistic: pretend as if http://example.com is being
examined.

This shouldn't change the point of any of the tests, but do note we have
to adjust the expected output to accommodate this (filling a credential
will repeat back the protocol/host fields to stdout, and the helper
debug messages and askpass prompt will change on stderr).

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:10:57 -07:00
a88dbd2f8c t0300: make "quit" helper more realistic
We test a toy credential helper that writes "quit=1" and confirms that
we stop running other helpers. However, that helper is unrealistic in
that it does not bother to read its stdin at all.

For now we don't send any input to it, because we feed git-credential a
blank credential. But that will change in the next patch, which will
cause this test to racily fail, as git-credential will get SIGPIPE
writing to the helper rather than exiting because it was asked to.

Let's make this one-off helper more like our other sample helpers, and
have it source the "dump" script. That will read stdin, fixing the
SIGPIPE problem. But it will also write what it sees to stderr. We can
make the test more robust by checking that output, which confirms that
we do run the quit helper, don't run any other helpers, and exit for the
reason we expected.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:10:52 -07:00
67b0a24910 Git 2.25.3
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-17 18:12:01 -07:00
0822e66b5d Git 2.25.2
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-17 15:06:37 -07:00
65588b0b2e unicode: update the width tables to Unicode 13.0
Now that Unicode 13.0 has been announced[0], update the character
width tables to the new version.

[0] https://home.unicode.org/announcing-the-unicode-standard-version-13-0/

Signed-off-by: Beat Bolli <dev+git@drbeat.li>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-17 15:06:37 -07:00
7be274b0ff Merge branch 'js/ci-windows-update' into maint
Updates to the CI settings.

* js/ci-windows-update:
  Azure Pipeline: switch to the latest agent pools
  ci: prevent `perforce` from being quarantined
  t/lib-httpd: avoid using macOS' sed
2020-03-17 15:02:26 -07:00
9a75ecda1b Merge branch 'jk/run-command-formatfix' into maint
Code style cleanup.

* jk/run-command-formatfix:
  run-command.h: fix mis-indented struct member
2020-03-17 15:02:26 -07:00
221887a492 Merge branch 'jk/doc-credential-helper' into maint
Docfix.

* jk/doc-credential-helper:
  doc: move credential helper info into gitcredentials(7)
2020-03-17 15:02:26 -07:00
32fc2c6dd6 Merge branch 'js/mingw-open-in-gdb' into maint
Dev support.

* js/mingw-open-in-gdb:
  mingw: add a helper function to attach GDB to the current process
2020-03-17 15:02:25 -07:00
fe0d2c8ddb Merge branch 'js/test-unc-fetch' into maint
Test updates.

* js/test-unc-fetch:
  t5580: test cloning without file://, test fetching via UNC paths
2020-03-17 15:02:25 -07:00
618db3621a Merge branch 'js/test-write-junit-xml-fix' into maint
Testfix.

* js/test-write-junit-xml-fix:
  tests: fix --write-junit-xml with subshells
2020-03-17 15:02:25 -07:00
50e1b4166f Merge branch 'en/simplify-check-updates-in-unpack-trees' into maint
Code simplification.

* en/simplify-check-updates-in-unpack-trees:
  unpack-trees: exit check_updates() early if updates are not wanted
2020-03-17 15:02:25 -07:00
fda2baffd2 Merge branch 'jc/doc-single-h-is-for-help' into maint
Both "git ls-remote -h" and "git grep -h" give short usage help,
like any other Git subcommand, but it is not unreasonable to expect
that the former would behave the same as "git ls-remote --heads"
(there is no other sensible behaviour for the latter).  The
documentation has been updated in an attempt to clarify this.

* jc/doc-single-h-is-for-help:
  Documentation: clarify that `-h` alone stands for `help`
2020-03-17 15:02:24 -07:00
41d910ea6c Merge branch 'hd/show-one-mergetag-fix' into maint
"git show" and others gave an object name in raw format in its
error output, which has been corrected to give it in hex.

* hd/show-one-mergetag-fix:
  show_one_mergetag: print non-parent in hex form.
2020-03-17 15:02:24 -07:00
2d7247af6f Merge branch 'am/mingw-poll-fix' into maint
MinGW's poll() emulation has been improved.

* am/mingw-poll-fix:
  mingw: workaround for hangs when sending STDIN
2020-03-17 15:02:24 -07:00
4e730fcd18 Merge branch 'hi/gpg-use-check-signature' into maint
"git merge signed-tag" while lacking the public key started to say
"No signature", which was utterly wrong.  This regression has been
reverted.

* hi/gpg-use-check-signature:
  Revert "gpg-interface: prefer check_signature() for GPG verification"
2020-03-17 15:02:23 -07:00
76ccbdaf97 Merge branch 'ds/partial-clone-fixes' into maint
Fix for a bug revealed by a recent change to make the protocol v2
the default.

* ds/partial-clone-fixes:
  partial-clone: avoid fetching when looking for objects
  partial-clone: demonstrate bugs in partial fetch
2020-03-17 15:02:23 -07:00
569b89842d Merge branch 'en/t3433-rebase-stat-dirty-failure' into maint
The merge-recursive machinery failed to refresh the cache entry for
a merge result in a couple of places, resulting in an unnecessary
merge failure, which has been fixed.

* en/t3433-rebase-stat-dirty-failure:
  merge-recursive: fix the refresh logic in update_file_flags
  t3433: new rebase testcase documenting a stat-dirty-like failure
2020-03-17 15:02:23 -07:00
16a4bf1035 Merge branch 'en/check-ignore' into maint
"git check-ignore" did not work when the given path is explicitly
marked as not ignored with a negative entry in the .gitignore file.

* en/check-ignore:
  check-ignore: fix documentation and implementation to match
2020-03-17 15:02:23 -07:00
3246495a5c Merge branch 'jk/push-option-doc-markup-fix' into maint
Doc markup fix.

* jk/push-option-doc-markup-fix:
  doc/config/push: use longer "--" line for preformatted example
2020-03-17 15:02:22 -07:00
56f97d5896 Merge branch 'jk/doc-diff-parallel' into maint
Update to doc-diff.

* jk/doc-diff-parallel:
  doc-diff: use single-colon rule in rendering Makefile
2020-03-17 15:02:22 -07:00
1a4abcbb3b Merge branch 'jh/notes-fanout-fix' into maint
The code to automatically shrink the fan-out in the notes tree had
an off-by-one bug, which has been killed.

* jh/notes-fanout-fix:
  notes.c: fix off-by-one error when decreasing notes fanout
  t3305: check notes fanout more carefully and robustly
2020-03-17 15:02:22 -07:00
7e84f4608f Merge branch 'jk/index-pack-dupfix' into maint
The index-pack code now diagnoses a bad input packstream that
records the same object twice when it is used as delta base; the
code used to declare a software bug when encountering such an
input, but it is an input error.

* jk/index-pack-dupfix:
  index-pack: downgrade twice-resolved REF_DELTA to die()
2020-03-17 15:02:22 -07:00
fa24bbe864 Merge branch 'js/rebase-i-with-colliding-hash' into maint
"git rebase -i" identifies existing commits in its todo file with
their abbreviated object name, which could become ambiguous as it
goes to create new commits, and has a mechanism to avoid ambiguity
in the main part of its execution.  A few other cases however were
not covered by the protection against ambiguity, which has been
corrected.

* js/rebase-i-with-colliding-hash:
  rebase -i: also avoid SHA-1 collisions with missingCommitsCheck
  rebase -i: re-fix short SHA-1 collision
  parse_insn_line(): improve error message when parsing failed
2020-03-17 15:02:21 -07:00
a7a2e12b6e Merge branch 'jk/clang-sanitizer-fixes' into maint
C pedantry ;-) fix.

* jk/clang-sanitizer-fixes:
  obstack: avoid computing offsets from NULL pointer
  xdiff: avoid computing non-zero offset from NULL pointer
  avoid computing zero offsets from NULL pointer
  merge-recursive: use subtraction to flip stage
  merge-recursive: silence -Wxor-used-as-pow warning
2020-03-17 15:02:21 -07:00
93d0892891 Merge branch 'dt/submodule-rm-with-stale-cache' into maint
Running "git rm" on a submodule failed unnecessarily when
.gitmodules is only cache-dirty, which has been corrected.

* dt/submodule-rm-with-stale-cache:
  git rm submodule: succeed if .gitmodules index stat info is zero
2020-03-17 15:02:21 -07:00
dae477777e Merge branch 'pb/recurse-submodule-in-worktree-fix' into maint
The "--recurse-submodules" option of various subcommands did not
work well when run in an alternate worktree, which has been
corrected.

* pb/recurse-submodule-in-worktree-fix:
  submodule.c: use get_git_dir() instead of get_git_common_dir()
  t2405: clarify test descriptions and simplify test
  t2405: use git -C and test_commit -C instead of subshells
  t7410: rename to t2405-worktree-submodule.sh
2020-03-17 15:02:21 -07:00
758d0773ba Merge branch 'es/outside-repo-errmsg-hints' into maint
An earlier update to show the location of working tree in the error
message did not consider the possibility that a git command may be
run in a bare repository, which has been corrected.

* es/outside-repo-errmsg-hints:
  prefix_path: show gitdir if worktree unavailable
  prefix_path: show gitdir when arg is outside repo
2020-03-17 15:02:20 -07:00
f0c344ce57 Merge branch 'js/builtin-add-i-cmds' into maint
Minor bugfixes to "git add -i" that has recently been rewritten in C.

* js/builtin-add-i-cmds:
  built-in add -i: accept open-ended ranges again
  built-in add -i: do not try to `patch`/`diff` an empty list of files
2020-03-17 15:02:20 -07:00
506223f9c5 Git 2.24.2
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-17 14:36:45 -07:00
17a02783d8 Git 2.23.2
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-17 14:33:34 -07:00
69fab82147 Git 2.22.3
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-17 14:24:55 -07:00
fe22686494 Git 2.21.2
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-17 14:16:08 -07:00
d1259ce117 Git 2.20.3
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-17 13:46:10 -07:00
a5979d7009 Git 2.19.4
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-17 13:43:08 -07:00
21a3e5016b Git 2.18.3
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-17 13:34:12 -07:00
c42c0f1297 Git 2.17.4
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-17 13:25:33 -07:00
5c20398699 prefix_path: show gitdir if worktree unavailable
If there is no worktree at present, we can still give the user a hint
about Git's current directory by showing them the absolute path to the
Git directory. Even though the Git directory doesn't make it as easy to
locate the worktree in question, it can still help a user figure out
what's going on while developing a script.
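
A sketch of the fallback (`arg` stands in for the offending path;
get_git_work_tree() returns NULL in a bare repository, which is what
triggered the segfault being fixed here):

	const char *hint = get_git_work_tree();

	if (!hint)
		hint = get_git_dir();	/* bare repo: no worktree to show */
	die(_("'%s' is outside repository at '%s'"),
	    arg, absolute_path(hint));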

This fixes a segmentation fault introduced in e0020b2f
("prefix_path: show gitdir when arg is outside repo", 2020-02-14).

Signed-off-by: Emily Shaffer <emilyshaffer@google.com>
[jc: added minimum tests, with help from Szeder Gábor]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-15 09:35:46 -07:00
07259e74ec fsck: detect gitmodules URLs with embedded newlines
The credential protocol can't handle values with newlines. We already
detect and block any such URLs from being used with credential helpers,
but let's also add an fsck check to detect and block gitmodules files
with such URLs. That will let us notice the problem earlier when
transfer.fsckObjects is turned on. And in particular it will prevent bad
objects from spreading, which may protect downstream users running older
versions of Git.

We'll file this under the existing gitmodulesUrl flag, which covers URLs
with option injection. There's really no need to distinguish the exact
flaw in the URL in this context. Likewise, I've expanded the description
of t7416 to cover all types of bogus URLs.
2020-03-12 02:56:50 -04:00
c716fe4bd9 credential: detect unrepresentable values when parsing urls
The credential protocol can't represent newlines in values, but URLs can
embed percent-encoded newlines in various components. A previous commit
taught the low-level writing routines to die() when encountering this,
but we can be a little friendlier to the user by detecting them earlier
and handling them gracefully.

This patch teaches credential_from_url() to notice such components,
issue a warning, and blank the credential (which will generally result
in prompting the user for a username and password). We blank the whole
credential in this case. Another option would be to blank only the
invalid component. However, we're probably better off not feeding a
partially-parsed URL result to a credential helper. We don't know how a
given helper would handle it, so we're better off to err on the side of
matching nothing rather than something unexpected.
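
The per-component check might look roughly like this (the exact
signature is an assumption; the warning text mirrors what this series
introduces):

	static int check_url_component(const char *url, int quiet,
				       const char *name, const char *value)
	{
		if (!value || !strchr(value, '\n'))
			return 0;
		if (!quiet)
			warning(_("url contains a newline in its %s component: %s"),
				name, url);
		return -1;	/* caller blanks the whole credential */
	}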

The die() call in credential_write() is _probably_ impossible to reach
after this patch. Values should end up in credential structs only by URL
parsing (which is covered here), or by reading credential protocol input
(which by definition cannot read a newline into a value). But we should
definitely keep the low-level check, as it's our final and most accurate
line of defense against protocol injection attacks. Arguably it could
become a BUG(), but it probably doesn't matter much either way.

Note that the public interface of credential_from_url() grows a little
more than we need here. We'll use the extra flexibility in a future
patch to help fsck catch these cases.
2020-03-12 02:55:24 -04:00
17f1c0b8c7 t/lib-credential: use test_i18ncmp to check stderr
The credential tests have a "check" function which feeds some input to
git-credential and checks the stdout and stderr. We look for exact
matches in the output. For stdout, this makes sense; the output is
the credential protocol. But for stderr, we may be showing various
diagnostic messages, or the prompts fed to the askpass program, which
could be translated. Let's mark them as such.
2020-03-12 02:55:17 -04:00
9a6bbee800 credential: avoid writing values with newlines
The credential protocol that we use to speak to helpers can't represent
values with newlines in them. This was an intentional design choice to
keep the protocol simple, since none of the values we pass should
generally have newlines.

However, if we _do_ encounter a newline in a value, we blindly transmit
it in credential_write(). Such values may break the protocol syntax, or
worse, inject new valid lines into the protocol stream.

The most likely way for a newline to end up in a credential struct is by
decoding a URL with a percent-encoded newline. However, since the bug
occurs at the moment we write the value to the protocol, we'll catch it
there. That should leave no possibility of accidentally missing a code
path that can trigger the problem.

At this level of the code we have little choice but to die(). However,
since we'd not ever expect to see this case outside of a malicious URL,
that's an acceptable outcome.
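
The low-level guard is small; a sketch of the writing routine (names
approximate credential.c):

	static void credential_write_item(FILE *fp, const char *key,
					  const char *value)
	{
		if (!value)
			return;
		if (strchr(value, '\n'))
			die("credential value for %s contains newline", key);
		fprintf(fp, "%s=%s\n", key, value);
	}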

Reported-by: Felix Wilhelm <fwilhelm@google.com>
2020-03-12 02:55:16 -04:00
237a28173f show_one_mergetag: print non-parent in hex form.
When a mergetag names a non-parent, which can occur after a shallow
clone, its hash was previously printed as raw data. Print it in hex form
instead.
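
The fix amounts to hex-formatting the object id before handing it to a
"%s" format; a sketch (message text approximate, using Git's
oid_to_hex()):

	/* before, the raw oid bytes were passed where "%s" expected hex */
	strbuf_addf(&verify_message, "tag %s names a non-parent %s\n",
		    tag->tag, oid_to_hex(&tag->tagged->oid));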

Signed-off-by: Harald van Dijk <harald@gigawatt.nl>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-02 12:34:00 -08:00
0106b1d4be Revert "gpg-interface: prefer check_signature() for GPG verification"
This reverts commit 72b006f4bf, which
breaks the end-user experience when merging a signed tag without
having the public key.  We should report "can't check because we
have no public key", but the code with this change claimed that
there was no signature.
2020-02-28 09:43:17 -08:00
94f4d01932 mingw: workaround for hangs when sending STDIN
Explanation
-----------
The problem here is a flawed `poll()` implementation. When it tries to
see whether a pipe can be written to without blocking, it eventually calls
`NtQueryInformationFile()` and tests `WriteQuotaAvailable`. However,
the meaning of quota was misunderstood. The value of quota is reduced
when either some data was written to a pipe, *or* there is a pending
read on the pipe. Therefore, if there is a pending read of size >= than
the pipe's buffer size, poll() will think that pipe is not writable and
will hang forever, usually that means deadlocking both pipe users.

I have studied the problem and found that Windows pipes track two values:
`QuotaUsed` and `BytesInQueue`. The code in `poll()` apparently wants to
know `BytesInQueue` instead of quota. Unfortunately, `BytesInQueue` can
only be requested from read end of the pipe, while `poll()` receives
write end.

Git's implementation of `poll()` was copied from gnulib, which still
contains the same flawed implementation today.

I also had a look at implementation in cygwin, which is also broken in a
subtle way. It uses this code in `pipe_data_available()`:
	fpli.WriteQuotaAvailable = (fpli.OutboundQuota - fpli.ReadDataAvailable)
However, `ReadDataAvailable` always returns 0 for the write end of the pipe,
turning the code into an obfuscated version of returning the pipe's
total buffer size, which I guess will in turn have `poll()` always say
that the pipe is writable. The commit that introduced the code doesn't
say anything about
this change, so it could be some debugging code that slipped in.

These are the typical sizes used in git:
0x2000 - default read size in `strbuf_read()`
0x1000 - default read size in CRT, used by `strbuf_getwholeline()`
0x2000 - pipe buffer size in compat\mingw.c

As a consequence, as soon as the child process uses `strbuf_read()`,
`poll()` in the parent process will hang forever, deadlocking both
processes.

This results in two observable behaviors:
1) If the parent process begins sending STDIN quickly (and usually
   that's the case), then the first `poll()` will succeed and the first
   block will go through. MAX_IO_SIZE_DEFAULT is 8MB, so if STDIN
   exceeds 8MB, it will deadlock.
2) If the parent process waits a little bit for any reason (including
   the OS scheduler) and the child is the first to issue
   `strbuf_read()`, it will deadlock immediately even on small STDINs.

The problem is illustrated by `git stash push`, which will currently
read the entire patch into memory and then send it to `git apply` via
STDIN. If patch exceeds 8MB, git hangs on Windows.

Possible solutions
------------------
1) Somehow obtain `BytesInQueue` instead of `QuotaUsed`
   I did a pretty thorough search and didn't find any ways to obtain
   the value from write end of the pipe.
2) Also give read end of the pipe to `poll()`
   That can be done, but it will probably invite some dirty code,
   because `poll()`
   * can accept multiple pipes at once
   * can accept things that are not pipes
   * is expected to have a well known signature.
3) Make `poll()` always reply "writable" for write end of the pipe
   After all, it seems that cygwin has (accidentally?) done that for years.
   Also, it should be noted that `pump_io_round()` writes 8MB blocks,
   completely ignoring the fact that the pipe's buffer size is only 8KB,
   which means that the pipe gets clogged many times during that single
   write. This may invite a deadlock if the child's STDERR/STDOUT gets
   clogged while it's trying to deal with 8MB of STDIN. Such deadlocks
   could be defeated with writing less than pipe's buffer size per
   round, and always reading everything from STDOUT/STDERR before
   starting next round. Therefore, making `poll()` always reply
   "writable" shouldn't cause any new issues or block any future
   solutions.
4) Increase the size of the pipe's buffer
   The difference between `BytesInQueue` and `QuotaUsed` is the size
   of pending reads. Therefore, if the buffer is bigger than the size of
   the reads,
   `poll()` won't hang so easily. However, I found that for example
   `strbuf_read()` will get more and more hungry as it reads large inputs,
   eventually surpassing any reasonable pipe buffer size.

Chosen solution
---------------
Make `poll()` always reply "writable" for write end of the pipe.
Hopefully one day someone will find a way to implement it properly.
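
In compat/poll/poll.c terms, the workaround amounts to something like
this (simplified; `object_is_pipe` and `is_write_end` are hypothetical
stand-ins for the actual handle inspection):

	/*
	 * Write end of a pipe: we cannot query BytesInQueue from here,
	 * and WriteQuotaAvailable also shrinks for pending reads, so
	 * claim writability instead of risking a deadlock.
	 */
	if (object_is_pipe(h) && is_write_end(h))
		happened |= (POLLOUT | POLLWRNORM) & pfd->events;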

Reproduction
------------
printf "%8388608s" X >large_file.txt
git stash push --include-untracked -- large_file.txt

I have decided not to include this as a test to avoid slowing down the
test suite. I don't expect the specific problem to come back, and
chances are that `git stash push` will be reworked to avoid sending the
entire patch via STDIN.

Signed-off-by: Alexandr Miloslavskiy <alexandr.miloslavskiy@syntevo.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-27 14:23:29 -08:00
1ff466c018 Documentation: clarify that -h alone stands for help
We seem to be getting new users who get confused every 20 months or
so with this "-h consistently wants to give help, but the commands
to which `-h` may feel like a good short-form option want it to mean
something else." compromise.

Let's make sure that the readers know that `git cmd -h` (with no
other arguments) is a way to get usage text, even for commands like
ls-remote and grep.

Also extend the description that is already in gitcli.txt, as it is
clear that users still get confused with the current text.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-27 14:14:01 -08:00
7f487ce062 Azure Pipeline: switch to the latest agent pools
It would seem that at least the `vs2015-win2012r2` pool (which we use
via its old name, `Hosted`) is about to be phased out. Let's switch
before that.

While at it, use the newer pool names as suggested at
https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops#use-a-microsoft-hosted-agent

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-27 09:58:43 -08:00
5ed9fc3fc8 ci: prevent perforce from being quarantined
The most recent Azure Pipelines macOS agents enable what Apple calls
"System Integrity Protection". This makes `p4d -V` hang: there is some
sort of GUI dialog waiting for the user to acknowledge that the copied
binaries are legit and may be executed, but on build agents, there is no
user who could acknowledge that.

Let's ask Homebrew specifically to _not_ quarantine the Perforce
binaries.

Helped-by: Aleksandr Chebotov
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-27 09:58:42 -08:00
eafff6e41e t/lib-httpd: avoid using macOS' sed
Among other differences relative to GNU sed, macOS' sed always ends its
output with a trailing newline, even if the input did not have such a
trailing newline.

Surprisingly, this makes three httpd-based tests fail on macOS: t5616,
t5702 and t5703. ("Surprisingly" because those tests have been around
for some time, but apparently nobody runs them on macOS with a working
Apache2 setup.)

The reason is that we use `sed` in those tests to filter the response of
the web server. Apart from the fact that we use GNU constructs (such as
using a space after the `c` command instead of a backslash and a
newline), we have another problem: macOS' sed emits LF-only newlines
while web servers are supposed to use CR/LF ones.

Even worse, t5616 uses `sed` to replace a binary part of the response
with a new binary part (kind of hoping that the replaced binary part
does not contain a 0x0a byte which would be interpreted as a newline).

To that end, it calls on Perl to read the binary pack file and
hex-encode it, then calls on `sed` to prefix every hex digit pair with a
`\x` in order to construct the text that the `c` statement of the `sed`
invocation is supposed to insert. So we call Perl and sed to construct a
sed statement. The final nail in the coffin is that macOS' sed does not
even interpret those `\x<hex>` constructs.

Let's just replace all of that by Perl snippets. With Perl, at least, we
do not have to deal with GNU vs macOS semantics, we do not have to worry
about unwanted trailing newlines, and we do not have to spawn commands
to construct arguments for other commands to be spawned (i.e. we can
avoid a whole lot of shell scripting complexity).

The upshot is that this fixes t5616, t5702 and t5703 on macOS with
Apache2.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-27 09:58:41 -08:00
3e96c66805 partial-clone: avoid fetching when looking for objects
When using partial clone, find_non_local_tags() in builtin/fetch.c
checks each remote tag to see if its object also exists locally. There
is no expectation that the object exist locally, but this function
nevertheless triggers a lazy fetch if the object does not exist. This
can be extremely expensive when asking for a commit, as we are
completely removed from the context of the non-existent object and
thus supply no "haves" in the request.

6462d5eb9a (fetch: remove fetch_if_missing=0, 2019-11-05) removed a
global variable that prevented these fetches in favor of a bitflag.
However, some object existence checks were not updated to use this flag.

Update find_non_local_tags() to use OBJECT_INFO_SKIP_FETCH_OBJECT in
addition to OBJECT_INFO_QUICK. The _QUICK option only prevents
repreparing the pack-file structures. We need to be extremely careful
about supplying _SKIP_FETCH_OBJECT when we expect an object to not exist
due to updated refs.
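
The fixed existence check then looks roughly like this (a sketch of the
find_non_local_tags() logic, assuming has_object_file_with_flags()):

	unsigned flags = OBJECT_INFO_QUICK | OBJECT_INFO_SKIP_FETCH_OBJECT;

	/*
	 * _QUICK avoids re-preparing the pack files; _SKIP_FETCH_OBJECT
	 * keeps a merely-expected-missing object from triggering a
	 * lazy fetch.
	 */
	if (!has_object_file_with_flags(&ref->old_oid, flags))
		continue;	/* remote tag's object is absent locally */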

This resolves a broken test in t5616-partial-clone.sh.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-22 09:23:08 -08:00
d0badf8797 partial-clone: demonstrate bugs in partial fetch
While testing partial clone, I noticed some odd behavior. I was testing
a way of running 'git init', followed by manually configuring the remote
for partial clone, and then running 'git fetch'. Astonishingly, I saw
the 'git fetch' process start asking the server for multiple rounds of
pack-file downloads! When tweaking the situation a little more, I
discovered that I could cause the remote to hang up with an error.

Add two tests that demonstrate these two issues.

In the first test, we find that when fetching with blob filters from
a repository that previously did not have any tags, the 'git fetch
--tags origin' command fails because the server sends "multiple
filter-specs cannot be combined". This only happens when using
protocol v2.

In the second test, we see that a 'git fetch origin' request with
several ref updates results in multiple pack-file downloads. This must
be due to Git trying to fault in the objects pointed to by the refs. What
makes this matter particularly nasty is that this goes through the
do_oid_object_info_extended() method, so there are no "haves" in the
negotiation. This leads the remote to send every reachable commit and
tree from each new ref, providing a quadratic amount of data transfer!
This test is fixed if we revert 6462d5eb9a (fetch: remove
fetch_if_missing=0, 2019-11-05), but that revert causes other test
failures. The real fix will need more care.

The tests are ordered in this way because if I swap the test order the
tag test will succeed instead of fail. I believe this is because somehow
we need the srv.bare repo to not have any tags when we clone, but then
have tags in our next fetch.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-22 09:23:08 -08:00
539052f42f run-command.h: fix mis-indented struct member
An accidental conversion of a tab to 4 spaces snuck into 4c4066d95d
(run-command: move doc to run-command.h, 2019-11-17), messing up the
alignment when you have the project-recommended 8-width tabstops. Let's
revert that line.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-22 09:05:34 -08:00
fb1c18fc46 merge-recursive: fix the refresh logic in update_file_flags
If we need to delete a higher stage entry in the index to place the file
at stage 0, then we'll lose that file's stat information.  In such
situations we may still be able to detect that the file on disk is the
version we want (as noted by our comment in the code:
  /* do not overwrite file if already present */
), but we do still need to update the mtime since we are creating a new
cache_entry for that file.  Update the logic used to determine whether
we refresh a file's mtime.

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-19 10:13:31 -08:00
73113c5922 t3433: new rebase testcase documenting a stat-dirty-like failure
A user discovered a case where they had a stack of 20 simple commits to
rebase, and the rebase would succeed in picking the first commit and
then error out with a pair of "Could not execute the todo command" and
"Your local changes to the following files would be overwritten by
merge" messages.

Their steps actually made use of the -i flag, but I switched it over to
-m to make it simpler to trigger the bug.  With that flag, it bisects
back to commit 68aa495b59 (rebase: implement --merge via the
interactive machinery, 2018-12-11), but that's misleading.  If you
change the -m flag to --keep-empty, then the problem persists and will
bisect back to 356ee4659b (sequencer: try to commit without forking
'git commit', 2017-11-24)

After playing with the testcase for a bit, I discovered that adding
--exec "sleep 1" to the command line makes the rebase succeed, making me
suspect there is some kind of discard and reload of caches that leads
us to believe that something is stat-dirty, but I didn't succeed in
digging any further than that.

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-19 10:13:27 -08:00
7ec8125fba check-ignore: fix documentation and implementation to match
check-ignore has two different modes, and neither of these modes has an
implementation that matches the documentation.  These modes differ in
whether they just print paths or whether they also print the final
pattern matched by the path.  The fix is different for both modes, so
I'll discuss both separately.

=== First (default) mode ===

The first mode is documented as:

    For each pathname given via the command-line or from a file via
    --stdin, check whether the file is excluded by .gitignore (or other
    input files to the exclude mechanism) and output the path if it is
    excluded.

However, it fails to do this because it did not account for negated
patterns.  Commands other than check-ignore verify exclusion rules via
calling

   ... -> treat_one_path() -> is_excluded() -> last_matching_pattern()

while check-ignore has a call path of the form:

   ... -> check_ignore()                    -> last_matching_pattern()

The fact that the latter does not include the call to is_excluded()
means that it is susceptible to messing up negated patterns (since
that is the only significant thing is_excluded() adds over
last_matching_pattern()).  Unfortunately, we can't make it just call
is_excluded(), because the same codepath is used by the verbose mode
which needs to know the matched pattern in question.  This brings us
to...

=== Second (verbose) mode ===

The second mode, known as verbose mode, references the first in the
documentation and says:

    Also output details about the matching pattern (if any) for each
    given pathname. For precedence rules within and between exclude
    sources, see gitignore(5).

The "Also" means it will print patterns that match the exclude rules as
noted for the first mode, and also print which pattern matches.  Unless
more information is printed than just pathname and pattern (which is not
done), this mode is somewhat ill-defined and perhaps even
self-contradictory for negated patterns: A path which matches a negated
exclude pattern is NOT excluded and thus shouldn't be printed by the
former logic, while it certainly does match one of the explicit patterns
and thus should be printed by the latter logic.

=== Resolution ==

Since the second mode exists to find out which pattern matches given
paths, and showing the user a pattern that begins with a '!' is
sufficient for them to figure out whether the pattern is excluded, the
existing behavior is desirable -- we just need to update the
documentation to match the implementation (i.e. it is about printing
which pattern is matched by paths, not about showing which paths are
excluded).

For the first or default mode, users just want to know whether a pattern
is excluded.  As such, the existing documentation is desirable; change
the implementation to match the documented behavior.
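
Concretely, the default mode can keep the shared code path and simply
drop negated matches afterwards; a sketch (assuming dir.h's
PATTERN_FLAG_NEGATIVE):

	pattern = last_matching_pattern(&dir, istate, full_path, &dtype);
	if (!verbose && pattern &&
	    (pattern->flags & PATTERN_FLAG_NEGATIVE))
		pattern = NULL;	/* negated match => path is NOT excluded */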

Finally, also adjust a few tests in t0008 that were caught up by this
discrepancy in how negated paths were handled.

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-18 15:28:58 -08:00
2607d39da3 doc-diff: use single-colon rule in rendering Makefile
When rendering the troff manpages to text via "man", we create an ad-hoc
Makefile and feed it to "make". The purpose here is two-fold:

  - reuse results from a prior interrupted render of the same tree

  - use make's -j option to build in parallel

But the second part doesn't seem to work (at least with my version of
GNU make, 4.2.1). It just runs one render at a time.

We use a double-colon "all" rule for each file, like:

  all:: foo
  foo:
    ...actual render recipe...
  all:: bar
  bar:
    ...actual render recipe...
  ...and so on...

And it's this double-colon that seems to inhibit the parallelism. We can
just switch to a regular single-colon rule. Even though we do have
multiple rules for "all" here, we don't have any recipe to execute for
"all" (we only care about triggering its dependencies), so the
distinction is irrelevant.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-18 13:53:30 -08:00
0aa6ce3094 doc/config/push: use longer "--" line for preformatted example
The example for the push.pushOption config tries to create a
preformatted section, but uses only two dashes in its "--" line. In
AsciiDoc this is an "open block", with no type; the lines end up jumbled
because they're formatted as paragraphs. We need four or more dashes to
make it a "listing block" that will respect the linebreaks.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-18 13:48:31 -08:00
e0020b2f82 prefix_path: show gitdir when arg is outside repo
When developing a script, it can be painful to understand why Git thinks
something is outside the current repo, if the current repo isn't what
the user thinks it is. Since this can be tricky to diagnose, especially
in cases like submodules or nested worktrees, let's give the user a hint
about which repository is offended about that path.

Signed-off-by: Emily Shaffer <emilyshaffer@google.com>
Acked-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-16 15:32:59 -08:00
cc4f2eb828 doc: move credential helper info into gitcredentials(7)
The details of how credential helpers can be called or implemented were
originally covered in Documentation/technical/. Those are topics that
end users might care about (and we even referenced them in the
credentials manpage), but those docs typically don't ship as part of the
end user documentation, making them less useful.

This situation got slightly worse recently in f3b9055624 (credential:
move doc to credential.h, 2019-11-17), where we moved them into the C
header file, making them even harder to find.

So let's put this information into the gitcredentials(7)
documentation, which is meant to describe the overall concepts of our
credential handling. This was already pointing to the API docs for these
concepts, so we can just include it inline instead.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-14 14:43:16 -08:00
08809c09aa mingw: add a helper function to attach GDB to the current process
When debugging Git, the criss-cross spawning of processes can make
things quite a bit difficult, especially when a Unix shell script is
thrown in the mix that calls a `git.exe` that then segfaults.

To help debug such things, we introduce the `open_in_gdb()` function
which can be called at a code location where the segfault happens (or as
close as one can get); this will open a new MinTTY window with a GDB
that is already attached to the current process.
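(A usage sketch, not part of the patch: drop a call to the helper at the
suspected crash site; `suspected_crash_site` is a made-up stand-in, and
the function itself is shown in the compat/mingw.c hunk further below.)

  static void suspected_crash_site(void)
  {
      open_in_gdb(); /* spawns "mintty gdb --pid=<current pid>", waits a second */
      /* ... code that segfaults ... */
  }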

Inspired by Derrick Stolee.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-14 10:02:07 -08:00
bfe2bbb47f t5580: test cloning without file://, test fetching via UNC paths
On Windows, it is quite common to work with network drives. The format
of the paths to network drives (or "network shares", or UNC paths) is:

	\\<server>\<share>\...

We already have a couple of regression tests revolving around those types
of paths, but we missed cloning and fetching from UNC paths without
leading `file://` (and with backslashes instead of forward slashes).
This lil' patch closes that gap.

It gets a bit silly to add the commands to the name of the test script,
so let's just rename it while we're testing more UNC stuff.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-14 09:59:41 -08:00
076ee3e8a2 tests: fix --write-junit-xml with subshells
In t0000, more precisely in its `test_bool_env` test case, there are two
subshells that are supposed to fail. To be even _more_ precise, they
fail by calling the `error` function, and that is okay, because it is in
a subshell, and it is expected that those two subshell invocations fail.

However, the `error` function also tries to finalize the JUnit XML (if
that XML was asked for via `--write-junit-xml`). As a consequence, the
XML is edited to add a `time` attribute for the `testsuite` tag. And
since there are two expected `error` calls in addition to the final
`test_done`, the `finalize_junit_xml` function is called three times and
naturally the `time` attribute is added _three times_.

Azure Pipelines is not happy with that, complaining thusly:

 ##[warning]Failed to read D:\a\1\s\t\out\TEST-t0000-basic.xml. Error : 'time' is a duplicate attribute name. Line 2, position 82..

One possible way to address this would be to unset `write_junit_xml` in
the `test_bool_env` test case.

But that would be fragile, as other `error` calls in subshells could be
introduced.

So let's just modify `finalize_junit_xml` to remove any `time` attribute
before adding the authoritative one.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-12 09:19:42 -08:00
a21781011f index-pack: downgrade twice-resolved REF_DELTA to die()
When we're resolving a REF_DELTA, we compare-and-swap its type from
REF_DELTA to whatever real type the base object has, as discussed in
ab791dd138 (index-pack: fix race condition with duplicate bases,
2014-08-29). If the old type wasn't a REF_DELTA, we consider that a
BUG(). But as discussed in that commit, we might see this case whenever
we try to resolve an object twice, which may happen because we have
multiple copies of the base object.

So this isn't a bug at all, but rather a sign that the input pack is
broken. And indeed, this case is triggered already in t5309.5 and
t5309.6, which create packs with delta cycles and duplicate bases. But
we never noticed because those tests are marked expect_failure.

Those tests were added by b2ef3d9ebb (test index-pack on packs with
recoverable delta cycles, 2013-08-23), which was leaving the door open
for cases that we theoretically _could_ handle. And when we see an
already-resolved object like this, in theory we could keep going after
confirming that the previously resolved child->real_type matches
base->obj->real_type. But:

  - enforcing the "only resolve once" rule here saves us from an
    infinite loop in other parts of the code. If we keep going, then the
    delta cycle in t5309.5 causes us to loop infinitely, as
    find_ref_delta_children() doesn't realize which objects have already
    been resolved. So there would be more changes needed to make this
    case work, and in the meantime we'd be worse off.

  - any pack that triggers this is broken anyway. It either has a
    duplicate base object, or it has a cycle which causes us to bring in
    a duplicate via --fix-thin. In either case, we'd end up rejecting
    the pack in write_idx_file(), which also detects duplicates.

So the tests have little value in documenting what we _could_ be doing
(and have been neglected for 6+ years). Let's switch them to confirming
that we handle this case cleanly (and switch out the BUG() for a more
informative die() so that we do so).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-04 13:19:11 -08:00
dbc27477ff notes.c: fix off-by-one error when decreasing notes fanout
As noted in the previous commit, the nature of the fanout heuristic
in the notes code causes the exact point at which we increase or
decrease the notes fanout to vary with the objects being annotated.
Since the object ids generated by the test environment are
deterministic (by design), the notes generated and tested by t3305
are always the same, and we therefore happen to see the same fanout
behavior from one run to the next.

Coincidentally, if we were to change the test environment slightly
(say by making a test commit on an unrelated branch before we start
the t3305 test proper), we not only see the fanout switch happen at
different points, we also manage to trigger a _bug_ in the notes
code where the fanout 1 -> 0 switch is not applied uniformly across
the notes tree, but instead yields a notes tree like this:

  ...
  bdeafb301e44b0e4db0f738a2d2a7beefdb70b70
  bff2d39b4f7122bd4c5caee3de353a774d1e632a
  d3/8ec8f851adf470131178085bfbaab4b12ad2a7
  e0b173960431a3e692ae929736df3c9b73a11d5b
  eb3c3aede523d729990ac25c62a93eb47c21e2e3
  ...

The bug occurs when we are writing out a notes tree with a newly
decreased fanout, and the notes tree contains unexpanded subtrees
that should be consolidated into the parent tree as a consequence of
the decreased fanout:

Subtrees that happen to sit at an _even_ level in the internal notes
16-tree structure (in other words: subtrees whose path - "d3" in the
example above - is unique in the first nibble - i.e. there are no
other note paths that start with "d") are _not_ unpacked as part of
the tree writeout. This error will repeat itself in subsequent note
trees until the subtree is forced to be unpacked. In t3305 this only
happens when the d38ec8f8 note is itself removed from the tree.
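(For illustration only: a sketch of how one note's path differs between
fanout 0 and fanout 1; the helper name is made up, but the shape matches
the tree listing above.)

  #include <stdio.h>

  /* fanout 0: "d38ec8f8...", fanout 1: "d3/8ec8f8..." */
  static void note_path(char *buf, size_t len, const char *hex, int fanout)
  {
      if (fanout)
          snprintf(buf, len, "%.2s/%s", hex, hex + 2);
      else
          snprintf(buf, len, "%s", hex);
  }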

The error is not severe (no information is lost, and the notes code
is able to read/decode this tree and manipulate it correctly), but
this is nonetheless a bug in the current implementation that should
be fixed.

That said, fixing the off-by-one error is not without complications:
We must take into account that the load_subtree() call from
for_each_note_helper() (that is now done to correctly unpack the
subtree while we're writing out the notes tree) may end up inserting
unpacked non-notes into the linked list of non_note entries held by
the struct notes_tree. Since we are in the process of writing out the
notes tree, this linked list is currently in the process of being
traversed by write_each_non_note_until(). The unpacked non-notes are
necessarily inserted between the last non-note we wrote out, and the
next non-note to be written. Hence, we cannot simply hold the
next_non_note to write in struct write_each_note_data (as we would
then silently skip these newly inserted notes), but must instead
always follow the ->next pointer from the last non-note we wrote.
(This part was caught by an existing test in t3304.)

Cc: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Cc: Brian M. Carlson <sandals@crustytoothpaste.net>
Signed-off-by: Johan Herland <johan@herland.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-04 12:20:43 -08:00
e1c5253951 t3305: check notes fanout more carefully and robustly
In short, before this patch, this test script:
 - creates many notes
 - verifies that all notes in the notes tree have a fanout of 1
 - removes most notes
 - verifies that the notes in the notes tree now have a fanout of 0

The fanout verification only happened twice: after creating all the
notes, and after removing most of them.

This patch strengthens the test by checking the fanout after _each_
added/removed note: We assert that the switch from fanout 0 -> 1
happens exactly once while adding notes (and that the switch pervades
the entire notes tree). Likewise, we assert that the switch from
fanout 1 -> 0 happens exactly once while removing notes.

Additionally, we decrease the number of notes left after removal,
from 50 to 15 notes, in order to ensure that the fanout 1 -> 0
transition keeps happening regardless of external factors[1].

[1]: Currently (with the SHA1 hash function and the deterministic
object ids of the test environment) the fanout heuristic in the notes
code happens to switch from 0 -> 1 at 109 notes, and from 1 -> 0 at
59 notes. However, changing the hash function or other external
factors will vary these numbers, and the latter may - in theory - go
as low as 15. For more details, please see the discussion at
https://public-inbox.org/git/20200125230035.136348-4-sandals@crustytoothpaste.net/

Cc: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Cc: Brian M. Carlson <sandals@crustytoothpaste.net>
Signed-off-by: Johan Herland <johan@herland.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-04 12:20:27 -08:00
cf82bff73f obstack: avoid computing offsets from NULL pointer
As with the previous two commits, UBSan with clang-11 complains about
computing offsets from a NULL pointer. The failures in t4013 (and
elsewhere) look like this:

  kwset.c:102:23: runtime error: applying non-zero offset 107820859019600 to null pointer
  ...
  not ok 79 - git log -SF master # magic is (not used)

That line is not enlightening:

  ... = obstack_alloc(&kwset->obstack, sizeof (struct trie));

because obstack is implemented almost entirely in macros, and the actual
problem is five macros deep (I temporarily converted them to inline
functions to get better compiler errors, which was tedious but worked
reasonably well).

The actual problem is in these pointer-alignment macros:

  /* If B is the base of an object addressed by P, return the result of
     aligning P to the next multiple of A + 1.  B and P must be of type
     char *.  A + 1 must be a power of 2.  */

  #define __BPTR_ALIGN(B, P, A) ((B) + (((P) - (B) + (A)) & ~(A)))

  /* Similar to _BPTR_ALIGN (B, P, A), except optimize the common case
     where pointers can be converted to integers, aligned as integers,
     and converted back again.  If PTR_INT_TYPE is narrower than a
     pointer (e.g., the AS/400), play it safe and compute the alignment
     relative to B.  Otherwise, use the faster strategy of computing the
     alignment relative to 0.  */

  #define __PTR_ALIGN(B, P, A)                                                \
    __BPTR_ALIGN (sizeof (PTR_INT_TYPE) < sizeof (void *) ? (B) : (char *) 0, \
                  P, A)

If we have a sufficiently-large integer pointer type, then we do the
computation using a NULL pointer constant. That turns __BPTR_ALIGN()
into something like:

  NULL + (P - NULL + A) & ~A

and UBSan is complaining about adding the full value of P to that
initial NULL. We can fix this by doing our math as an integer type, and
then casting the result back to a pointer. The problem case only happens
when we know that the integer type is large enough, so there should be
no issue with truncation.
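(A simplified sketch of that idea, using uintptr_t rather than the
obstack's own PTR_INT_TYPE; the real change is in the compat/obstack.h
hunk further below.)

  #include <stdint.h>

  /* align P to the next multiple of A + 1, doing the math in integer space */
  #define PTR_ALIGN_FROM_0(P, A) \
      ((char *)(((uintptr_t)(P) + (A)) & ~(uintptr_t)(A)))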

Another option would be to just simplify out all the 0's from
__BPTR_ALIGN() for the NULL-pointer case. That probably wouldn't work
for a platform where the NULL pointer isn't all-zeroes, but Git already
wouldn't work on such a platform (due to our use of memset to set
pointers in structs to NULL). But I tried here to keep as close to the
original as possible.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-28 23:13:25 -08:00
3cd309c16f xdiff: avoid computing non-zero offset from NULL pointer
As with the previous commit, clang-11's UBSan complains about computing
offsets from a NULL pointer, causing some tests to fail. In this case,
though, we're actually computing a non-zero offset, which is even more
dubious. From t7810:

  xdiff-interface.c:268:14: runtime error: applying non-zero offset 1 to null pointer
  ...
  not ok 131 - grep -p with userdiff

The problem is our parsing of the funcname config. We count the number
of lines in the string, allocate an array, and then loop over our
allocated entries, parsing each line and moving our cursor to one past
the trailing newline for the next iteration.

But the final line will not generally have a trailing newline (since
it's a config value), and hence we go to one past NULL. In practice this
is OK, since our loop should terminate before we look at the value. But
even computing such an invalid pointer technically violates the
standard.

We can fix it by leaving the pointer at NULL if we're at the end, rather
than one-past. And while we're thinking about it, we can also document
the invariant by asserting that our initial line-count matches the
second pass of parsing.
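(Roughly, the cursor update at the end of each iteration becomes the
following; variable names are illustrative:)

  const char *newline = strchr(line, '\n');
  /* leave the cursor at NULL instead of computing the invalid NULL + 1 */
  line = newline ? newline + 1 : NULL;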

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-28 23:13:25 -08:00
d20bc01a51 avoid computing zero offsets from NULL pointer
The Undefined Behavior Sanitizer in clang-11 seems to have learned a new
trick: it complains about computing offsets from a NULL pointer, even if
that offset is 0. This causes numerous test failures. For example, from
t1090:

  unpack-trees.c:1355:41: runtime error: applying zero offset to null pointer
  ...
  not ok 6 - in partial clone, sparse checkout only fetches needed blobs

The code in question looks like this:

  struct cache_entry **cache_end = cache + nr;
  ...
  while (cache != cache_end)

and we sometimes pass in a NULL and 0 for "cache" and "nr". This is
conceptually fine, as "cache_end" would be equal to "cache" in this
case, and we wouldn't enter the loop at all. But computing even a zero
offset violates the C standard. And given the fact that UBSan is
noticing this behavior, this might be a potential problem spot if the
compiler starts making unexpected assumptions based on undefined
behavior.

So let's just avoid it, which is pretty easy. In some cases we can just
switch to iterating with a numeric index (as we do in sequencer.c here).
In other cases (like the cache_end one) the use of an end pointer is
more natural; we can keep that by just explicitly checking for the
NULL/0 case when assigning the end pointer.

Note that there are two ways you can write this latter case, checking
for the pointer:

  cache_end = cache ? cache + nr : cache;

or the size:

  cache_end = nr ? cache + nr : cache;

For the case of a NULL/0 ptr/len combo, they are equivalent. But writing
it the second way (as this patch does) has the property that if somebody
were to incorrectly pass a NULL pointer with a non-zero length, we'd
continue to notice and segfault, rather than silently pretending the
length was zero.
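(A self-contained sketch of that second form; `handle()` stands in for
the real per-entry work:)

  static void process(struct cache_entry **cache, int nr)
  {
      struct cache_entry **cache_end = nr ? cache + nr : cache;

      while (cache != cache_end)
          handle(*cache++);
  }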

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-28 23:12:48 -08:00
7edee32985 git rm submodule: succeed if .gitmodules index stat info is zero
The bug was that ie_match_stat() was used to check whether the stat info
for the file was compatible with the stat info in the index, rather than
using ie_modified() to check whether the file was in fact different from
the version in the index.
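(Schematically, with the usual Git index API and approximate signatures:)

  struct stat st;

  if (!lstat(ce->name, &st) &&
      ie_modified(&the_index, ce, &st, 0))
      /* the content really differs from the index version */ ;
  /* ie_match_stat() would only ask: is the cached stat data still valid? */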

A version of this (with deinit instead of rm) was reported here:
https://lore.kernel.org/git/CAPOqYV+C-P9M2zcUBBkD2LALPm4K3sxSut+BjAkZ9T1AKLEr+A@mail.gmail.com/

It seems that in that case, the user's clone command left the index
with empty stat info.  The mailing list was unable to reproduce this.
But we (Two Sigma) hit the bug while using some plumbing commands, so
I'm fixing it.  I manually confirmed that the fix also repairs deinit
in this scenario.

Signed-off-by: David Turner <dturner@twosigma.com>
Reported-by: Thomas Bétous <th.betous@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-28 14:46:14 -08:00
4c616c2ba1 merge-recursive: use subtraction to flip stage
The flip_stage() helper uses a bit-flipping xor to switch between "2"
and "3". While clever, this relies on a property of those two numbers
that is mostly coincidence. Let's write it as a subtraction; that's more
clear and would extend to other numbers if somebody copies the logic.
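(The resulting helper is tiny; a sketch showing both spellings:)

  static int flip_stage(int stage)
  {
      return (2 + 3) - stage; /* maps 2 <-> 3; previously written as "stage ^ 1" */
  }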

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-27 11:17:41 -08:00
ee798742bd merge-recursive: silence -Wxor-used-as-pow warning
The merge-recursive code uses stage number constants like this:

  add = &ci->ren1->dst_entry->stages[2 ^ 1];
  ...
  add = &ci->ren2->dst_entry->stages[3 ^ 1];

The xor has the effect of flipping the "1" bit, so that "2 ^ 1" becomes
"3" and "3 ^ 1" becomes "2", which correspond to the "ours" and "theirs"
stages respectively.

Unfortunately, clang-10 and up issue a warning for this code:

  merge-recursive.c:1759:40: error: result of '2 ^ 1' is 3; did you mean '1 << 1' (2)? [-Werror,-Wxor-used-as-pow]
                  add = &ci->ren1->dst_entry->stages[2 ^ 1];
                                                     ~~^~~
                                                     1 << 1
  merge-recursive.c:1759:40: note: replace expression with '0x2 ^ 1' to silence this warning

We could silence it by using 0x2, as the compiler mentions. Or by just
using the constants "2" and "3" directly. But after digging into it, I
do think this bit-flip is telling us something. If we just wrote:

  add = &ci->ren2->dst_entry->stages[2];

for the second one, you might think that "ren2" and "2" correspond. But
they don't. The logic is: ren2 is theirs, which is stage 3, but we
are interested in the opposite side's stage, so flip it to 2.

So let's keep the bit-flipping, but let's also put it behind a named
function, which will make its purpose a bit clearer. This also has the
side effect of suppressing the warning (and an optimizing compiler
should be able to easily turn it into a constant as before).
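(So the call sites read something like this sketch, with the flip spelled
out:)

  add = &ci->ren1->dst_entry->stages[flip_stage(2)]; /* ours (2) -> theirs (3) */
  ...
  add = &ci->ren2->dst_entry->stages[flip_stage(3)]; /* theirs (3) -> ours (2) */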

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-27 11:15:35 -08:00
26027625dd rebase -i: also avoid SHA-1 collisions with missingCommitsCheck
When `rebase.missingCommitsCheck` is in effect, we use the backup of the
todo list that was copied just before the user was allowed to edit it.

That backup is, of course, just as susceptible to the hash collision as
the todo list itself: a reworded commit could make a previously
unambiguous short commit ID ambiguous all of a sudden.

So let's not just copy the todo list, but let's instead write out the
backup with expanded commit IDs.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-23 12:48:12 -08:00
b6992261de rebase -i: re-fix short SHA-1 collision
In 66ae9a57b8 (t3404: rebase -i: demonstrate short SHA-1 collision,
2013-08-23), we added a test case that demonstrated how it is possible
that a previously unambiguous short commit ID could become ambiguous
*during* a rebase.

In 75c6976655 (rebase -i: fix short SHA-1 collision, 2013-08-23), we
fixed that problem simply by writing out the todo list with expanded
commit IDs (except *right* before letting the user edit the todo list,
in which case we shorten them, but we expand them right after the file
was edited).

However, the bug resurfaced as a side effect of 393adf7a6f (sequencer:
directly call pick_commits() from complete_action(), 2019-11-24): as of
this commit, the sequencer no longer re-reads the todo list after
writing it out with expanded commit IDs.

The only redeeming factor is that the todo list is already parsed at
that stage, including all the commits corresponding to the commands,
therefore the sequencer can continue even if the internal todo list has
short commit IDs.

That does not prevent problems, though: the sequencer writes out the
`done` and `git-rebase-todo` files incrementally (i.e. overwriting the
todo list with a version that has _short_ commit IDs), and if a merge
conflict happens, or if an `edit` or a `break` command is encountered, a
subsequent `git rebase --continue` _will_ re-read the todo list, opening
an opportunity for the "short SHA-1 collision" bug again.

To avoid that, let's make sure that we do expand the commit IDs in the
todo list as soon as we have parsed it after letting the user edit it.

Additionally, we improve the 'short SHA-1 collide' test case in t3404 to
test specifically for the case where the rebase is resumed. We also
hard-code the expected colliding short SHA-1s, to document the
expectation (and to make it easier on future readers).

Note that we specifically test that the short commit ID is used in the
`git-rebase-todo.tmp` file: this file is created by the fake editor in
the test script and reflects the state that would have been presented to
the user to edit.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-23 12:48:11 -08:00
d859dcad94 parse_insn_line(): improve error message when parsing failed
In the case that a `get_oid()` call failed, we showed some rather bogus
part of the line instead of the precise string we sent to said function.
That makes it rather hard for users to understand what is going wrong,
so let's fix that.

While at it, return a negative value from `parse_insn_line()` in case of
an error, as per our convention. This function's only caller,
`todo_list_parse_insn_buffer()`, cares only whether that return value is
non-zero or not, i.e. does not need to be changed.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-23 12:48:05 -08:00
a9472afb63 submodule.c: use get_git_dir() instead of get_git_common_dir()
Ever since df56607dff (git-common-dir: make "modules/"
per-working-directory directory, 2014-11-30), submodules in linked worktrees
are cloned to $GIT_DIR/modules, i.e. $GIT_COMMON_DIR/worktrees/<name>/modules.

However, this convention was not followed when the worktree updater commands
checkout, reset and read-tree learned to recurse into submodules. Specifically,
submodule.c::submodule_move_head, introduced in 6e3c1595c6 (update submodules:
add submodule_move_head, 2017-03-14) and submodule.c::submodule_unset_core_worktree,
(re)introduced in 898c2e65b7 (submodule: unset core.worktree if no working tree
is present, 2018-12-14) use get_git_common_dir() instead of get_git_dir()
to get the path of the submodule repository.

This means that, for example, 'git checkout --recurse-submodules <branch>'
in a linked worktree will correctly checkout <branch>, detach the submodule's HEAD
at the commit recorded in <branch> and update the submodule working tree, but the
submodule HEAD that will be moved is the one in $GIT_COMMON_DIR/modules/<name>/,
i.e. the submodule repository of the main superproject working tree.
It will also rewrite the gitfile in the submodule working tree of the linked worktree
to point to $GIT_COMMON_DIR/modules/<name>/.
This leads to an incorrect (and confusing!) state in the submodule working tree
of the main superproject worktree.

Additionally, if switching to a commit where the submodule is not present,
submodule_unset_core_worktree will be called and will incorrectly remove
'core.worktree' from the config file of the submodule in the main superproject worktree,
$GIT_COMMON_DIR/modules/<name>/config.

Fix this by constructing the path to the submodule repository using get_git_dir()
in both submodule_move_head and submodule_unset_core_worktree.
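(Schematically, the path construction switches helpers; `submodule_name`
is a stand-in:)

  struct strbuf buf = STRBUF_INIT;

  /* get_git_common_dir() is shared by all worktrees; get_git_dir()
   * resolves to $GIT_COMMON_DIR/worktrees/<name> in a linked worktree */
  strbuf_addf(&buf, "%s/modules/%s", get_git_dir(), submodule_name);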

Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-22 13:39:01 -08:00
129510a067 t2405: clarify test descriptions and simplify test
When 'checkout --to' functionality was moved to 'worktree add', tests were adapted
in f194b1ef6e (tests: worktree: retrofit "checkout --to" tests for "worktree add",
2015-07-06).

The calls were changed to 'worktree add' in this test (then t7410), but the test
descriptions were not updated, keeping 'checkout' instead of using the new
terminology (linked worktrees).

Also, in the test each worktree is created in
$TRASH_DIRECTORY/<leading-directory>/main, where the name of <leading-directory>
carries some information about what behavior each test verifies. This directory
structure is not mandatory for the tests; the worktrees can live next to one
another in the trash directory.

Clarify the tests by using the right terminology, and remove the unnecessary
leading directories such that all superproject worktrees are directly next to one
another in the trash directory.

Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-22 13:37:04 -08:00
4eaadc8493 t2405: use git -C and test_commit -C instead of subshells
The subshells used in the setup phase of this test are unnecessary.

Remove them by using 'git -C' and 'test_commit -C'.

Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-22 13:36:05 -08:00
773c60a45e t7410: rename to t2405-worktree-submodule.sh
This test was added in df56607dff (git-common-dir: make "modules/"
per-working-directory directory, 2014-11-30), back when the 'git worktree' command
did not exist and 'git checkout --to' was used to create supplementary worktrees.

Since this file contains tests for the interaction of 'git worktree' with
submodules, rename it to t2405-worktree-submodule.sh, following the naming scheme for
tests checking the behavior of various commands with submodules.

Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-22 13:36:03 -08:00
849e43cc18 built-in add -i: accept open-ended ranges again
The interactive `add` command allows selecting multiple files for some
of its sub-commands, via unique prefixes, indices or index ranges.

When re-implementing `git add -i` in C, we even added a code comment
talking about ranges with a missing end index, such as `2-`, but the
code did not actually accept those, as pointed out in
https://github.com/git-for-windows/git/issues/2466#issuecomment-574142760.

Let's fix this, and add a test case to verify that this stays fixed
forever.
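(A stand-alone sketch of the accepted syntax, mirroring the
add-interactive.c hunk further below; `nr_items` is the size of the list
being selected from:)

  #include <ctype.h>
  #include <stdlib.h>

  /* accepts "2", "2-4", and the open-ended "2-" */
  static void parse_range(const char *p, unsigned long nr_items,
                          unsigned long *from, unsigned long *to)
  {
      char *endp;

      *from = strtoul(p, &endp, 10);
      *to = *from;
      if (*endp == '-') {
          if (isdigit((unsigned char)*++endp))
              *to = strtoul(endp, &endp, 10);
          else
              *to = nr_items; /* missing end index: select through the last item */
      }
  }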

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-16 14:10:23 -08:00
d660a30ceb built-in add -i: do not try to patch/diff an empty list of files
When the user does not select any files to `patch` or `diff`, there is
no need to call `run_add_p()` on them.

Even worse: we _have_ to avoid calling `parse_pathspec()` with an empty
list because that would trigger this error:

	BUG: pathspec.c:557: PATHSPEC_PREFER_CWD requires arguments

So let's avoid doing any work on a list of files that is empty anyway.

This fixes https://github.com/git-for-windows/git/issues/2466.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-16 14:10:21 -08:00
26f924d50e unpack-trees: exit check_updates() early if updates are not wanted
check_updates() has a lot of code that repeatedly checks whether
o->update or o->dry_run is set.  (Note that o->dry_run is a
near-synonym for !o->update, but not quite as per commit 2c9078d05b
("unpack-trees: add the dry_run flag to unpack_trees_options",
2011-05-25).)  In fact, this function almost turns into a no-op whenever
the condition
   !o->update || o->dry_run
is met.  Simplify the code by checking this condition at the beginning
of the function, and when it is true, do the few things that are
relevant and return early.
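(The approximate shape of the early return; a sketch, not the exact
patch:)

  static int check_updates(struct unpack_trees_options *o)
  {
      int errs = 0;

      if (!o->update || o->dry_run) {
          /* the few things that still matter, e.g.: */
          remove_marked_cache_entries(&o->result, 0);
          return errs;
      }
      /* ... full update path: progress, attr direction, checkout_entry() ... */
      return errs;
  }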

There are a few things that make the conversion not quite obvious:
  * The fact that check_updates() does not actually turn into a no-op
    when updates are not wanted may be slightly surprising.  However,
    commit 33ecf7eb61 (Discard "deleted" cache entries after using them
    to update the working tree, 2008-02-07) put the discarding of
    unused cache entries in check_updates() so we still need to keep
    the call to remove_marked_cache_entries().  It's possible this
    call belongs in another function, but it is certainly needed as
    tests will fail if it is removed.
  * The original called remove_scheduled_dirs() unconditionally.
    Technically, commit 7847892716 (unlink_entry(): introduce
    schedule_dir_for_removal(), 2009-02-09) should have made that call
    conditional, but it didn't matter in practice because
    remove_scheduled_dirs() becomes a no-op when all the calls to
    unlink_entry() are skipped.  As such, we do not need to call it.
  * When (o->dry_run && o->update), the original would have two calls
    to git_attr_set_direction() surrounding a bunch of skipped updates.
    These two calls to git_attr_set_direction() cancel each other out
    and thus can be omitted when o->dry_run is true just as they
    already are when !o->update.
  * The code would previously call setup_collided_checkout_detection()
    and report_collided_checkout() even when o->dry_run.  However, this
    was just an expensive no-op because
    setup_collided_checkout_detection() merely cleared the CE_MATCHED
    flag for each cache entry, and report_collided_checkout() reported
    which ones had it set.  Since a dry-run would skip all the
    checkout_entry() calls, CE_MATCHED would never get set and thus
    no collisions would be reported.  Since we can't detect the
    collisions anyway without doing updates, skipping the collisions
    detection setup and reporting is an optimization.
  * The code previously would call get_progress() and
    display_progress() even when (!o->update || o->dry_run).  This
    served to show how long it took to skip all the updates, which is
    somewhat useless.  Since we are skipping the updates, we can skip
    showing how long it takes to skip them.

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-07 08:37:37 -08:00
97 changed files with 1776 additions and 509 deletions

View File

@ -0,0 +1,16 @@
Git v2.17.4 Release Notes
=========================
This release is to address the security issue: CVE-2020-5260
Fixes since v2.17.3
-------------------
* With a crafted URL that contains a newline in it, the credential
helper machinery can be fooled to give credential information for
a wrong host. The attack has been made impossible by forbidding
a newline character in any value passed via the credential
protocol.
Credit for finding the vulnerability goes to Felix Wilhelm of Google
Project Zero.

View File

@ -0,0 +1,22 @@
Git v2.17.5 Release Notes
=========================
This release is to address a security issue: CVE-2020-11008
Fixes since v2.17.4
-------------------
* With a crafted URL that contains a newline or empty host, or lacks
a scheme, the credential helper machinery can be fooled into
providing credential information that is not appropriate for the
protocol in use and host being contacted.
Unlike the vulnerability CVE-2020-5260 fixed in v2.17.4, the
credentials are not for a host of the attacker's choosing; instead,
they are for some unspecified host (based on how the configured
credential helper handles an absent "host" parameter).
The attack has been made impossible by refusing to work with
under-specified credential patterns.
Credit for finding the vulnerability goes to Carlo Arenas.

View File

@ -0,0 +1,16 @@
Git v2.17.6 Release Notes
=========================
This release addresses the security issue CVE-2021-21300.
Fixes since v2.17.5
-------------------
* CVE-2021-21300:
On case-insensitive file systems with support for symbolic links,
if Git is configured globally to apply delay-capable clean/smudge
filters (such as Git LFS), Git could be fooled into running
remote code during a clone.
Credit for finding and fixing this vulnerability goes to Matheus
Tavares, helped by Johannes Schindelin.

View File

@ -0,0 +1,5 @@
Git v2.18.3 Release Notes
=========================
This release merges the security fix that appears in v2.17.4; see
the release notes for that version for details.

View File

@ -0,0 +1,5 @@
Git v2.18.4 Release Notes
=========================
This release merges the security fix that appears in v2.17.5; see
the release notes for that version for details.

View File

@ -0,0 +1,6 @@
Git v2.18.5 Release Notes
=========================
This release merges up the fixes that appear in v2.17.6 to address
the security issue CVE-2021-21300; see the release notes for that
version for details.

View File

@ -0,0 +1,5 @@
Git v2.19.4 Release Notes
=========================
This release merges the security fix that appears in v2.17.4; see
the release notes for that version for details.

View File

@ -0,0 +1,5 @@
Git v2.19.5 Release Notes
=========================
This release merges the security fix that appears in v2.17.5; see
the release notes for that version for details.

View File

@ -0,0 +1,6 @@
Git v2.19.6 Release Notes
=========================
This release merges up the fixes that appear in v2.17.6 and
v2.18.5 to address the security issue CVE-2021-21300; see the
release notes for these versions for details.

View File

@ -0,0 +1,5 @@
Git v2.20.3 Release Notes
=========================
This release merges the security fix that appears in v2.17.4; see
the release notes for that version for details.

View File

@ -0,0 +1,5 @@
Git v2.20.4 Release Notes
=========================
This release merges the security fix that appears in v2.17.5; see
the release notes for that version for details.

View File

@ -0,0 +1,6 @@
Git v2.20.5 Release Notes
=========================
This release merges up the fixes that appear in v2.17.6, v2.18.5
and v2.19.6 to address the security issue CVE-2021-21300; see
the release notes for these versions for details.

View File

@ -0,0 +1,5 @@
Git v2.21.2 Release Notes
=========================
This release merges the security fix that appears in v2.17.4; see
the release notes for that version for details.

View File

@ -0,0 +1,5 @@
Git v2.21.3 Release Notes
=========================
This release merges the security fix that appears in v2.17.5; see
the release notes for that version for details.

View File

@ -0,0 +1,6 @@
Git v2.21.4 Release Notes
=========================
This release merges up the fixes that appear in v2.17.6, v2.18.5,
v2.19.6 and v2.20.5 to address the security issue CVE-2021-21300;
see the release notes for these versions for details.

View File

@ -0,0 +1,5 @@
Git v2.22.3 Release Notes
=========================
This release merges the security fix that appears in v2.17.4; see
the release notes for that version for details.

View File

@ -0,0 +1,5 @@
Git v2.22.4 Release Notes
=========================
This release merges the security fix that appears in v2.17.5; see
the release notes for that version for details.

View File

@ -0,0 +1,7 @@
Git v2.22.5 Release Notes
=========================
This release merges up the fixes that appear in v2.17.6,
v2.18.5, v2.19.6, v2.20.5 and v2.21.4 to address the security
issue CVE-2021-21300; see the release notes for these versions
for details.

View File

@ -0,0 +1,5 @@
Git v2.23.2 Release Notes
=========================
This release merges the security fix that appears in v2.17.4; see
the release notes for that version for details.

View File

@ -0,0 +1,5 @@
Git v2.23.3 Release Notes
=========================
This release merges the security fix that appears in v2.17.5; see
the release notes for that version for details.

View File

@ -0,0 +1,7 @@
Git v2.23.4 Release Notes
=========================
This release merges up the fixes that appear in v2.17.6, v2.18.5,
v2.19.6, v2.20.5, v2.21.4 and v2.22.5 to address the security
issue CVE-2021-21300; see the release notes for these versions
for details.

View File

@ -0,0 +1,5 @@
Git v2.24.2 Release Notes
=========================
This release merges the security fix that appears in v2.17.4; see
the release notes for that version for details.

View File

@ -0,0 +1,5 @@
Git v2.24.3 Release Notes
=========================
This release merges the security fix that appears in v2.17.5; see
the release notes for that version for details.

View File

@ -0,0 +1,7 @@
Git v2.24.4 Release Notes
=========================
This release merges up the fixes that appear in v2.17.6, v2.18.5,
v2.19.6, v2.20.5, v2.21.4, v2.22.5 and v2.23.4 to address the
security issue CVE-2021-21300; see the release notes for these
versions for details.

View File

@ -0,0 +1,60 @@
Git 2.25.2 Release Notes
========================
Fixes since v2.25.1
-------------------
* Minor bugfixes to "git add -i" that has recently been rewritten in C.
* An earlier update to show the location of working tree in the error
message did not consider the possibility that a git command may be
run in a bare repository, which has been corrected.
* The "--recurse-submodules" option of various subcommands did not
work well when run in an alternate worktree, which has been
corrected.
* Running "git rm" on a submodule failed unnecessarily when
.gitmodules is only cache-dirty, which has been corrected.
* "git rebase -i" identifies existing commits in its todo file with
their abbreviated object name, which could become ambiguous as it
goes to create new commits, and has a mechanism to avoid ambiguity
in the main part of its execution. A few other cases however were
not covered by the protection against ambiguity, which has been
corrected.
* The index-pack code now diagnoses a bad input packstream that
records the same object twice when it is used as delta base; the
code used to declare a software bug when encountering such an
input, but it is an input error.
* The code to automatically shrink the fan-out in the notes tree had
an off-by-one bug, which has been killed.
* "git check-ignore" did not work when the given path is explicitly
marked as not ignored with a negative entry in the .gitignore file.
* The merge-recursive machinery failed to refresh the cache entry for
a merge result in a couple of places, resulting in an unnecessary
merge failure, which has been fixed.
* Fix for a bug revealed by a recent change to make the protocol v2
the default.
* "git merge signed-tag" while lacking the public key started to say
"No signature", which was utterly wrong. This regression has been
reverted.
* MinGW's poll() emulation has been improved.
* "git show" and others gave an object name in raw format in its
error output, which has been corrected to give it in hex.
* Both "git ls-remote -h" and "git grep -h" give short usage help,
like any other Git subcommand, but it is not unreasonable to expect
that the former would behave the same as "git ls-remote --head"
(there is no other sensible behaviour for the latter). The
documentation has been updated in an attempt to clarify this.
Also contains various documentation updates, code clean-ups and minor fixups.

View File

@ -0,0 +1,5 @@
Git v2.25.3 Release Notes
=========================
This release merges the security fix that appears in v2.17.4; see
the release notes for that version for details.

View File

@ -0,0 +1,5 @@
Git v2.25.4 Release Notes
=========================
This release merges the security fix that appears in v2.17.5; see
the release notes for that version for details.

View File

@ -0,0 +1,7 @@
Git v2.25.5 Release Notes
=========================
This release merges up the fixes that appear in v2.17.6, v2.18.5,
v2.19.6, v2.20.5, v2.21.4, v2.22.5, v2.23.4 and v2.24.4 to address
the security issue CVE-2021-21300; see the release notes for
these versions for details.

View File

@ -79,7 +79,7 @@ higher priority configuration file (e.g. `.git/config` in a
repository) to clear the values inherited from a lower priority
configuration files (e.g. `$HOME/.gitconfig`).
+
--
----
Example:
@ -96,7 +96,7 @@ repo/.git/config
This will result in only b (a and c are cleared).
--
----
push.recurseSubmodules::
Make sure all submodule commits used by the revisions to be pushed

View File

@ -127,7 +127,7 @@ generate_render_makefile () {
while read src
do
dst=$2/${src#$1/}
printf 'all:: %s\n' "$dst"
printf 'all: %s\n' "$dst"
printf '%s: %s\n' "$dst" "$src"
printf '\t@echo >&2 " RENDER $(notdir $@)" && \\\n'
printf '\tmkdir -p $(dir $@) && \\\n'

View File

@ -30,9 +30,15 @@ OPTIONS
valid with a single pathname.
-v, --verbose::
Also output details about the matching pattern (if any)
for each given pathname. For precedence rules within and
between exclude sources, see linkgit:gitignore[5].
Instead of printing the paths that are excluded, for each path
that matches an exclude pattern, print the exclude pattern
together with the path. (Matching an exclude pattern usually
means the path is excluded, but if the pattern begins with '!'
then it is a negated pattern and matching it means the path is
NOT excluded.)
+
For precedence rules within and between exclude sources, see
linkgit:gitignore[5].
--stdin::
Read pathnames from the standard input, one per line,

View File

@ -28,7 +28,9 @@ OPTIONS
Limit to only refs/heads and refs/tags, respectively.
These options are _not_ mutually exclusive; when given
both, references stored in refs/heads and refs/tags are
displayed.
displayed. Note that `git ls-remote -h` used without
anything else on the command line gives help, consistent
with other git subcommands.
--refs::
Do not show peeled tags or pseudorefs like `HEAD` in the output.

View File

@ -126,6 +126,11 @@ usage: git describe [<options>] <commit-ish>*
--long always use long format
--abbrev[=<n>] use <n> digits to display SHA-1s
---------------------------------------------
+
Note that some subcommand (e.g. `git grep`) may behave differently
when there are things on the command line other than `-h`, but `git
subcmd -h` without anything else on the command line is meant to
consistently give the usage.
--help-all::
Some Git commands take options that are only used for plumbing or that

View File

@ -186,7 +186,94 @@ CUSTOM HELPERS
--------------
You can write your own custom helpers to interface with any system in
which you keep credentials. See credential.h for details.
which you keep credentials.
Credential helpers are programs executed by Git to fetch or save
credentials from and to long-term storage (where "long-term" is simply
longer than a single Git process; e.g., credentials may be stored
in-memory for a few minutes, or indefinitely on disk).
Each helper is specified by a single string in the configuration
variable `credential.helper` (and others, see linkgit:git-config[1]).
The string is transformed by Git into a command to be executed using
these rules:
1. If the helper string begins with "!", it is considered a shell
snippet, and everything after the "!" becomes the command.
2. Otherwise, if the helper string begins with an absolute path, the
verbatim helper string becomes the command.
3. Otherwise, the string "git credential-" is prepended to the helper
string, and the result becomes the command.
The resulting command then has an "operation" argument appended to it
(see below for details), and the result is executed by the shell.
Here are some example specifications:
----------------------------------------------------
# run "git credential-foo"
foo
# same as above, but pass an argument to the helper
foo --bar=baz
# the arguments are parsed by the shell, so use shell
# quoting if necessary
foo --bar="whitespace arg"
# you can also use an absolute path, which will not use the git wrapper
/path/to/my/helper --with-arguments
# or you can specify your own shell snippet
!f() { echo "password=`cat $HOME/.secret`"; }; f
----------------------------------------------------
Generally speaking, rule (3) above is the simplest for users to specify.
Authors of credential helpers should make an effort to assist their
users by naming their program "git-credential-$NAME", and putting it in
the `$PATH` or `$GIT_EXEC_PATH` during installation, which will allow a
user to enable it with `git config credential.helper $NAME`.
When a helper is executed, it will have one "operation" argument
appended to its command line, which is one of:
`get`::
Return a matching credential, if any exists.
`store`::
Store the credential, if applicable to the helper.
`erase`::
Remove a matching credential, if any, from the helper's storage.
The details of the credential will be provided on the helper's stdin
stream. The exact format is the same as the input/output format of the
`git credential` plumbing command (see the section `INPUT/OUTPUT
FORMAT` in linkgit:git-credential[1] for a detailed specification).
For a `get` operation, the helper should produce a list of attributes on
stdout in the same format (see linkgit:git-credential[1] for common
attributes). A helper is free to produce a subset, or even no values at
all if it has nothing useful to provide. Any provided attributes will
overwrite those already known about by Git. If a helper outputs a
`quit` attribute with a value of `true` or `1`, no further helpers will
be consulted, nor will the user be prompted (if no credential has been
provided, the operation will then fail).
For a `store` or `erase` operation, the helper's output is ignored.
If it fails to perform the requested operation, it may complain to
stderr to inform the user. If it does not support the requested
operation (e.g., a read-only store), it should silently ignore the
request.
If a helper receives any other operation, it should silently ignore the
request. This leaves room for future operations to be added (older
helpers will just ignore the new requests).
GIT
---

View File

@ -1,7 +1,7 @@
#!/bin/sh
GVF=GIT-VERSION-FILE
DEF_VER=v2.25.1
DEF_VER=v2.25.5
LF='
'

View File

@ -1 +1 @@
Documentation/RelNotes/2.25.1.txt
Documentation/RelNotes/2.25.5.txt

View File

@ -326,7 +326,10 @@ static ssize_t list_and_choose(struct add_i_state *s,
if (endp == p + sep)
to = from + 1;
else if (*endp == '-') {
to = strtoul(++endp, &endp, 10);
if (isdigit(*(++endp)))
to = strtoul(endp, &endp, 10);
else
to = items->items.nr;
/* extra characters after the range? */
if (endp != p + sep)
from = -1;
@ -913,7 +916,7 @@ static int run_patch(struct add_i_state *s, const struct pathspec *ps,
opts->prompt = N_("Patch update");
count = list_and_choose(s, files, opts);
if (count >= 0) {
if (count > 0) {
struct argv_array args = ARGV_ARRAY_INIT;
struct pathspec ps_selected = { 0 };
@ -954,7 +957,7 @@ static int run_diff(struct add_i_state *s, const struct pathspec *ps,
opts->flags = IMMEDIATE;
count = list_and_choose(s, files, opts);
opts->flags = 0;
if (count >= 0) {
if (count > 0) {
struct argv_array args = ARGV_ARRAY_INIT;
argv_array_pushl(&args, "git", "diff", "-p", "--cached",

View File

@ -5,7 +5,8 @@ jobs:
- job: windows_build
displayName: Windows Build
condition: succeeded()
pool: Hosted
pool:
vmImage: windows-latest
timeoutInMinutes: 240
steps:
- powershell: |
@ -61,7 +62,8 @@ jobs:
displayName: Windows Test
dependsOn: windows_build
condition: succeeded()
pool: Hosted
pool:
vmImage: windows-latest
timeoutInMinutes: 240
strategy:
parallel: 10
@ -133,7 +135,8 @@ jobs:
- job: vs_build
displayName: Visual Studio Build
condition: succeeded()
pool: Hosted VS2017
pool:
vmImage: windows-latest
timeoutInMinutes: 240
steps:
- powershell: |
@ -181,6 +184,7 @@ jobs:
platform: x64
configuration: Release
maximumCpuCount: 4
msbuildArguments: /p:PlatformToolset=v142
- powershell: |
& compat\vcbuild\vcpkg_copy_dlls.bat release
if (!$?) { exit(1) }
@ -224,7 +228,8 @@ jobs:
displayName: Visual Studio Test
dependsOn: vs_build
condition: succeeded()
pool: Hosted
pool:
vmImage: windows-latest
timeoutInMinutes: 240
strategy:
parallel: 10
@ -292,7 +297,8 @@ jobs:
- job: linux_clang
displayName: linux-clang
condition: succeeded()
pool: Hosted Ubuntu 1604
pool:
vmImage: ubuntu-latest
steps:
- bash: |
test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || ci/mount-fileshare.sh //gitfileshare.file.core.windows.net/test-cache gitfileshare "$GITFILESHAREPWD" "$HOME/test-cache" || exit 1
@ -330,7 +336,8 @@ jobs:
- job: linux_gcc
displayName: linux-gcc
condition: succeeded()
pool: Hosted Ubuntu 1604
pool:
vmImage: ubuntu-latest
steps:
- bash: |
test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || ci/mount-fileshare.sh //gitfileshare.file.core.windows.net/test-cache gitfileshare "$GITFILESHAREPWD" "$HOME/test-cache" || exit 1
@ -367,7 +374,8 @@ jobs:
- job: osx_clang
displayName: osx-clang
condition: succeeded()
pool: Hosted macOS
pool:
vmImage: macOS-latest
steps:
- bash: |
test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || ci/mount-fileshare.sh //gitfileshare.file.core.windows.net/test-cache gitfileshare "$GITFILESHAREPWD" "$HOME/test-cache" || exit 1
@ -402,7 +410,8 @@ jobs:
- job: osx_gcc
displayName: osx-gcc
condition: succeeded()
pool: Hosted macOS
pool:
vmImage: macOS-latest
steps:
- bash: |
test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || ci/mount-fileshare.sh //gitfileshare.file.core.windows.net/test-cache gitfileshare "$GITFILESHAREPWD" "$HOME/test-cache" || exit 1
@ -435,7 +444,8 @@ jobs:
- job: gettext_poison
displayName: GETTEXT_POISON
condition: succeeded()
pool: Hosted Ubuntu 1604
pool:
vmImage: ubuntu-latest
steps:
- bash: |
test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || ci/mount-fileshare.sh //gitfileshare.file.core.windows.net/test-cache gitfileshare "$GITFILESHAREPWD" "$HOME/test-cache" || exit 1
@ -472,7 +482,8 @@ jobs:
- job: linux32
displayName: Linux32
condition: succeeded()
pool: Hosted Ubuntu 1604
pool:
vmImage: ubuntu-latest
steps:
- bash: |
test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || ci/mount-fileshare.sh //gitfileshare.file.core.windows.net/test-cache gitfileshare "$GITFILESHAREPWD" "$HOME/test-cache" || exit 1
@ -506,7 +517,8 @@ jobs:
- job: static_analysis
displayName: StaticAnalysis
condition: succeeded()
pool: Hosted Ubuntu 1604
pool:
vmImage: ubuntu-latest
steps:
- bash: |
test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || ci/mount-fileshare.sh //gitfileshare.file.core.windows.net/test-cache gitfileshare "$GITFILESHAREPWD" "$HOME/test-cache" || exit 1
@ -526,7 +538,8 @@ jobs:
- job: documentation
displayName: Documentation
condition: succeeded()
pool: Hosted Ubuntu 1604
pool:
vmImage: ubuntu-latest
steps:
- bash: |
test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || ci/mount-fileshare.sh //gitfileshare.file.core.windows.net/test-cache gitfileshare "$GITFILESHAREPWD" "$HOME/test-cache" || exit 1

View File

@ -108,6 +108,9 @@ static int check_ignore(struct dir_struct *dir,
int dtype = DT_UNKNOWN;
pattern = last_matching_pattern(dir, &the_index,
full_path, &dtype);
if (!verbose && pattern &&
pattern->flags & PATTERN_FLAG_NEGATIVE)
pattern = NULL;
}
if (!quiet && (pattern || show_non_matching))
output_pattern(pathspec.items[i].original, pattern);

View File

@ -335,6 +335,7 @@ static void find_non_local_tags(const struct ref *refs,
struct string_list_item *remote_ref_item;
const struct ref *ref;
struct refname_hash_entry *item = NULL;
const int quick_flags = OBJECT_INFO_QUICK | OBJECT_INFO_SKIP_FETCH_OBJECT;
refname_hash_init(&existing_refs);
refname_hash_init(&remote_refs);
@ -353,10 +354,9 @@ static void find_non_local_tags(const struct ref *refs,
*/
if (ends_with(ref->name, "^{}")) {
if (item &&
!has_object_file_with_flags(&ref->old_oid,
OBJECT_INFO_QUICK) &&
!has_object_file_with_flags(&ref->old_oid, quick_flags) &&
!oidset_contains(&fetch_oids, &ref->old_oid) &&
!has_object_file_with_flags(&item->oid, OBJECT_INFO_QUICK) &&
!has_object_file_with_flags(&item->oid, quick_flags) &&
!oidset_contains(&fetch_oids, &item->oid))
clear_item(item);
item = NULL;
@ -370,7 +370,7 @@ static void find_non_local_tags(const struct ref *refs,
* fetch.
*/
if (item &&
!has_object_file_with_flags(&item->oid, OBJECT_INFO_QUICK) &&
!has_object_file_with_flags(&item->oid, quick_flags) &&
!oidset_contains(&fetch_oids, &item->oid))
clear_item(item);
@ -391,7 +391,7 @@ static void find_non_local_tags(const struct ref *refs,
* checked to see if it needs fetching.
*/
if (item &&
!has_object_file_with_flags(&item->oid, OBJECT_INFO_QUICK) &&
!has_object_file_with_flags(&item->oid, quick_flags) &&
!oidset_contains(&fetch_oids, &item->oid))
clear_item(item);

View File

@ -494,7 +494,6 @@ static void fmt_merge_msg_sigs(struct strbuf *out)
enum object_type type;
unsigned long size, len;
char *buf = read_object_file(oid, &type, &size);
struct signature_check sigc = { 0 };
struct strbuf sig = STRBUF_INIT;
if (!buf || type != OBJ_TAG)
@ -503,12 +502,10 @@ static void fmt_merge_msg_sigs(struct strbuf *out)
if (size == len)
; /* merely annotated */
else if (!check_signature(buf, len, buf + len, size - len,
&sigc)) {
strbuf_addstr(&sig, sigc.gpg_output);
signature_check_clear(&sigc);
} else
strbuf_addstr(&sig, "gpg verification failed.\n");
else if (verify_signed_buffer(buf, len, buf + len, size - len, &sig, NULL)) {
if (!sig.len)
strbuf_addstr(&sig, "gpg verification failed.\n");
}
if (!tag_number++) {
fmt_tag_signature(&tagbuf, &sig, buf, len);

View File

@ -1003,7 +1003,9 @@ static struct base_data *find_unresolved_deltas_1(struct base_data *base,
if (!compare_and_swap_type(&child->real_type, OBJ_REF_DELTA,
base->obj->real_type))
BUG("child->real_type != OBJ_REF_DELTA");
die("REF_DELTA at offset %"PRIuMAX" already resolved (duplicate base %s?)",
(uintmax_t)child->idx.offset,
oid_to_hex(&base->obj->idx.oid));
resolve_delta(child, base, result);
if (base->ref_first == base->ref_last && base->ofs_last == -1)

View File

@ -1712,6 +1712,7 @@ int has_symlink_leading_path(const char *name, int len);
int threaded_has_symlink_leading_path(struct cache_def *, const char *, int);
int check_leading_path(const char *name, int len);
int has_dirs_only_path(const char *name, int len, int prefix_len);
void invalidate_lstat_cache(void);
void schedule_dir_for_removal(const char *name, int len);
void remove_scheduled_dirs(void);

View File

@ -40,11 +40,11 @@ osx-clang|osx-gcc)
test -z "$BREW_INSTALL_PACKAGES" ||
brew install $BREW_INSTALL_PACKAGES
brew link --force gettext
brew cask install perforce || {
brew cask install --no-quarantine perforce || {
# Update the definitions and try again
cask_repo="$(brew --repository)"/Library/Taps/homebrew/homebrew-cask &&
git -C "$cask_repo" pull --no-stat &&
brew cask install perforce
brew cask install --no-quarantine perforce
} ||
brew install caskroom/cask/perforce
case "$jobname" in

View File

@ -13,6 +13,19 @@
static const int delay[] = { 0, 1, 10, 20, 40 };
void open_in_gdb(void)
{
static struct child_process cp = CHILD_PROCESS_INIT;
extern char *_pgmptr;
argv_array_pushl(&cp.args, "mintty", "gdb", NULL);
argv_array_pushf(&cp.args, "--pid=%d", getpid());
cp.clean_on_exit = 1;
if (start_command(&cp) < 0)
die_errno("Could not start gdb");
sleep(1);
}
int err_win_to_posix(DWORD winerr)
{
int error = ENOSYS;
@ -351,6 +364,8 @@ int mingw_rmdir(const char *pathname)
ask_yes_no_if_possible("Deletion of directory '%s' failed. "
"Should I try again?", pathname))
ret = _wrmdir(wpathname);
if (!ret)
invalidate_lstat_cache();
return ret;
}

View File

@ -598,6 +598,16 @@ extern CRITICAL_SECTION pinfo_cs;
int wmain(int argc, const wchar_t **w_argv);
int main(int argc, const char **argv);
/*
* For debugging: if a problem occurs, say, in a Git process that is spawned
* from another Git process which in turn is spawned from yet another Git
* process, it can be quite daunting to figure out what is going on.
*
* Call this function to open a new MinTTY (this assumes you are in Git for
* Windows' SDK) with a GDB that attaches to the current process right away.
*/
extern void open_in_gdb(void);
/*
* Used by Pthread API implementation for Windows
*/

View File

@ -135,8 +135,10 @@ extern "C" {
alignment relative to 0. */
#define __PTR_ALIGN(B, P, A) \
__BPTR_ALIGN (sizeof (PTR_INT_TYPE) < sizeof (void *) ? (B) : (char *) 0, \
P, A)
(sizeof (PTR_INT_TYPE) < sizeof(void *) ? \
__BPTR_ALIGN((B), (P), (A)) : \
(void *)__BPTR_ALIGN((PTR_INT_TYPE)(void *)0, (PTR_INT_TYPE)(P), (A)) \
)
#include <string.h>

View File

@ -139,22 +139,10 @@ win32_compute_revents (HANDLE h, int *p_sought)
INPUT_RECORD *irbuffer;
DWORD avail, nbuffer;
BOOL bRet;
IO_STATUS_BLOCK iosb;
FILE_PIPE_LOCAL_INFORMATION fpli;
static PNtQueryInformationFile NtQueryInformationFile;
static BOOL once_only;
switch (GetFileType (h))
{
case FILE_TYPE_PIPE:
if (!once_only)
{
NtQueryInformationFile = (PNtQueryInformationFile)(void (*)(void))
GetProcAddress (GetModuleHandleW (L"ntdll.dll"),
"NtQueryInformationFile");
once_only = TRUE;
}
happened = 0;
if (PeekNamedPipe (h, NULL, 0, NULL, &avail, NULL) != 0)
{
@ -166,22 +154,9 @@ win32_compute_revents (HANDLE h, int *p_sought)
else
{
/* It was the write-end of the pipe. Check if it is writable.
If NtQueryInformationFile fails, optimistically assume the pipe is
writable. This could happen on Win9x, where NtQueryInformationFile
is not available, or if we inherit a pipe that doesn't permit
FILE_READ_ATTRIBUTES access on the write end (I think this should
not happen since WinXP SP2; WINE seems fine too). Otherwise,
ensure that enough space is available for atomic writes. */
memset (&iosb, 0, sizeof (iosb));
memset (&fpli, 0, sizeof (fpli));
if (!NtQueryInformationFile
|| NtQueryInformationFile (h, &iosb, &fpli, sizeof (fpli),
FilePipeLocalInformation)
|| fpli.WriteQuotaAvailable >= PIPE_BUF
|| (fpli.OutboundQuota < PIPE_BUF &&
fpli.WriteQuotaAvailable == fpli.OutboundQuota))
/* It was the write-end of the pipe. Unfortunately there is no
reliable way of knowing if it can be written without blocking.
Just say that it's all good. */
happened |= *p_sought & (POLLOUT | POLLWRNORM | POLLWRBAND);
}
return happened;


@ -89,6 +89,11 @@ static int proto_is_http(const char *s)
static void credential_apply_config(struct credential *c)
{
if (!c->host)
die(_("refusing to work with credential missing host field"));
if (!c->protocol)
die(_("refusing to work with credential missing protocol field"));
if (c->configured)
return;
git_config(credential_config_callback, c);
@ -191,20 +196,25 @@ int credential_read(struct credential *c, FILE *fp)
return 0;
}
static void credential_write_item(FILE *fp, const char *key, const char *value)
static void credential_write_item(FILE *fp, const char *key, const char *value,
int required)
{
if (!value && required)
BUG("credential value for %s is missing", key);
if (!value)
return;
if (strchr(value, '\n'))
die("credential value for %s contains newline", key);
fprintf(fp, "%s=%s\n", key, value);
}
void credential_write(const struct credential *c, FILE *fp)
{
credential_write_item(fp, "protocol", c->protocol);
credential_write_item(fp, "host", c->host);
credential_write_item(fp, "path", c->path);
credential_write_item(fp, "username", c->username);
credential_write_item(fp, "password", c->password);
credential_write_item(fp, "protocol", c->protocol, 1);
credential_write_item(fp, "host", c->host, 1);
credential_write_item(fp, "path", c->path, 0);
credential_write_item(fp, "username", c->username, 0);
credential_write_item(fp, "password", c->password, 0);
}
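
For orientation, the attribute stream credential_write() produces for a fully populated credential looks like this (values are illustrative); after the change above, a NULL protocol or host trips the BUG() check instead of being silently skipped, while path, username and password stay optional:

	protocol=https
	host=example.com
	path=repo.git
	username=me
	password=secret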
static int run_credential_helper(struct credential *c,
@ -322,7 +332,22 @@ void credential_reject(struct credential *c)
c->approved = 0;
}
void credential_from_url(struct credential *c, const char *url)
static int check_url_component(const char *url, int quiet,
const char *name, const char *value)
{
if (!value)
return 0;
if (!strchr(value, '\n'))
return 0;
if (!quiet)
warning(_("url contains a newline in its %s component: %s"),
name, url);
return -1;
}
int credential_from_url_gently(struct credential *c, const char *url,
int quiet)
{
const char *at, *colon, *cp, *slash, *host, *proto_end;
@ -335,8 +360,11 @@ void credential_from_url(struct credential *c, const char *url)
* (3) proto://<user>:<pass>@<host>/...
*/
proto_end = strstr(url, "://");
if (!proto_end)
return;
if (!proto_end || proto_end == url) {
if (!quiet)
warning(_("url has no scheme: %s"), url);
return -1;
}
cp = proto_end + 3;
at = strchr(cp, '@');
colon = strchr(cp, ':');
@ -357,10 +385,8 @@ void credential_from_url(struct credential *c, const char *url)
host = at + 1;
}
if (proto_end - url > 0)
c->protocol = xmemdupz(url, proto_end - url);
if (slash - host > 0)
c->host = url_decode_mem(host, slash - host);
c->protocol = xmemdupz(url, proto_end - url);
c->host = url_decode_mem(host, slash - host);
/* Trim leading and trailing slashes from path */
while (*slash == '/')
slash++;
@ -371,4 +397,19 @@ void credential_from_url(struct credential *c, const char *url)
while (p > c->path && *p == '/')
*p-- = '\0';
}
if (check_url_component(url, quiet, "username", c->username) < 0 ||
check_url_component(url, quiet, "password", c->password) < 0 ||
check_url_component(url, quiet, "protocol", c->protocol) < 0 ||
check_url_component(url, quiet, "host", c->host) < 0 ||
check_url_component(url, quiet, "path", c->path) < 0)
return -1;
return 0;
}
void credential_from_url(struct credential *c, const char *url)
{
if (credential_from_url_gently(c, url, 0) < 0)
die(_("credential url cannot be parsed: %s"), url);
}
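
A minimal caller sketch for the new entry point (not part of the patch; it assumes credential.h and the existing credential_clear() helper):

	struct credential c = CREDENTIAL_INIT;

	/* quiet=1 probes the URL without writing to stderr */
	if (credential_from_url_gently(&c, url, 1) < 0)
		warning("ignoring unparseable credential url");
	credential_clear(&c);	/* drop any partially parsed state */

Unlike the die()ing wrapper above, this lets a caller treat a malformed or newline-smuggling URL as a soft failure.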


@ -90,96 +90,6 @@
* return status;
* }
* -----------------------------------------------------------------------
*
* Credential Helpers
* ------------------
*
* Credential helpers are programs executed by Git to fetch or save
* credentials from and to long-term storage (where "long-term" is simply
* longer than a single Git process; e.g., credentials may be stored
* in-memory for a few minutes, or indefinitely on disk).
*
* Each helper is specified by a single string in the configuration
* variable `credential.helper` (and others, see Documentation/git-config.txt).
* The string is transformed by Git into a command to be executed using
* these rules:
*
* 1. If the helper string begins with "!", it is considered a shell
* snippet, and everything after the "!" becomes the command.
*
* 2. Otherwise, if the helper string begins with an absolute path, the
* verbatim helper string becomes the command.
*
* 3. Otherwise, the string "git credential-" is prepended to the helper
* string, and the result becomes the command.
*
* The resulting command then has an "operation" argument appended to it
* (see below for details), and the result is executed by the shell.
*
* Here are some example specifications:
*
* ----------------------------------------------------
* # run "git credential-foo"
* foo
*
* # same as above, but pass an argument to the helper
* foo --bar=baz
*
* # the arguments are parsed by the shell, so use shell
* # quoting if necessary
* foo --bar="whitespace arg"
*
* # you can also use an absolute path, which will not use the git wrapper
* /path/to/my/helper --with-arguments
*
* # or you can specify your own shell snippet
* !f() { echo "password=`cat $HOME/.secret`"; }; f
* ----------------------------------------------------
*
* Generally speaking, rule (3) above is the simplest for users to specify.
* Authors of credential helpers should make an effort to assist their
* users by naming their program "git-credential-$NAME", and putting it in
* the $PATH or $GIT_EXEC_PATH during installation, which will allow a user
* to enable it with `git config credential.helper $NAME`.
*
* When a helper is executed, it will have one "operation" argument
* appended to its command line, which is one of:
*
* `get`::
*
* Return a matching credential, if any exists.
*
* `store`::
*
* Store the credential, if applicable to the helper.
*
* `erase`::
*
* Remove a matching credential, if any, from the helper's storage.
*
* The details of the credential will be provided on the helper's stdin
* stream. The exact format is the same as the input/output format of the
* `git credential` plumbing command (see the section `INPUT/OUTPUT
* FORMAT` in Documentation/git-credential.txt for a detailed specification).
*
* For a `get` operation, the helper should produce a list of attributes
* on stdout in the same format. A helper is free to produce a subset, or
* even no values at all if it has nothing useful to provide. Any provided
* attributes will overwrite those already known about by Git. If a helper
* outputs a `quit` attribute with a value of `true` or `1`, no further
* helpers will be consulted, nor will the user be prompted (if no
* credential has been provided, the operation will then fail).
*
* For a `store` or `erase` operation, the helper's output is ignored.
* If it fails to perform the requested operation, it may complain to
* stderr to inform the user. If it does not support the requested
* operation (e.g., a read-only store), it should silently ignore the
* request.
*
* If a helper receives any other operation, it should silently ignore the
* request. This leaves room for future operations to be added (older
* helpers will just ignore the new requests).
*
*/
@ -262,8 +172,21 @@ void credential_reject(struct credential *);
int credential_read(struct credential *, FILE *);
void credential_write(const struct credential *, FILE *);
/* Parse a URL into broken-down credential fields. */
/*
* Parse a url into a credential struct, replacing any existing contents.
*
* If the url can't be parsed (e.g., a missing "proto://" component), the
* resulting credential will be empty and the "gently" form will return an
* error (plus a warning on stderr unless quiet is set).
*
* If we encounter a component which cannot be represented as a credential
* value (e.g., because it contains a newline), the "gently" form will return
* an error but leave the broken state in the credential object for further
* examination. The non-gentle form warns and then die()s, so callers never
* see the broken state.
*/
void credential_from_url(struct credential *, const char *url);
int credential_from_url_gently(struct credential *, const char *url, int quiet);
int credential_match(const struct credential *have,
const struct credential *want);

fsck.c

@ -9,12 +9,14 @@
#include "tag.h"
#include "fsck.h"
#include "refs.h"
#include "url.h"
#include "utf8.h"
#include "decorate.h"
#include "oidset.h"
#include "packfile.h"
#include "submodule-config.h"
#include "config.h"
#include "credential.h"
#include "help.h"
static struct oidset gitmodules_found = OIDSET_INIT;
@ -910,6 +912,149 @@ done:
return ret;
}
/*
* Like builtin/submodule--helper.c's starts_with_dot_slash, but without
* relying on the platform-dependent is_dir_sep helper.
*
* This is for use in checking whether a submodule URL is interpreted as
* relative to the current directory on any platform, since \ is a
* directory separator on Windows but not on other platforms.
*/
static int starts_with_dot_slash(const char *str)
{
return str[0] == '.' && (str[1] == '/' || str[1] == '\\');
}
/*
* Like starts_with_dot_slash, this is a variant of submodule--helper's
* helper of the same name with the twist that it accepts backslash as a
* directory separator even on non-Windows platforms.
*/
static int starts_with_dot_dot_slash(const char *str)
{
return str[0] == '.' && starts_with_dot_slash(str + 1);
}
static int submodule_url_is_relative(const char *url)
{
return starts_with_dot_slash(url) || starts_with_dot_dot_slash(url);
}
/*
* Count directory components that a relative submodule URL should chop
* from the remote_url it is to be resolved against.
*
* In other words, this counts "../" components at the start of a
* submodule URL.
*
* Returns the number of directory components to chop and writes a
* pointer to the next character of url after all leading "./" and
* "../" components to out.
*/
static int count_leading_dotdots(const char *url, const char **out)
{
int result = 0;
while (1) {
if (starts_with_dot_dot_slash(url)) {
result++;
url += strlen("../");
continue;
}
if (starts_with_dot_slash(url)) {
url += strlen("./");
continue;
}
*out = url;
return result;
}
}
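
A worked example, following the loop above: count_leading_dotdots("../.././repo.git", &next) returns 2 and leaves next pointing at "repo.git"; the lone "./" component is skipped but not counted.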
/*
* Check whether a transport is implemented by git-remote-curl.
*
* If it is, returns 1 and writes the URL that would be passed to
* git-remote-curl to the "out" parameter.
*
* Otherwise, returns 0 and leaves "out" untouched.
*
* Examples:
* http::https://example.com/repo.git -> 1, https://example.com/repo.git
* https://example.com/repo.git -> 1, https://example.com/repo.git
* git://example.com/repo.git -> 0
*
* This is for use in checking for previously exploitable bugs that
* required a submodule URL to be passed to git-remote-curl.
*/
static int url_to_curl_url(const char *url, const char **out)
{
/*
* We don't need to check for case-aliases, "http.exe", and so
* on because in the default configuration, is_transport_allowed
* prevents URLs with those schemes from being cloned
* automatically.
*/
if (skip_prefix(url, "http::", out) ||
skip_prefix(url, "https::", out) ||
skip_prefix(url, "ftp::", out) ||
skip_prefix(url, "ftps::", out))
return 1;
if (starts_with(url, "http://") ||
starts_with(url, "https://") ||
starts_with(url, "ftp://") ||
starts_with(url, "ftps://")) {
*out = url;
return 1;
}
return 0;
}
static int check_submodule_url(const char *url)
{
const char *curl_url;
if (looks_like_command_line_option(url))
return -1;
if (submodule_url_is_relative(url)) {
char *decoded;
const char *next;
int has_nl;
/*
* This could be appended to an http URL and url-decoded;
* check for malicious characters.
*/
decoded = url_decode(url);
has_nl = !!strchr(decoded, '\n');
free(decoded);
if (has_nl)
return -1;
/*
* URLs which escape their root via "../" can overwrite
* the host field and previous components, resolving to
* URLs like https::example.com/submodule.git and
* https:///example.com/submodule.git that were
* susceptible to CVE-2020-11008.
*/
if (count_leading_dotdots(url, &next) > 0 &&
(*next == ':' || *next == '/'))
return -1;
}
else if (url_to_curl_url(url, &curl_url)) {
struct credential c = CREDENTIAL_INIT;
int ret = 0;
if (credential_from_url_gently(&c, curl_url, 1) ||
!*c.host)
ret = -1;
credential_clear(&c);
return ret;
}
return 0;
}
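
Some concrete verdicts, traced through the checks above (illustrative, not an exhaustive test matrix):

	-u./payload                       rejected (looks like a command-line option)
	./%0afoo                          rejected (relative; decodes to an embedded newline)
	..//example.com/sub.git           rejected ("../" escape followed by '/')
	https:///example.com/sub.git      rejected (parses to an empty host)
	../other/sub.git                  accepted (ordinary relative submodule URL)
	http::https://example.com/a.git   checked as https://example.com/a.git, accepted
	https://example.com/sub.git       accepted (non-empty host)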
struct fsck_gitmodules_data {
const struct object_id *oid;
struct fsck_options *options;
@ -935,7 +1080,7 @@ static int fsck_gitmodules_fn(const char *var, const char *value, void *vdata)
"disallowed submodule name: %s",
name);
if (!strcmp(key, "url") && value &&
looks_like_command_line_option(value))
check_submodule_url(value) < 0)
data->ret |= report(data->options,
data->oid, OBJ_BLOB,
FSCK_MSG_GITMODULES_URL,


@ -345,6 +345,11 @@ static inline int noop_core_config(const char *var, const char *value, void *cb)
#define platform_core_config noop_core_config
#endif
int lstat_cache_aware_rmdir(const char *path);
#if !defined(__MINGW32__) && !defined(_MSC_VER)
#define rmdir lstat_cache_aware_rmdir
#endif
#ifndef has_dos_drive_prefix
static inline int git_has_dos_drive_prefix(const char *path)
{


@ -207,55 +207,6 @@ found_duplicate_status:
FREE_AND_NULL(sigc->key);
}
static int verify_signed_buffer(const char *payload, size_t payload_size,
const char *signature, size_t signature_size,
struct strbuf *gpg_output,
struct strbuf *gpg_status)
{
struct child_process gpg = CHILD_PROCESS_INIT;
struct gpg_format *fmt;
struct tempfile *temp;
int ret;
struct strbuf buf = STRBUF_INIT;
temp = mks_tempfile_t(".git_vtag_tmpXXXXXX");
if (!temp)
return error_errno(_("could not create temporary file"));
if (write_in_full(temp->fd, signature, signature_size) < 0 ||
close_tempfile_gently(temp) < 0) {
error_errno(_("failed writing detached signature to '%s'"),
temp->filename.buf);
delete_tempfile(&temp);
return -1;
}
fmt = get_format_by_sig(signature);
if (!fmt)
BUG("bad signature '%s'", signature);
argv_array_push(&gpg.args, fmt->program);
argv_array_pushv(&gpg.args, fmt->verify_args);
argv_array_pushl(&gpg.args,
"--status-fd=1",
"--verify", temp->filename.buf, "-",
NULL);
if (!gpg_status)
gpg_status = &buf;
sigchain_push(SIGPIPE, SIG_IGN);
ret = pipe_command(&gpg, payload, payload_size,
gpg_status, 0, gpg_output, 0);
sigchain_pop(SIGPIPE);
delete_tempfile(&temp);
ret |= !strstr(gpg_status->buf, "\n[GNUPG:] GOODSIG ");
strbuf_release(&buf); /* no matter it was used or not */
return ret;
}
int check_signature(const char *payload, size_t plen, const char *signature,
size_t slen, struct signature_check *sigc)
{
@ -400,3 +351,51 @@ int sign_buffer(struct strbuf *buffer, struct strbuf *signature, const char *sig
return 0;
}
int verify_signed_buffer(const char *payload, size_t payload_size,
const char *signature, size_t signature_size,
struct strbuf *gpg_output, struct strbuf *gpg_status)
{
struct child_process gpg = CHILD_PROCESS_INIT;
struct gpg_format *fmt;
struct tempfile *temp;
int ret;
struct strbuf buf = STRBUF_INIT;
temp = mks_tempfile_t(".git_vtag_tmpXXXXXX");
if (!temp)
return error_errno(_("could not create temporary file"));
if (write_in_full(temp->fd, signature, signature_size) < 0 ||
close_tempfile_gently(temp) < 0) {
error_errno(_("failed writing detached signature to '%s'"),
temp->filename.buf);
delete_tempfile(&temp);
return -1;
}
fmt = get_format_by_sig(signature);
if (!fmt)
BUG("bad signature '%s'", signature);
argv_array_push(&gpg.args, fmt->program);
argv_array_pushv(&gpg.args, fmt->verify_args);
argv_array_pushl(&gpg.args,
"--status-fd=1",
"--verify", temp->filename.buf, "-",
NULL);
if (!gpg_status)
gpg_status = &buf;
sigchain_push(SIGPIPE, SIG_IGN);
ret = pipe_command(&gpg, payload, payload_size,
gpg_status, 0, gpg_output, 0);
sigchain_pop(SIGPIPE);
delete_tempfile(&temp);
ret |= !strstr(gpg_status->buf, "\n[GNUPG:] GOODSIG ");
strbuf_release(&buf); /* no matter it was used or not */
return ret;
}
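
Since verify_signed_buffer() is exported again, a minimal caller sketch (not part of the patch; buf and len stand for a signed buffer already in memory):

	size_t payload_len = parse_signature(buf, len);
	struct strbuf gpg_output = STRBUF_INIT;

	/* non-zero: no GOODSIG status line, or gpg itself failed */
	if (verify_signed_buffer(buf, payload_len,
				 buf + payload_len, len - payload_len,
				 &gpg_output, NULL))
		error("signature did not verify:\n%s", gpg_output.buf);
	strbuf_release(&gpg_output);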


@ -46,6 +46,15 @@ size_t parse_signature(const char *buf, size_t size);
int sign_buffer(struct strbuf *buffer, struct strbuf *signature,
const char *signing_key);
/*
* Run "gpg" to see if the payload matches the detached signature.
* gpg_output, when set, receives the diagnostic output from GPG.
* gpg_status, when set, receives the status output from GPG.
*/
int verify_signed_buffer(const char *payload, size_t payload_size,
const char *signature, size_t signature_size,
struct strbuf *gpg_output, struct strbuf *gpg_status);
int git_gpg_config(const char *, const char *, void *);
void set_signing_key(const char *);
const char *get_signing_key(void);

http.c

@ -558,6 +558,7 @@ static int has_cert_password(void)
return 0;
if (!cert_auth.password) {
cert_auth.protocol = xstrdup("cert");
cert_auth.host = xstrdup("");
cert_auth.username = xstrdup("");
cert_auth.path = xstrdup(ssl_cert);
credential_fill(&cert_auth);


@ -449,22 +449,22 @@ static void show_signature(struct rev_info *opt, struct commit *commit)
{
struct strbuf payload = STRBUF_INIT;
struct strbuf signature = STRBUF_INIT;
struct signature_check sigc = { 0 };
struct strbuf gpg_output = STRBUF_INIT;
int status;
if (parse_signed_commit(commit, &payload, &signature) <= 0)
goto out;
status = check_signature(payload.buf, payload.len, signature.buf,
signature.len, &sigc);
if (status && sigc.result == 'N')
show_sig_lines(opt, status, "No signature\n");
else {
show_sig_lines(opt, status, sigc.gpg_output);
signature_check_clear(&sigc);
}
status = verify_signed_buffer(payload.buf, payload.len,
signature.buf, signature.len,
&gpg_output, NULL);
if (status && !gpg_output.len)
strbuf_addstr(&gpg_output, "No signature\n");
show_sig_lines(opt, status, gpg_output.buf);
out:
strbuf_release(&gpg_output);
strbuf_release(&payload);
strbuf_release(&signature);
}
@ -497,7 +497,6 @@ static int show_one_mergetag(struct commit *commit,
struct object_id oid;
struct tag *tag;
struct strbuf verify_message;
struct signature_check sigc = { 0 };
int status, nth;
size_t payload_size, gpg_message_offset;
@ -516,7 +515,7 @@ static int show_one_mergetag(struct commit *commit,
"merged tag '%s'\n", tag->tag);
else if ((nth = which_parent(&tag->tagged->oid, commit)) < 0)
strbuf_addf(&verify_message, "tag %s names a non-parent %s\n",
tag->tag, tag->tagged->oid.hash);
tag->tag, oid_to_hex(&tag->tagged->oid));
else
strbuf_addf(&verify_message,
"parent #%d, tagged '%s'\n", nth + 1, tag->tag);
@ -526,13 +525,12 @@ static int show_one_mergetag(struct commit *commit,
status = -1;
if (extra->len > payload_size) {
/* could have a good signature */
if (!check_signature(extra->value, payload_size,
extra->value + payload_size,
extra->len - payload_size, &sigc)) {
strbuf_addstr(&verify_message, sigc.gpg_output);
signature_check_clear(&sigc);
if (!verify_signed_buffer(extra->value, payload_size,
extra->value + payload_size,
extra->len - payload_size,
&verify_message, NULL))
status = 0; /* good */
} else if (verify_message.len <= gpg_message_offset)
else if (verify_message.len <= gpg_message_offset)
strbuf_addstr(&verify_message, "No signature\n");
/* otherwise we couldn't verify, which is shown as bad */
}


@ -998,10 +998,13 @@ static int update_file_flags(struct merge_options *opt,
free(buf);
}
update_index:
if (!ret && update_cache)
if (add_cacheinfo(opt, contents, path, 0, update_wd,
if (!ret && update_cache) {
int refresh = (!opt->priv->call_depth &&
contents->mode != S_IFGITLINK);
if (add_cacheinfo(opt, contents, path, 0, refresh,
ADD_CACHE_OK_TO_ADD))
return -1;
}
return ret;
}
@ -1712,6 +1715,14 @@ static char *find_path_for_conflict(struct merge_options *opt,
return new_path;
}
/*
* Toggle the stage number between "ours" and "theirs" (2 and 3).
*/
static inline int flip_stage(int stage)
{
return (2 + 3) - stage;
}
static int handle_rename_rename_1to2(struct merge_options *opt,
struct rename_conflict_info *ci)
{
@ -1756,14 +1767,14 @@ static int handle_rename_rename_1to2(struct merge_options *opt,
* such cases, we should keep the added file around,
* resolving the conflict at that path in its favor.
*/
add = &ci->ren1->dst_entry->stages[2 ^ 1];
add = &ci->ren1->dst_entry->stages[flip_stage(2)];
if (is_valid(add)) {
if (update_file(opt, 0, add, a->path))
return -1;
}
else
remove_file_from_index(opt->repo->index, a->path);
add = &ci->ren2->dst_entry->stages[3 ^ 1];
add = &ci->ren2->dst_entry->stages[flip_stage(3)];
if (is_valid(add)) {
if (update_file(opt, 0, add, b->path))
return -1;
@ -1776,7 +1787,7 @@ static int handle_rename_rename_1to2(struct merge_options *opt,
* rename/add collision. If not, we can write the file out
* to the specified location.
*/
add = &ci->ren1->dst_entry->stages[2 ^ 1];
add = &ci->ren1->dst_entry->stages[flip_stage(2)];
if (is_valid(add)) {
add->path = mfi.blob.path = a->path;
if (handle_file_collision(opt, a->path,
@ -1797,7 +1808,7 @@ static int handle_rename_rename_1to2(struct merge_options *opt,
return -1;
}
add = &ci->ren2->dst_entry->stages[3 ^ 1];
add = &ci->ren2->dst_entry->stages[flip_stage(3)];
if (is_valid(add)) {
add->path = mfi.blob.path = b->path;
if (handle_file_collision(opt, b->path,
@ -1846,7 +1857,7 @@ static int handle_rename_rename_2to1(struct merge_options *opt,
path_side_1_desc = xstrfmt("version of %s from %s", path, a->path);
path_side_2_desc = xstrfmt("version of %s from %s", path, b->path);
ostage1 = ci->ren1->branch == opt->branch1 ? 3 : 2;
ostage2 = ostage1 ^ 1;
ostage2 = flip_stage(ostage1);
ci->ren1->src_entry->stages[ostage1].path = a->path;
ci->ren2->src_entry->stages[ostage2].path = b->path;
if (merge_mode_and_contents(opt, a, c1,

notes.c

@ -576,16 +576,16 @@ redo:
* the note tree that have not yet been explored. There
* is a direct relationship between subtree entries at
* level 'n' in the tree, and the 'fanout' variable:
* Subtree entries at level 'n <= 2 * fanout' should be
* Subtree entries at level 'n < 2 * fanout' should be
* preserved, since they correspond exactly to a fanout
* directory in the on-disk structure. However, subtree
* entries at level 'n > 2 * fanout' should NOT be
* entries at level 'n >= 2 * fanout' should NOT be
* preserved, but rather consolidated into the above
* notes tree level. We achieve this by unconditionally
* unpacking subtree entries that exist below the
* threshold level at 'n = 2 * fanout'.
*/
if (n <= 2 * fanout &&
if (n < 2 * fanout &&
flags & FOR_EACH_NOTE_YIELD_SUBTREES) {
/* invoke callback with subtree */
unsigned int path_len =
@ -602,7 +602,7 @@ redo:
path,
cb_data);
}
if (n > fanout * 2 ||
if (n >= 2 * fanout ||
!(flags & FOR_EACH_NOTE_DONT_UNPACK_SUBTREES)) {
/* unpack subtree and resume traversal */
tree->a[i] = NULL;
@ -723,13 +723,15 @@ static int write_each_note_helper(struct tree_write_stack *tws,
struct write_each_note_data {
struct tree_write_stack *root;
struct non_note *next_non_note;
struct non_note **nn_list;
struct non_note *nn_prev;
};
static int write_each_non_note_until(const char *note_path,
struct write_each_note_data *d)
{
struct non_note *n = d->next_non_note;
struct non_note *p = d->nn_prev;
struct non_note *n = p ? p->next : *d->nn_list;
int cmp = 0, ret;
while (n && (!note_path || (cmp = strcmp(n->path, note_path)) <= 0)) {
if (note_path && cmp == 0)
@ -740,9 +742,10 @@ static int write_each_non_note_until(const char *note_path,
if (ret)
return ret;
}
p = n;
n = n->next;
}
d->next_non_note = n;
d->nn_prev = p;
return 0;
}
@ -1177,7 +1180,8 @@ int write_notes_tree(struct notes_tree *t, struct object_id *result)
strbuf_init(&root.buf, 256 * (32 + the_hash_algo->hexsz)); /* assume 256 entries */
root.path[0] = root.path[1] = '\0';
cb_data.root = &root;
cb_data.next_non_note = t->first_non_note;
cb_data.nn_list = &(t->first_non_note);
cb_data.nn_prev = NULL;
/* Write tree objects representing current notes tree */
flags = FOR_EACH_NOTE_DONT_UNPACK_SUBTREES |


@ -438,8 +438,13 @@ static void init_pathspec_item(struct pathspec_item *item, unsigned flags,
} else {
match = prefix_path_gently(prefix, prefixlen,
&prefixlen, copyfrom);
if (!match)
die(_("%s: '%s' is outside repository"), elt, copyfrom);
if (!match) {
const char *hint_path = get_git_work_tree();
if (!hint_path)
hint_path = get_git_dir();
die(_("%s: '%s' is outside repository at '%s'"), elt,
copyfrom, absolute_path(hint_path));
}
}
item->match = match;


@ -104,9 +104,11 @@ int edit_todo_list(struct repository *r, struct todo_list *todo_list,
-1, flags | TODO_LIST_SHORTEN_IDS | TODO_LIST_APPEND_TODO_HELP))
return error_errno(_("could not write '%s'"), todo_file);
if (initial && copy_file(rebase_path_todo_backup(), todo_file, 0666))
return error(_("could not copy '%s' to '%s'."), todo_file,
rebase_path_todo_backup());
if (initial &&
todo_list_write_to_file(r, todo_list, rebase_path_todo_backup(),
shortrevisions, shortonto, -1,
(flags | TODO_LIST_APPEND_TODO_HELP) & ~TODO_LIST_SHORTEN_IDS) < 0)
return error(_("could not write '%s'."), rebase_path_todo_backup());
if (launch_sequence_editor(todo_file, &new_todo->buf, NULL))
return -2;


@ -989,6 +989,7 @@ int finish_command(struct child_process *cmd)
int ret = wait_or_whine(cmd->pid, cmd->argv[0], 0);
trace2_child_exit(cmd, ret);
child_process_clear(cmd);
invalidate_lstat_cache();
return ret;
}
@ -1289,13 +1290,19 @@ error:
int finish_async(struct async *async)
{
#ifdef NO_PTHREADS
return wait_or_whine(async->pid, "child process", 0);
int ret = wait_or_whine(async->pid, "child process", 0);
invalidate_lstat_cache();
return ret;
#else
void *ret = (void *)(intptr_t)(-1);
if (pthread_join(async->tid, &ret))
error("pthread_join failed");
invalidate_lstat_cache();
return (int)(intptr_t)ret;
#endif
}


@ -116,7 +116,7 @@ struct child_process {
unsigned no_stdin:1;
unsigned no_stdout:1;
unsigned no_stderr:1;
unsigned git_cmd:1; /* if this is to be git sub-command */
unsigned git_cmd:1; /* if this is to be git sub-command */
/**
* If the program cannot be found, the functions return -1 and set


@ -588,7 +588,7 @@ static int do_recursive_merge(struct repository *r,
struct merge_options o;
struct tree *next_tree, *base_tree, *head_tree;
int clean;
char **xopt;
int i;
struct lock_file index_lock = LOCK_INIT;
if (repo_hold_locked_index(r, &index_lock, LOCK_REPORT_ON_ERROR) < 0)
@ -608,8 +608,8 @@ static int do_recursive_merge(struct repository *r,
next_tree = next ? get_commit_tree(next) : empty_tree(r);
base_tree = base ? get_commit_tree(base) : empty_tree(r);
for (xopt = opts->xopts; xopt != opts->xopts + opts->xopts_nr; xopt++)
parse_merge_opt(&o, *xopt);
for (i = 0; i < opts->xopts_nr; i++)
parse_merge_opt(&o, opts->xopts[i]);
clean = merge_trees(&o,
head_tree,
@ -2118,6 +2118,8 @@ static int parse_insn_line(struct repository *r, struct todo_item *item,
saved = *end_of_object_name;
*end_of_object_name = '\0';
status = get_oid(bol, &commit_oid);
if (status < 0)
error(_("could not parse '%s'"), bol); /* return later */
*end_of_object_name = saved;
bol = end_of_object_name + strspn(end_of_object_name, " \t");
@ -2125,11 +2127,10 @@ static int parse_insn_line(struct repository *r, struct todo_item *item,
item->arg_len = (int)(eol - bol);
if (status < 0)
return error(_("could not parse '%.*s'"),
(int)(end_of_object_name - bol), bol);
return status;
item->commit = lookup_commit_reference(r, &commit_oid);
return !item->commit;
return item->commit ? 0 : -1;
}
int sequencer_get_last_command(struct repository *r, enum replay_action *action)
@ -5075,7 +5076,7 @@ int complete_action(struct repository *r, struct replay_opts *opts, unsigned fla
{
const char *shortonto, *todo_file = rebase_path_todo();
struct todo_list new_todo = TODO_LIST_INIT;
struct strbuf *buf = &todo_list->buf;
struct strbuf *buf = &todo_list->buf, buf2 = STRBUF_INIT;
struct object_id oid = onto->object.oid;
int res;
@ -5127,6 +5128,15 @@ int complete_action(struct repository *r, struct replay_opts *opts, unsigned fla
return -1;
}
/* Expand the commit IDs */
todo_list_to_strbuf(r, &new_todo, &buf2, -1, 0);
strbuf_swap(&new_todo.buf, &buf2);
strbuf_release(&buf2);
new_todo.total_nr -= new_todo.nr;
if (todo_list_parse_insn_buffer(r, new_todo.buf.buf, &new_todo) < 0)
BUG("invalid todo list after expanding IDs:\n%s",
new_todo.buf.buf);
if (opts->allow_ff && skip_unnecessary_picks(r, &new_todo, &oid)) {
todo_list_release(&new_todo);
return error(_("could not skip unnecessary pick commands"));


@ -120,8 +120,13 @@ char *prefix_path_gently(const char *prefix, int len,
char *prefix_path(const char *prefix, int len, const char *path)
{
char *r = prefix_path_gently(prefix, len, NULL, path);
if (!r)
die(_("'%s' is outside repository"), path);
if (!r) {
const char *hint_path = get_git_work_tree();
if (!hint_path)
hint_path = get_git_dir();
die(_("'%s' is outside repository at '%s'"), path,
absolute_path(hint_path));
}
return r;
}


@ -82,7 +82,7 @@ int is_staging_gitmodules_ok(struct index_state *istate)
if ((pos >= 0) && (pos < istate->cache_nr)) {
struct stat st;
if (lstat(GITMODULES_FILE, &st) == 0 &&
ie_match_stat(istate, istate->cache[pos], &st, 0) & DATA_CHANGED)
ie_modified(istate, istate->cache[pos], &st, 0) & DATA_CHANGED)
return 0;
}
@ -1811,7 +1811,7 @@ out:
void submodule_unset_core_worktree(const struct submodule *sub)
{
char *config_path = xstrfmt("%s/modules/%s/config",
get_git_common_dir(), sub->name);
get_git_dir(), sub->name);
if (git_config_set_in_file_gently(config_path, "core.worktree", NULL))
warning(_("Could not unset core.worktree setting in submodule '%s'"),
@ -1914,7 +1914,7 @@ int submodule_move_head(const char *path,
ABSORB_GITDIR_RECURSE_SUBMODULES);
} else {
char *gitdir = xstrfmt("%s/modules/%s",
get_git_common_dir(), sub->name);
get_git_dir(), sub->name);
connect_work_tree_and_git_dir(path, gitdir, 0);
free(gitdir);
@ -1924,7 +1924,7 @@ int submodule_move_head(const char *path,
if (old_head && (flags & SUBMODULE_MOVE_HEAD_FORCE)) {
char *gitdir = xstrfmt("%s/modules/%s",
get_git_common_dir(), sub->name);
get_git_dir(), sub->name);
connect_work_tree_and_git_dir(path, gitdir, 1);
free(gitdir);
}


@ -267,6 +267,13 @@ int has_dirs_only_path(const char *name, int len, int prefix_len)
*/
static int threaded_has_dirs_only_path(struct cache_def *cache, const char *name, int len, int prefix_len)
{
/*
* Note: this function is used by the checkout machinery, which also
* takes care to properly reset the cache when it performs an operation
* that would leave the cache outdated. If this function starts caching
* anything else besides FL_DIR, remember to also invalidate the cache
* when creating or deleting paths that might be in the cache.
*/
return lstat_cache(cache, name, len,
FL_DIR|FL_FULLPATH, prefix_len) &
FL_DIR;
@ -321,3 +328,20 @@ void remove_scheduled_dirs(void)
{
do_remove_scheduled_dirs(0);
}
void invalidate_lstat_cache(void)
{
reset_lstat_cache(&default_cache);
}
#undef rmdir
int lstat_cache_aware_rmdir(const char *path)
{
/* Any change in this function must be made also in `mingw_rmdir()` */
int ret = rmdir(path);
if (!ret)
invalidate_lstat_cache();
return ret;
}
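
To illustrate the hazard this wrapper closes, a hypothetical sequence (paths made up; on Windows the same reset happens in mingw_rmdir()):

	/* checkout machinery caches "a/b" as a real directory */
	has_dirs_only_path("a/b", strlen("a/b"), 0);

	/* without the wrapper, removing it would leave that entry stale */
	rmdir("a/b");

	/*
	 * A stale FL_DIR entry could let a later check trust a symlink an
	 * attacker swapped in at "a/b"; resetting the cache on successful
	 * removal avoids that.
	 */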


@ -19,7 +19,7 @@ check() {
false
fi &&
test_cmp expect-stdout stdout &&
test_cmp expect-stderr stderr
test_i18ncmp expect-stderr stderr
}
read_chunk() {


@ -132,7 +132,7 @@ prepare_httpd() {
install_script broken-smart-http.sh
install_script error-smart-http.sh
install_script error.sh
install_script apply-one-time-sed.sh
install_script apply-one-time-perl.sh
ln -s "$LIB_HTTPD_MODULE_PATH" "$HTTPD_ROOT_PATH/modules"


@ -113,7 +113,7 @@ Alias /auth/dumb/ www/auth/dumb/
SetEnv GIT_EXEC_PATH ${GIT_EXEC_PATH}
SetEnv GIT_HTTP_EXPORT_ALL
</LocationMatch>
<LocationMatch /one_time_sed/>
<LocationMatch /one_time_perl/>
SetEnv GIT_EXEC_PATH ${GIT_EXEC_PATH}
SetEnv GIT_HTTP_EXPORT_ALL
</LocationMatch>
@ -122,7 +122,7 @@ ScriptAliasMatch /smart_*[^/]*/(.*) ${GIT_EXEC_PATH}/git-http-backend/$1
ScriptAlias /broken_smart/ broken-smart-http.sh/
ScriptAlias /error_smart/ error-smart-http.sh/
ScriptAlias /error/ error.sh/
ScriptAliasMatch /one_time_sed/(.*) apply-one-time-sed.sh/$1
ScriptAliasMatch /one_time_perl/(.*) apply-one-time-perl.sh/$1
<Directory ${GIT_EXEC_PATH}>
Options FollowSymlinks
</Directory>
@ -135,7 +135,7 @@ ScriptAliasMatch /one_time_sed/(.*) apply-one-time-sed.sh/$1
<Files error.sh>
Options ExecCGI
</Files>
<Files apply-one-time-sed.sh>
<Files apply-one-time-perl.sh>
Options ExecCGI
</Files>
<Files ${GIT_EXEC_PATH}/git-http-backend>


@ -0,0 +1,27 @@
#!/bin/sh
# If "one-time-perl" exists in $HTTPD_ROOT_PATH, run perl on the HTTP response,
# using the contents of "one-time-perl" as the perl command to be run. If the
# response was modified as a result, delete "one-time-perl" so that subsequent
# HTTP responses are no longer modified.
#
# This can be used to simulate the effects of the repository changing in
# between HTTP request-response pairs.
if test -f one-time-perl
then
LC_ALL=C
export LC_ALL
"$GIT_EXEC_PATH/git-http-backend" >out
perl -pe "$(cat one-time-perl)" out >out_modified
if cmp -s out out_modified
then
cat out
else
cat out_modified
rm one-time-perl
fi
else
"$GIT_EXEC_PATH/git-http-backend"
fi


@ -1,24 +0,0 @@
#!/bin/sh
# If "one-time-sed" exists in $HTTPD_ROOT_PATH, run sed on the HTTP response,
# using the contents of "one-time-sed" as the sed command to be run. If the
# response was modified as a result, delete "one-time-sed" so that subsequent
# HTTP responses are no longer modified.
#
# This can be used to simulate the effects of the repository changing in
# between HTTP request-response pairs.
if test -f one-time-sed
then
"$GIT_EXEC_PATH/git-http-backend" >out
sed "$(cat one-time-sed)" out >out_modified
if cmp -s out out_modified
then
cat out
else
cat out_modified
rm one-time-sed
fi
else
"$GIT_EXEC_PATH/git-http-backend"
fi


@ -424,9 +424,24 @@ test_expect_success 'local ignore inside a sub-directory with --verbose' '
)
'
test_expect_success_multi 'nested include' \
'a/b/.gitignore:8:!on* a/b/one' '
test_check_ignore "a/b/one"
test_expect_success 'nested include of negated pattern' '
expect "" &&
test_check_ignore "a/b/one" 1
'
test_expect_success 'nested include of negated pattern with -q' '
expect "" &&
test_check_ignore "-q a/b/one" 1
'
test_expect_success 'nested include of negated pattern with -v' '
expect "a/b/.gitignore:8:!on* a/b/one" &&
test_check_ignore "-v a/b/one" 0
'
test_expect_success 'nested include of negated pattern with -v -n' '
expect "a/b/.gitignore:8:!on* a/b/one" &&
test_check_ignore "-v -n a/b/one" 0
'
############################################################################
@ -460,7 +475,6 @@ test_expect_success 'cd to ignored sub-directory' '
expect_from_stdin <<-\EOF &&
foo
twoooo
../one
seven
../../one
EOF
@ -543,7 +557,6 @@ test_expect_success 'global ignore' '
globalthree
a/globalthree
a/per-repo
globaltwo
EOF
test_check_ignore "globalone per-repo globalthree a/globalthree a/per-repo not-ignored globaltwo"
'
@ -586,17 +599,7 @@ EOF
cat <<-\EOF >expected-default
one
a/one
a/b/on
a/b/one
a/b/one one
a/b/one two
"a/b/one\"three"
a/b/two
a/b/twooo
globaltwo
a/globaltwo
a/b/globaltwo
b/globaltwo
EOF
cat <<-EOF >expected-verbose
.gitignore:1:one one
@ -696,8 +699,12 @@ cat <<-EOF >expected-all
$global_excludes:2:!globaltwo ../b/globaltwo
:: c/not-ignored
EOF
cat <<-EOF >expected-default
../one
one
b/twooo
EOF
grep -v '^:: ' expected-all >expected-verbose
sed -e 's/.* //' expected-verbose >expected-default
broken_c_unquote stdin >stdin0


@ -820,4 +820,85 @@ test_expect_success PERL 'invalid file in delayed checkout' '
grep "error: external filter .* signaled that .unfiltered. is now available although it has not been delayed earlier" git-stderr.log
'
for mode in 'case' 'utf-8'
do
case "$mode" in
case) dir='A' symlink='a' mode_prereq='CASE_INSENSITIVE_FS' ;;
utf-8)
dir=$(printf "\141\314\210") symlink=$(printf "\303\244")
mode_prereq='UTF8_NFD_TO_NFC' ;;
esac
test_expect_success PERL,SYMLINKS,$mode_prereq \
"delayed checkout with $mode-collision don't write to the wrong place" '
test_config_global filter.delay.process \
"\"$TEST_ROOT/rot13-filter.pl\" --always-delay delayed.log clean smudge delay" &&
test_config_global filter.delay.required true &&
git init $mode-collision &&
(
cd $mode-collision &&
mkdir target-dir &&
empty_oid=$(printf "" | git hash-object -w --stdin) &&
symlink_oid=$(printf "%s" "$PWD/target-dir" | git hash-object -w --stdin) &&
attr_oid=$(echo "$dir/z filter=delay" | git hash-object -w --stdin) &&
cat >objs <<-EOF &&
100644 blob $empty_oid $dir/x
100644 blob $empty_oid $dir/y
100644 blob $empty_oid $dir/z
120000 blob $symlink_oid $symlink
100644 blob $attr_oid .gitattributes
EOF
git update-index --index-info <objs &&
git commit -m "test commit"
) &&
git clone $mode-collision $mode-collision-cloned &&
# Make sure z was really delayed
grep "IN: smudge $dir/z .* \\[DELAYED\\]" $mode-collision-cloned/delayed.log &&
# Should not create $dir/z at $symlink/z
test_path_is_missing $mode-collision/target-dir/z
'
done
test_expect_success PERL,SYMLINKS,CASE_INSENSITIVE_FS \
"delayed checkout with submodule collision don't write to the wrong place" '
git init collision-with-submodule &&
(
cd collision-with-submodule &&
git config filter.delay.process "\"$TEST_ROOT/rot13-filter.pl\" --always-delay delayed.log clean smudge delay" &&
git config filter.delay.required true &&
# We need Git to treat the submodule "a" and the
# leading dir "A" as different paths in the index.
git config --local core.ignoreCase false &&
empty_oid=$(printf "" | git hash-object -w --stdin) &&
attr_oid=$(echo "A/B/y filter=delay" | git hash-object -w --stdin) &&
cat >objs <<-EOF &&
100644 blob $empty_oid A/B/x
100644 blob $empty_oid A/B/y
100644 blob $attr_oid .gitattributes
EOF
git update-index --index-info <objs &&
git init a &&
mkdir target-dir &&
symlink_oid=$(printf "%s" "$PWD/target-dir" | git -C a hash-object -w --stdin) &&
echo "120000 blob $symlink_oid b" >objs &&
git -C a update-index --index-info <objs &&
git -C a commit -m sub &&
git submodule add ./a &&
git commit -m super &&
git checkout --recurse-submodules . &&
grep "IN: smudge A/B/y .* \\[DELAYED\\]" delayed.log &&
test_path_is_missing target-dir/y
)
'
test_done


@ -2,9 +2,15 @@
# Example implementation for the Git filter protocol version 2
# See Documentation/gitattributes.txt, section "Filter Protocol"
#
# The first argument defines a debug log file that the script write to.
# All remaining arguments define a list of supported protocol
# capabilities ("clean", "smudge", etc).
# Usage: rot13-filter.pl [--always-delay] <log path> <capabilities>
#
# Log path defines a debug log file that the script writes to. The
# subsequent arguments define a list of supported protocol capabilities
# ("clean", "smudge", etc).
#
# When --always-delay is given, all pathnames with the "can-delay" flag
# that don't appear on the list below are delayed with a count of 1
# (see more below).
#
# This implementation supports special test cases:
# (1) If data with the pathname "clean-write-fail.r" is processed with
@ -53,6 +59,13 @@ use IO::File;
use Git::Packet;
my $MAX_PACKET_CONTENT_SIZE = 65516;
my $always_delay = 0;
if ( $ARGV[0] eq '--always-delay' ) {
$always_delay = 1;
shift @ARGV;
}
my $log_file = shift @ARGV;
my @capabilities = @ARGV;
@ -134,6 +147,8 @@ while (1) {
if ( $buffer eq "can-delay=1" ) {
if ( exists $DELAY{$pathname} and $DELAY{$pathname}{"requested"} == 0 ) {
$DELAY{$pathname}{"requested"} = 1;
} elsif ( !exists $DELAY{$pathname} and $always_delay ) {
$DELAY{$pathname} = { "requested" => 1, "count" => 1 };
}
} else {
die "Unknown message '$buffer'";


@ -22,6 +22,11 @@ test_expect_success 'setup helper scripts' '
exit 0
EOF
write_script git-credential-quit <<-\EOF &&
. ./dump
echo quit=1
EOF
write_script git-credential-verbatim <<-\EOF &&
user=$1; shift
pass=$1; shift
@ -35,43 +40,71 @@ test_expect_success 'setup helper scripts' '
test_expect_success 'credential_fill invokes helper' '
check fill "verbatim foo bar" <<-\EOF
protocol=http
host=example.com
--
protocol=http
host=example.com
username=foo
password=bar
--
verbatim: get
verbatim: protocol=http
verbatim: host=example.com
EOF
'
test_expect_success 'credential_fill invokes multiple helpers' '
check fill useless "verbatim foo bar" <<-\EOF
protocol=http
host=example.com
--
protocol=http
host=example.com
username=foo
password=bar
--
useless: get
useless: protocol=http
useless: host=example.com
verbatim: get
verbatim: protocol=http
verbatim: host=example.com
EOF
'
test_expect_success 'credential_fill stops when we get a full response' '
check fill "verbatim one two" "verbatim three four" <<-\EOF
protocol=http
host=example.com
--
protocol=http
host=example.com
username=one
password=two
--
verbatim: get
verbatim: protocol=http
verbatim: host=example.com
EOF
'
test_expect_success 'credential_fill continues through partial response' '
check fill "verbatim one \"\"" "verbatim two three" <<-\EOF
protocol=http
host=example.com
--
protocol=http
host=example.com
username=two
password=three
--
verbatim: get
verbatim: protocol=http
verbatim: host=example.com
verbatim: get
verbatim: protocol=http
verbatim: host=example.com
verbatim: username=one
EOF
'
@ -97,14 +130,20 @@ test_expect_success 'credential_fill passes along metadata' '
test_expect_success 'credential_approve calls all helpers' '
check approve useless "verbatim one two" <<-\EOF
protocol=http
host=example.com
username=foo
password=bar
--
--
useless: store
useless: protocol=http
useless: host=example.com
useless: username=foo
useless: password=bar
verbatim: store
verbatim: protocol=http
verbatim: host=example.com
verbatim: username=foo
verbatim: password=bar
EOF
@ -112,6 +151,8 @@ test_expect_success 'credential_approve calls all helpers' '
test_expect_success 'do not bother storing password-less credential' '
check approve useless <<-\EOF
protocol=http
host=example.com
username=foo
--
--
@ -121,14 +162,20 @@ test_expect_success 'do not bother storing password-less credential' '
test_expect_success 'credential_reject calls all helpers' '
check reject useless "verbatim one two" <<-\EOF
protocol=http
host=example.com
username=foo
password=bar
--
--
useless: erase
useless: protocol=http
useless: host=example.com
useless: username=foo
useless: password=bar
verbatim: erase
verbatim: protocol=http
verbatim: host=example.com
verbatim: username=foo
verbatim: password=bar
EOF
@ -136,33 +183,49 @@ test_expect_success 'credential_reject calls all helpers' '
test_expect_success 'usernames can be preserved' '
check fill "verbatim \"\" three" <<-\EOF
protocol=http
host=example.com
username=one
--
protocol=http
host=example.com
username=one
password=three
--
verbatim: get
verbatim: protocol=http
verbatim: host=example.com
verbatim: username=one
EOF
'
test_expect_success 'usernames can be overridden' '
check fill "verbatim two three" <<-\EOF
protocol=http
host=example.com
username=one
--
protocol=http
host=example.com
username=two
password=three
--
verbatim: get
verbatim: protocol=http
verbatim: host=example.com
verbatim: username=one
EOF
'
test_expect_success 'do not bother completing already-full credential' '
check fill "verbatim three four" <<-\EOF
protocol=http
host=example.com
username=one
password=two
--
protocol=http
host=example.com
username=one
password=two
--
@ -174,23 +237,31 @@ test_expect_success 'do not bother completing already-full credential' '
# askpass helper is run, we know the internal getpass is working.
test_expect_success 'empty helper list falls back to internal getpass' '
check fill <<-\EOF
protocol=http
host=example.com
--
protocol=http
host=example.com
username=askpass-username
password=askpass-password
--
askpass: Username:
askpass: Password:
askpass: Username for '\''http://example.com'\'':
askpass: Password for '\''http://askpass-username@example.com'\'':
EOF
'
test_expect_success 'internal getpass does not ask for known username' '
check fill <<-\EOF
protocol=http
host=example.com
username=foo
--
protocol=http
host=example.com
username=foo
password=askpass-password
--
askpass: Password:
askpass: Password for '\''http://foo@example.com'\'':
EOF
'
@ -202,7 +273,11 @@ HELPER="!f() {
test_expect_success 'respect configured credentials' '
test_config credential.helper "$HELPER" &&
check fill <<-\EOF
protocol=http
host=example.com
--
protocol=http
host=example.com
username=foo
password=bar
--
@ -291,21 +366,85 @@ test_expect_success 'http paths can be part of context' '
test_expect_success 'helpers can abort the process' '
test_must_fail git \
-c credential.helper="!f() { echo quit=1; }; f" \
-c credential.helper=quit \
-c credential.helper="verbatim foo bar" \
credential fill >stdout &&
test_must_be_empty stdout
credential fill >stdout 2>stderr <<-\EOF &&
protocol=http
host=example.com
EOF
test_must_be_empty stdout &&
cat >expect <<-\EOF &&
quit: get
quit: protocol=http
quit: host=example.com
fatal: credential helper '\''quit'\'' told us to quit
EOF
test_i18ncmp expect stderr
'
test_expect_success 'empty helper spec resets helper list' '
test_config credential.helper "verbatim file file" &&
check fill "" "verbatim cmdline cmdline" <<-\EOF
protocol=http
host=example.com
--
protocol=http
host=example.com
username=cmdline
password=cmdline
--
verbatim: get
verbatim: protocol=http
verbatim: host=example.com
EOF
'
test_expect_success 'url parser rejects embedded newlines' '
test_must_fail git credential fill 2>stderr <<-\EOF &&
url=https://one.example.com?%0ahost=two.example.com/
EOF
cat >expect <<-\EOF &&
warning: url contains a newline in its host component: https://one.example.com?%0ahost=two.example.com/
fatal: credential url cannot be parsed: https://one.example.com?%0ahost=two.example.com/
EOF
test_i18ncmp expect stderr
'
test_expect_success 'host-less URLs are parsed as empty host' '
check fill "verbatim foo bar" <<-\EOF
url=cert:///path/to/cert.pem
--
protocol=cert
host=
path=path/to/cert.pem
username=foo
password=bar
--
verbatim: get
verbatim: protocol=cert
verbatim: host=
verbatim: path=path/to/cert.pem
EOF
'
test_expect_success 'credential system refuses to work with missing host' '
test_must_fail git credential fill 2>stderr <<-\EOF &&
protocol=http
EOF
cat >expect <<-\EOF &&
fatal: refusing to work with credential missing host field
EOF
test_i18ncmp expect stderr
'
test_expect_success 'credential system refuses to work with missing protocol' '
test_must_fail git credential fill 2>stderr <<-\EOF &&
host=example.com
EOF
cat >expect <<-\EOF &&
fatal: refusing to work with credential missing protocol field
EOF
test_i18ncmp expect stderr
'
test_done


@ -21,4 +21,50 @@ test_expect_success 'checkout-index -h in broken repository' '
test_i18ngrep "[Uu]sage" broken/usage
'
for mode in 'case' 'utf-8'
do
case "$mode" in
case) dir='A' symlink='a' mode_prereq='CASE_INSENSITIVE_FS' ;;
utf-8)
dir=$(printf "\141\314\210") symlink=$(printf "\303\244")
mode_prereq='UTF8_NFD_TO_NFC' ;;
esac
test_expect_success SYMLINKS,$mode_prereq \
"checkout-index with $mode-collision don't write to the wrong place" '
git init $mode-collision &&
(
cd $mode-collision &&
mkdir target-dir &&
empty_obj_hex=$(git hash-object -w --stdin </dev/null) &&
symlink_hex=$(printf "%s" "$PWD/target-dir" | git hash-object -w --stdin) &&
cat >objs <<-EOF &&
100644 blob ${empty_obj_hex} ${dir}/x
100644 blob ${empty_obj_hex} ${dir}/y
100644 blob ${empty_obj_hex} ${dir}/z
120000 blob ${symlink_hex} ${symlink}
EOF
git update-index --index-info <objs &&
# Note: the order is important here to exercise the
# case where the file at ${dir} has its type changed by
# the time Git tries to check out ${dir}/z.
#
# Also, we use core.precomposeUnicode=false because we
# want Git to treat the UTF-8 paths transparently on
# Mac OS, matching what is in the index.
#
git -c core.precomposeUnicode=false checkout-index -f \
${dir}/x ${dir}/y ${symlink} ${dir}/z &&
# Should not create ${dir}/z at ${symlink}/z
test_path_is_missing target-dir/z
)
'
done
test_done

t/t2405-worktree-submodule.sh Executable file

@ -0,0 +1,90 @@
#!/bin/sh
test_description='Combination of submodules and multiple worktrees'
. ./test-lib.sh
base_path=$(pwd -P)
test_expect_success 'setup: create origin repos' '
git init origin/sub &&
test_commit -C origin/sub file1 &&
git init origin/main &&
test_commit -C origin/main first &&
git -C origin/main submodule add ../sub &&
git -C origin/main commit -m "add sub" &&
test_commit -C origin/sub "file1 updated" file1 file1updated file1updated &&
git -C origin/main/sub pull &&
git -C origin/main add sub &&
git -C origin/main commit -m "sub updated"
'
test_expect_success 'setup: clone superproject to create main worktree' '
git clone --recursive "$base_path/origin/main" main
'
rev1_hash_main=$(git --git-dir=origin/main/.git show --pretty=format:%h -q "HEAD~1")
rev1_hash_sub=$(git --git-dir=origin/sub/.git show --pretty=format:%h -q "HEAD~1")
test_expect_success 'add superproject worktree' '
git -C main worktree add "$base_path/worktree" "$rev1_hash_main"
'
test_expect_failure 'submodule is checked out just after worktree add' '
git -C worktree diff --submodule master"^!" >out &&
grep "file1 updated" out
'
test_expect_success 'add superproject worktree and initialize submodules' '
git -C main worktree add "$base_path/worktree-submodule-update" "$rev1_hash_main" &&
git -C worktree-submodule-update submodule update
'
test_expect_success 'submodule is checked out just after submodule update in linked worktree' '
git -C worktree-submodule-update diff --submodule master"^!" >out &&
grep "file1 updated" out
'
test_expect_success 'add superproject worktree and manually add submodule worktree' '
git -C main worktree add "$base_path/linked_submodule" "$rev1_hash_main" &&
git -C main/sub worktree add "$base_path/linked_submodule/sub" "$rev1_hash_sub"
'
test_expect_success 'submodule is checked out after manually adding submodule worktree' '
git -C linked_submodule diff --submodule master"^!" >out &&
grep "file1 updated" out
'
test_expect_success 'checkout --recurse-submodules uses $GIT_DIR for submodules in a linked worktree' '
git -C main worktree add "$base_path/checkout-recurse" --detach &&
git -C checkout-recurse submodule update --init &&
echo "gitdir: ../../main/.git/worktrees/checkout-recurse/modules/sub" >expect-gitfile &&
cat checkout-recurse/sub/.git >actual-gitfile &&
test_cmp expect-gitfile actual-gitfile &&
git -C main/sub rev-parse HEAD >expect-head-main &&
git -C checkout-recurse checkout --recurse-submodules HEAD~1 &&
cat checkout-recurse/sub/.git >actual-gitfile &&
git -C main/sub rev-parse HEAD >actual-head-main &&
test_cmp expect-gitfile actual-gitfile &&
test_cmp expect-head-main actual-head-main
'
test_expect_success 'core.worktree is removed in $GIT_DIR/modules/<name>/config, not in $GIT_COMMON_DIR/modules/<name>/config' '
echo "../../../sub" >expect-main &&
git -C main/sub config --get core.worktree >actual-main &&
test_cmp expect-main actual-main &&
echo "../../../../../../checkout-recurse/sub" >expect-linked &&
git -C checkout-recurse/sub config --get core.worktree >actual-linked &&
test_cmp expect-linked actual-linked &&
git -C checkout-recurse checkout --recurse-submodules first &&
test_expect_code 1 git -C main/.git/worktrees/checkout-recurse/modules/sub config --get core.worktree >linked-config &&
test_must_be_empty linked-config &&
git -C main/sub config --get core.worktree >actual-main &&
test_cmp expect-main actual-main
'
test_expect_success 'unsetting core.worktree does not prevent running commands directly against the submodule repository' '
git -C main/.git/worktrees/checkout-recurse/modules/sub log
'
test_done


@ -4,6 +4,38 @@ test_description='Test that adding/removing many notes triggers automatic fanout
. ./test-lib.sh
path_has_fanout() {
path=$1 &&
fanout=$2 &&
after_last_slash=$((40 - $fanout * 2)) &&
echo $path | grep -q "^\([0-9a-f]\{2\}/\)\{$fanout\}[0-9a-f]\{$after_last_slash\}$"
}
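
For example, with fanout=1 the pattern matches paths of the form xx/<38 hex digits> (after_last_slash = 40 - 2), and with fanout=2 it matches xx/yy/<36 hex digits>.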
touched_one_note_with_fanout() {
notes_commit=$1 &&
modification=$2 && # 'A' for addition, 'D' for deletion
fanout=$3 &&
diff=$(git diff-tree --no-commit-id --name-status --root -r $notes_commit) &&
path=$(echo $diff | sed -e "s/^$modification[\t ]//") &&
path_has_fanout "$path" $fanout;
}
all_notes_have_fanout() {
notes_commit=$1 &&
fanout=$2 &&
git ls-tree -r --name-only $notes_commit 2>/dev/null |
while read path
do
path_has_fanout $path $fanout || return 1
done
}
test_expect_success 'tweak test environment' '
git checkout -b nondeterminism &&
test_commit A &&
git checkout --orphan with_notes;
'
test_expect_success 'creating many notes with git-notes' '
num_notes=300 &&
i=0 &&
@ -20,7 +52,7 @@ test_expect_success 'creating many notes with git-notes' '
test_expect_success 'many notes created correctly with git-notes' '
git log | grep "^ " > output &&
i=300 &&
i=$num_notes &&
while test $i -gt 0
do
echo " commit #$i" &&
@ -30,34 +62,46 @@ test_expect_success 'many notes created correctly with git-notes' '
test_cmp expect output
'
test_expect_success 'many notes created with git-notes triggers fanout' '
# Expect entire notes tree to have a fanout == 1
git ls-tree -r --name-only refs/notes/commits |
while read path
test_expect_success 'stable fanout 0 is followed by stable fanout 1' '
i=$num_notes &&
fanout=0 &&
while test $i -gt 0
do
echo $path | grep "^../[0-9a-f]*$" || {
echo "Invalid path \"$path\"" &&
return 1;
}
done
i=$(($i - 1)) &&
if touched_one_note_with_fanout refs/notes/commits~$i A $fanout
then
continue
elif test $fanout -eq 0
then
fanout=1 &&
if all_notes_have_fanout refs/notes/commits~$i $fanout
then
echo "Fanout 0 -> 1 at refs/notes/commits~$i" &&
continue
fi
fi &&
echo "Failed fanout=$fanout check at refs/notes/commits~$i" &&
git ls-tree -r --name-only refs/notes/commits~$i &&
return 1
done &&
all_notes_have_fanout refs/notes/commits 1
'
test_expect_success 'deleting most notes with git-notes' '
num_notes=250 &&
remove_notes=285 &&
i=0 &&
git rev-list HEAD |
while test $i -lt $num_notes && read sha1
while test $i -lt $remove_notes && read sha1
do
i=$(($i + 1)) &&
test_tick &&
git notes remove "$sha1" ||
exit 1
git notes remove "$sha1" 2>/dev/null || return 1
done
'
test_expect_success 'most notes deleted correctly with git-notes' '
git log HEAD~250 | grep "^ " > output &&
i=50 &&
git log HEAD~$remove_notes | grep "^ " > output &&
i=$(($num_notes - $remove_notes)) &&
while test $i -gt 0
do
echo " commit #$i" &&
@ -67,16 +111,29 @@ test_expect_success 'most notes deleted correctly with git-notes' '
test_cmp expect output
'
test_expect_success 'deleting most notes triggers fanout consolidation' '
# Expect entire notes tree to have a fanout == 0
git ls-tree -r --name-only refs/notes/commits |
while read path
test_expect_success 'stable fanout 1 is followed by stable fanout 0' '
i=$remove_notes &&
fanout=1 &&
while test $i -gt 0
do
echo $path | grep -v "^../.*" || {
echo "Invalid path \"$path\"" &&
return 1;
}
done
i=$(($i - 1)) &&
if touched_one_note_with_fanout refs/notes/commits~$i D $fanout
then
continue
elif test $fanout -eq 1
then
fanout=0 &&
if all_notes_have_fanout refs/notes/commits~$i $fanout
then
echo "Fanout 1 -> 0 at refs/notes/commits~$i" &&
continue
fi
fi &&
echo "Failed fanout=$fanout check at refs/notes/commits~$i" &&
git ls-tree -r --name-only refs/notes/commits~$i &&
return 1
done &&
all_notes_have_fanout refs/notes/commits 0
'
test_done


@ -1264,13 +1264,26 @@ test_expect_success SHA1 'short SHA-1 setup' '
test_expect_success SHA1 'short SHA-1 collide' '
test_when_finished "reset_rebase && git checkout master" &&
git checkout collide &&
colliding_sha1=6bcda37 &&
test $colliding_sha1 = "$(git rev-parse HEAD | cut -c 1-7)" &&
(
unset test_tick &&
test_tick &&
set_fake_editor &&
FAKE_COMMIT_MESSAGE="collide2 ac4f2ee" \
FAKE_LINES="reword 1 2" git rebase -i HEAD~2
)
FAKE_LINES="reword 1 break 2" git rebase -i HEAD~2 &&
test $colliding_sha1 = "$(git rev-parse HEAD | cut -c 1-7)" &&
grep "^pick $colliding_sha1 " \
.git/rebase-merge/git-rebase-todo.tmp &&
grep "^pick [0-9a-f]\{40\}" \
.git/rebase-merge/git-rebase-todo &&
grep "^pick [0-9a-f]\{40\}" \
.git/rebase-merge/git-rebase-todo.backup &&
git rebase --continue
) &&
collide2="$(git rev-parse HEAD~1 | cut -c 1-4)" &&
collide3="$(git rev-parse collide3 | cut -c 1-4)" &&
test "$collide2" = "$collide3"
'
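# Reading of the added assertions: "break" pauses the rebase right after
# the reword creates a new commit whose abbreviated name matches
# $colliding_sha1; the intermediate git-rebase-todo.tmp may still carry
# the short name, but the todo list actually executed (and its backup)
# must spell out full 40-hex object names, so the remaining pick cannot
# bind to the wrong commit.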
test_expect_success 'respect core.abbrev' '


@@ -0,0 +1,48 @@
#!/bin/sh
test_description='git rebase across mode change'
. ./test-lib.sh
test_expect_success 'setup' '
mkdir DS &&
>DS/whatever &&
git add DS &&
git commit -m base &&
git branch side1 &&
git branch side2 &&
git checkout side1 &&
git rm -rf DS &&
test_ln_s_add unrelated DS &&
git commit -m side1 &&
git checkout side2 &&
>unrelated &&
git add unrelated &&
git commit -m commit1 &&
echo >>unrelated &&
git commit -am commit2
'
test_expect_success 'rebase changes with the apply backend' '
test_when_finished "git rebase --abort || true" &&
git checkout -b apply-backend side2 &&
git rebase side1
'
test_expect_success 'rebase changes with the merge backend' '
test_when_finished "git rebase --abort || true" &&
git checkout -b merge-backend side2 &&
git rebase -m side1
'
test_expect_success 'rebase changes with the merge backend with a delay' '
test_when_finished "git rebase --abort || true" &&
git checkout -b merge-delay-backend side2 &&
git rebase -m --exec "sleep 1" side1
'
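# The --exec "sleep 1" variant adds a delay between the picks,
# presumably so the second step sees distinct timestamps rather than
# same-second stat data for files written by the first (the test itself
# only encodes the delay).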
test_done


@@ -425,6 +425,13 @@ test_expect_success 'rm will error out on a modified .gitmodules file unless sta
git status -s -uno >actual &&
test_cmp expect actual
'
test_expect_success 'rm will not error out on .gitmodules file with zero stat data' '
git reset --hard &&
git submodule update &&
git read-tree HEAD &&
git rm submod &&
test_path_is_missing submod
'
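# "git read-tree HEAD" leaves index entries with zeroed stat data; the
# point here is that such entries alone must not make "git rm" treat an
# unmodified .gitmodules as locally modified.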
test_expect_success 'rm issues a warning when section is not found in .gitmodules' '
git reset --hard &&


@@ -68,6 +68,15 @@ test_expect_success 'revert works (initial)' '
! grep . output
'
test_expect_success 'add untracked (multiple)' '
test_when_finished "git reset && rm [1-9]" &&
touch $(test_seq 9) &&
test_write_lines a "2-5 8-" | git add -i -- [1-9] &&
test_write_lines 2 3 4 5 8 9 >expected &&
git ls-files [1-9] >output &&
test_cmp expected output
'
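# Selection syntax exercised above: "a" picks the "add untracked"
# command, then "2-5 8-" chooses items 2 through 5 plus the open-ended
# range from item 8 onward, so exactly files 2 3 4 5 8 9 get staged.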
test_expect_success 'setup (commit)' '
echo baseline >file &&
git add file &&


@@ -1631,6 +1631,26 @@ test_expect_success GPG 'log --graph --show-signature for merged tag' '
grep "^| | gpg: Good signature" actual
'
test_expect_success GPG 'log --graph --show-signature for merged tag in shallow clone' '
test_when_finished "git reset --hard && git checkout master" &&
git checkout -b plain-shallow master &&
echo aaa >bar &&
git add bar &&
git commit -m bar_commit &&
git checkout --detach master &&
echo bbb >baz &&
git add baz &&
git commit -m baz_commit &&
git tag -s -m signed_tag_msg signed_tag_shallow &&
hash=$(git rev-parse HEAD) &&
git checkout plain-shallow &&
git merge --no-ff -m msg signed_tag_shallow &&
git clone --depth 1 --no-local . shallow &&
test_when_finished "rm -rf shallow" &&
git -C shallow log --graph --show-signature -n1 plain-shallow >actual &&
grep "tag signed_tag_shallow names a non-parent $hash" actual
'
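# In the depth-1 clone the commit that signed_tag_shallow points at lies
# outside the truncated history, so the merged tag names a non-parent
# there; log --show-signature must report that cleanly rather than
# misbehave on the missing object.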
test_expect_success GPGSM 'log --graph --show-signature for merged tag x509' '
test_when_finished "git reset --hard && git checkout master" &&
test_config gpg.format x509 &&


@@ -62,13 +62,13 @@ test_expect_success 'index-pack detects REF_DELTA cycles' '
test_must_fail git index-pack --fix-thin --stdin <cycle.pack
'
test_expect_failure 'failover to an object in another pack' '
test_expect_success 'failover to an object in another pack' '
clear_packs &&
git index-pack --stdin <ab.pack &&
git index-pack --stdin --fix-thin <cycle.pack
test_must_fail git index-pack --stdin --fix-thin <cycle.pack
'
test_expect_failure 'failover to a duplicate object in the same pack' '
test_expect_success 'failover to a duplicate object in the same pack' '
clear_packs &&
{
pack_header 3 &&
@@ -77,7 +77,7 @@ test_expect_failure 'failover to a duplicate object in the same pack' '
pack_obj $A
} >recoverable.pack &&
pack_trailer recoverable.pack &&
git index-pack --fix-thin --stdin <recoverable.pack
test_must_fail git index-pack --fix-thin --stdin <recoverable.pack
'
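# Both expectations flipped from test_expect_failure to
# test_expect_success with test_must_fail: instead of quietly "failing
# over" to an identically named object found elsewhere, index-pack now
# refuses packs whose REF_DELTA resolution would need such a fallback.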
test_done


@@ -233,7 +233,7 @@ test_expect_success 'shallow fetches check connectivity before writing shallow f
git -C "$REPO" config protocol.version 2 &&
git -C client config protocol.version 2 &&
git -C client fetch --depth=2 "$HTTPD_URL/one_time_sed/repo" master:a_branch &&
git -C client fetch --depth=2 "$HTTPD_URL/one_time_perl/repo" master:a_branch &&
# Craft a situation in which the server sends back an unshallow request
# with an empty packfile. This is done by refetching with a shorter
@@ -242,13 +242,13 @@ test_expect_success 'shallow fetches check connectivity before writing shallow f
printf "s/0034shallow %s/0036unshallow %s/" \
"$(git -C "$REPO" rev-parse HEAD)" \
"$(git -C "$REPO" rev-parse HEAD^)" \
>"$HTTPD_ROOT_PATH/one-time-sed" &&
>"$HTTPD_ROOT_PATH/one-time-perl" &&
test_must_fail env GIT_TEST_SIDEBAND_ALL=0 git -C client \
fetch --depth=1 "$HTTPD_URL/one_time_sed/repo" \
fetch --depth=1 "$HTTPD_URL/one_time_perl/repo" \
master:a_branch &&
# Ensure that the one-time-sed script was used.
! test -e "$HTTPD_ROOT_PATH/one-time-sed" &&
# Ensure that the one-time-perl script was used.
! test -e "$HTTPD_ROOT_PATH/one-time-perl" &&
# Ensure that the resulting repo is consistent, despite our failure to
# fetch.


@@ -321,11 +321,17 @@ test_expect_success 'git client does not send an empty Accept-Language' '
'
test_expect_success 'remote-http complains cleanly about malformed urls' '
# do not actually issue "list" or other commands, as we do not
# want to rely on what curl would actually do with such a broken
# URL. This is just about making sure we do not segfault during
# initialization.
test_must_fail git remote-http http::/example.com/repo.git
test_must_fail git remote-http http::/example.com/repo.git 2>stderr &&
test_i18ngrep "url has no scheme" stderr
'
# NEEDSWORK: Writing commands to git-remote-curl can race against the latter
# erroring out, producing SIGPIPE. Remove "ok=sigpipe" once transport-helper has
# learned to handle early remote helper failures more cleanly.
test_expect_success 'remote-http complains cleanly about empty scheme' '
test_must_fail ok=sigpipe git ls-remote \
http::${HTTPD_URL#http}/dumb/repo.git 2>stderr &&
test_i18ngrep "url has no scheme" stderr
'
test_expect_success 'redirects can be forbidden/allowed' '


@@ -40,11 +40,23 @@ test_expect_success clone '
git clone "file://$UNCPATH" clone
'
test_expect_success 'clone without file://' '
git clone "$UNCPATH" clone-without-file
'
test_expect_success 'clone with backslashed path' '
BACKSLASHED="$(echo "$UNCPATH" | tr / \\\\)" &&
git clone "$BACKSLASHED" backslashed
'
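# tr turns //server/share/... into \\server\share\...; both spellings
# name the same UNC path on Windows, and clone must accept either form.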
test_expect_success fetch '
git init to-fetch &&
(
cd to-fetch &&
git fetch "$UNCPATH" master
)
'
test_expect_success push '
(
cd clone &&


@@ -384,6 +384,37 @@ test_expect_success 'fetch lazy-fetches only to resolve deltas, protocol v2' '
grep "want $(cat hash)" trace
'
# The following two tests must be in this order, or else
# the first will not fail. It is important that the srv.bare
# repository did not have tags during clone, but has tags
# in the fetch.
test_expect_failure 'verify fetch succeeds when asking for new tags' '
git clone --filter=blob:none "file://$(pwd)/srv.bare" tag-test &&
for i in I J K
do
test_commit -C src $i &&
git -C src branch $i || return 1
done &&
git -C srv.bare fetch --tags origin +refs/heads/*:refs/heads/* &&
git -C tag-test -c protocol.version=2 fetch --tags origin
'
test_expect_success 'verify fetch downloads only one pack when updating refs' '
git clone --filter=blob:none "file://$(pwd)/srv.bare" pack-test &&
ls pack-test/.git/objects/pack/*pack >pack-list &&
test_line_count = 2 pack-list &&
for i in A B C
do
test_commit -C src $i &&
git -C src branch $i || return 1
done &&
git -C srv.bare fetch origin +refs/heads/*:refs/heads/* &&
git -C pack-test fetch origin &&
ls pack-test/.git/objects/pack/*pack >pack-list &&
test_line_count = 3 pack-list
'
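# The pack counting asserts batching: the partial clone leaves two packs
# behind, and fetching several new branches at once must add exactly one
# more pack, not one pack per updated ref.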
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
@@ -398,14 +429,18 @@ intersperse () {
sed 's/\(..\)/'$1'\1/g'
}
# Create a one-time-sed command to replace the existing packfile with $1.
# Create a one-time-perl command to replace the existing packfile with $1.
replace_packfile () {
# The protocol requires that the packfile be sent in sideband 1, hence
# the extra \x01 byte at the beginning.
printf "1,/packfile/!c %04x\\\\x01%s0000" \
"$(($(wc -c <$1) + 5))" \
"$(hex_unpack <$1 | intersperse '\\x')" \
>"$HTTPD_ROOT_PATH/one-time-sed"
cp $1 "$HTTPD_ROOT_PATH/one-time-pack" &&
echo 'if (/packfile/) {
print;
my $length = -s "one-time-pack";
printf "%04x\x01", $length + 5;
print `cat one-time-pack` . "0000";
last
}' >"$HTTPD_ROOT_PATH/one-time-perl"
}
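# pkt-line framing used above, for reference: the 4 hex digits count the
# whole packet including themselves, so a $len-byte payload on sideband
# #1 is sent as printf "%04x" $(($len + 5)), then the \x01 sideband
# byte, then the payload; "0000" is the flush-pkt closing the section.
# E.g. a 10-byte pack is framed as "000f", \x01, the 10 bytes, "0000".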
test_expect_success 'upon cloning, check that all refs point to objects' '
@@ -429,16 +464,16 @@ test_expect_success 'upon cloning, check that all refs point to objects' '
# \x01 byte at the beginning.
replace_packfile incomplete.pack &&
# Use protocol v2 because the sed command looks for the "packfile"
# Use protocol v2 because the perl command looks for the "packfile"
# section header.
test_config -C "$SERVER" protocol.version 2 &&
test_must_fail git -c protocol.version=2 clone \
--filter=blob:none $HTTPD_URL/one_time_sed/server repo 2>err &&
--filter=blob:none $HTTPD_URL/one_time_perl/server repo 2>err &&
test_i18ngrep "did not send all necessary objects" err &&
# Ensure that the one-time-sed script was used.
! test -e "$HTTPD_ROOT_PATH/one-time-sed"
# Ensure that the one-time-perl script was used.
! test -e "$HTTPD_ROOT_PATH/one-time-perl"
'
test_expect_success 'when partial cloning, tolerate server not sending target of tag' '
@@ -469,17 +504,17 @@ test_expect_success 'when partial cloning, tolerate server not sending target of
# \x01 byte at the beginning.
replace_packfile incomplete.pack &&
# Use protocol v2 because the sed command looks for the "packfile"
# Use protocol v2 because the perl command looks for the "packfile"
# section header.
test_config -C "$SERVER" protocol.version 2 &&
# Exercise to make sure it works.
git -c protocol.version=2 clone \
--filter=blob:none $HTTPD_URL/one_time_sed/server repo 2> err &&
--filter=blob:none $HTTPD_URL/one_time_perl/server repo 2> err &&
! grep "missing object referenced by" err &&
# Ensure that the one-time-sed script was used.
! test -e "$HTTPD_ROOT_PATH/one-time-sed"
# Ensure that the one-time-perl script was used.
! test -e "$HTTPD_ROOT_PATH/one-time-perl"
'
test_expect_success 'tolerate server sending REF_DELTA against missing promisor objects' '
@@ -502,7 +537,7 @@ test_expect_success 'tolerate server sending REF_DELTA against missing promisor
# Clone. The client has deltabase_have but not deltabase_missing.
git -c protocol.version=2 clone --no-checkout \
--filter=blob:none $HTTPD_URL/one_time_sed/server repo &&
--filter=blob:none $HTTPD_URL/one_time_perl/server repo &&
git -C repo hash-object -w -- "$SERVER/have.txt" &&
# Sanity check to ensure that the client does not have
@@ -543,7 +578,7 @@ test_expect_success 'tolerate server sending REF_DELTA against missing promisor
replace_packfile thin.pack &&
# Use protocol v2 because the sed command looks for the "packfile"
# Use protocol v2 because the perl command looks for the "packfile"
# section header.
test_config -C "$SERVER" protocol.version 2 &&
@@ -556,8 +591,8 @@ test_expect_success 'tolerate server sending REF_DELTA against missing promisor
grep "want $(cat deltabase_missing)" trace &&
! grep "want $(cat deltabase_have)" trace &&
# Ensure that the one-time-sed script was used.
! test -e "$HTTPD_ROOT_PATH/one-time-sed"
# Ensure that the one-time-perl script was used.
! test -e "$HTTPD_ROOT_PATH/one-time-perl"
'
# DO NOT add non-httpd-specific tests here, because the last part of this


@@ -712,11 +712,11 @@ test_expect_success 'when server sends "ready", expect DELIM' '
# After "ready" in the acknowledgments section, pretend that a FLUSH
# (0000) was sent instead of a DELIM (0001).
printf "/ready/,$ s/0001/0000/" \
>"$HTTPD_ROOT_PATH/one-time-sed" &&
printf "\$ready = 1 if /ready/; \$ready && s/0001/0000/" \
>"$HTTPD_ROOT_PATH/one-time-perl" &&
test_must_fail git -C http_child -c protocol.version=2 \
fetch "$HTTPD_URL/one_time_sed/http_parent" 2> err &&
fetch "$HTTPD_URL/one_time_perl/http_parent" 2> err &&
test_i18ngrep "expected packfile to be sent after .ready." err
'
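# Protocol v2 background for these edits: "0000" is a flush-pkt (end of
# response) and "0001" a delim-pkt (section separator); after a "ready"
# acknowledgment the server must follow with a delim-pkt and a packfile
# section, which is precisely the framing the one-time-perl script
# corrupts on purpose.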
@@ -737,12 +737,12 @@ test_expect_success 'when server does not send "ready", expect FLUSH' '
# After the acknowledgments section, pretend that a DELIM
# (0001) was sent instead of a FLUSH (0000).
printf "/acknowledgments/,$ s/0000/0001/" \
>"$HTTPD_ROOT_PATH/one-time-sed" &&
printf "\$ack = 1 if /acknowledgments/; \$ack && s/0000/0001/" \
>"$HTTPD_ROOT_PATH/one-time-perl" &&
test_must_fail env GIT_TRACE_PACKET="$(pwd)/log" git -C http_child \
-c protocol.version=2 \
fetch "$HTTPD_URL/one_time_sed/http_parent" 2> err &&
fetch "$HTTPD_URL/one_time_perl/http_parent" 2> err &&
grep "fetch< .*acknowledgments" log &&
! grep "fetch< .*ready" log &&
test_i18ngrep "expected no other sections to be sent after no .ready." err


@@ -313,7 +313,7 @@ test_expect_success 'setup repos for change-while-negotiating test' '
test_commit m3 &&
git tag -d m2 m3
) &&
git -C "$LOCAL_PRISTINE" remote set-url origin "http://127.0.0.1:$LIB_HTTPD_PORT/one_time_sed/repo" &&
git -C "$LOCAL_PRISTINE" remote set-url origin "http://127.0.0.1:$LIB_HTTPD_PORT/one_time_perl/repo" &&
git -C "$LOCAL_PRISTINE" config protocol.version 2
'
@@ -326,7 +326,7 @@ inconsistency () {
# RPCs during a single negotiation.
oid1=$(git -C "$REPO" rev-parse $1) &&
oid2=$(git -C "$REPO" rev-parse $2) &&
echo "s/$oid1/$oid2/" >"$HTTPD_ROOT_PATH/one-time-sed"
echo "s/$oid1/$oid2/" >"$HTTPD_ROOT_PATH/one-time-perl"
}
test_expect_success 'server is initially ahead - no ref in want' '
@@ -378,7 +378,7 @@ test_expect_success 'server loses a ref - ref in want' '
git -C "$REPO" config uploadpack.allowRefInWant true &&
rm -rf local &&
cp -r "$LOCAL_PRISTINE" local &&
echo "s/master/raster/" >"$HTTPD_ROOT_PATH/one-time-sed" &&
echo "s/master/raster/" >"$HTTPD_ROOT_PATH/one-time-perl" &&
test_must_fail git -C local fetch 2>err &&
test_i18ngrep "fatal: remote error: unknown ref refs/heads/raster" err

t/t6136-pathspec-in-bare.sh (new executable file)

@@ -0,0 +1,38 @@
#!/bin/sh
test_description='diagnosing out-of-scope pathspec'
. ./test-lib.sh
test_expect_success 'setup a bare and non-bare repository' '
test_commit file1 &&
git clone --bare . bare
'
test_expect_success 'log and ls-files in a bare repository' '
(
cd bare &&
test_must_fail git log -- .. >out 2>err &&
test_must_be_empty out &&
test_i18ngrep "outside repository" err &&
test_must_fail git ls-files -- .. >out 2>err &&
test_must_be_empty out &&
test_i18ngrep "outside repository" err
)
'
test_expect_success 'log and ls-files in .git directory' '
(
cd .git &&
test_must_fail git log -- .. >out 2>err &&
test_must_be_empty out &&
test_i18ngrep "outside repository" err &&
test_must_fail git ls-files -- .. >out 2>err &&
test_must_be_empty out &&
test_i18ngrep "outside repository" err
)
'
test_done


@@ -1,77 +0,0 @@
#!/bin/sh
test_description='Combination of submodules and multiple workdirs'
. ./test-lib.sh
base_path=$(pwd -P)
test_expect_success 'setup: make origin' '
mkdir -p origin/sub &&
(
cd origin/sub && git init &&
echo file1 >file1 &&
git add file1 &&
git commit -m file1
) &&
mkdir -p origin/main &&
(
cd origin/main && git init &&
git submodule add ../sub &&
git commit -m "add sub"
) &&
(
cd origin/sub &&
echo file1updated >file1 &&
git add file1 &&
git commit -m "file1 updated"
) &&
git -C origin/main/sub pull &&
(
cd origin/main &&
git add sub &&
git commit -m "sub updated"
)
'
test_expect_success 'setup: clone' '
mkdir clone &&
git -C clone clone --recursive "$base_path/origin/main"
'
rev1_hash_main=$(git --git-dir=origin/main/.git show --pretty=format:%h -q "HEAD~1")
rev1_hash_sub=$(git --git-dir=origin/sub/.git show --pretty=format:%h -q "HEAD~1")
test_expect_success 'checkout main' '
mkdir default_checkout &&
git -C clone/main worktree add "$base_path/default_checkout/main" "$rev1_hash_main"
'
test_expect_failure 'can see submodule diffs just after checkout' '
git -C default_checkout/main diff --submodule master"^!" >out &&
grep "file1 updated" out
'
test_expect_success 'checkout main and initialize independent clones' '
mkdir fully_cloned_submodule &&
git -C clone/main worktree add "$base_path/fully_cloned_submodule/main" "$rev1_hash_main" &&
git -C fully_cloned_submodule/main submodule update
'
test_expect_success 'can see submodule diffs after independent cloning' '
git -C fully_cloned_submodule/main diff --submodule master"^!" >out &&
grep "file1 updated" out
'
test_expect_success 'checkout sub manually' '
mkdir linked_submodule &&
git -C clone/main worktree add "$base_path/linked_submodule/main" "$rev1_hash_main" &&
git -C clone/main/sub worktree add "$base_path/linked_submodule/main/sub" "$rev1_hash_sub"
'
test_expect_success 'can see submodule diffs after manual checkout of linked submodule' '
git -C linked_submodule/main diff --submodule master"^!" >out &&
grep "file1 updated" out
'
test_done


@@ -1,6 +1,6 @@
#!/bin/sh
test_description='check handling of .gitmodule url with dash'
test_description='check handling of disallowed .gitmodule urls'
. ./test-lib.sh
test_expect_success 'create submodule with protected dash in url' '
@@ -60,4 +60,145 @@ test_expect_success 'trailing backslash is handled correctly' '
test_i18ngrep ! "unknown option" err
'
test_expect_success 'fsck rejects missing URL scheme' '
git checkout --orphan missing-scheme &&
cat >.gitmodules <<-\EOF &&
[submodule "foo"]
url = http::one.example.com/foo.git
EOF
git add .gitmodules &&
test_tick &&
git commit -m "gitmodules with missing URL scheme" &&
test_when_finished "rm -rf dst" &&
git init --bare dst &&
git -C dst config transfer.fsckObjects true &&
test_must_fail git push dst HEAD 2>err &&
grep gitmodulesUrl err
'
test_expect_success 'fsck rejects relative URL resolving to missing scheme' '
git checkout --orphan relative-missing-scheme &&
cat >.gitmodules <<-\EOF &&
[submodule "foo"]
url = "..\\../.\\../:one.example.com/foo.git"
EOF
git add .gitmodules &&
test_tick &&
git commit -m "gitmodules with relative URL that strips off scheme" &&
test_when_finished "rm -rf dst" &&
git init --bare dst &&
git -C dst config transfer.fsckObjects true &&
test_must_fail git push dst HEAD 2>err &&
grep gitmodulesUrl err
'
test_expect_success 'fsck rejects empty URL scheme' '
git checkout --orphan empty-scheme &&
cat >.gitmodules <<-\EOF &&
[submodule "foo"]
url = http::://one.example.com/foo.git
EOF
git add .gitmodules &&
test_tick &&
git commit -m "gitmodules with empty URL scheme" &&
test_when_finished "rm -rf dst" &&
git init --bare dst &&
git -C dst config transfer.fsckObjects true &&
test_must_fail git push dst HEAD 2>err &&
grep gitmodulesUrl err
'
test_expect_success 'fsck rejects relative URL resolving to empty scheme' '
git checkout --orphan relative-empty-scheme &&
cat >.gitmodules <<-\EOF &&
[submodule "foo"]
url = ../../../:://one.example.com/foo.git
EOF
git add .gitmodules &&
test_tick &&
git commit -m "relative gitmodules URL resolving to empty scheme" &&
test_when_finished "rm -rf dst" &&
git init --bare dst &&
git -C dst config transfer.fsckObjects true &&
test_must_fail git push dst HEAD 2>err &&
grep gitmodulesUrl err
'
test_expect_success 'fsck rejects empty hostname' '
git checkout --orphan empty-host &&
cat >.gitmodules <<-\EOF &&
[submodule "foo"]
url = http:///one.example.com/foo.git
EOF
git add .gitmodules &&
test_tick &&
git commit -m "gitmodules with extra slashes" &&
test_when_finished "rm -rf dst" &&
git init --bare dst &&
git -C dst config transfer.fsckObjects true &&
test_must_fail git push dst HEAD 2>err &&
grep gitmodulesUrl err
'
test_expect_success 'fsck rejects relative url that produced empty hostname' '
git checkout --orphan messy-relative &&
cat >.gitmodules <<-\EOF &&
[submodule "foo"]
url = ../../..//one.example.com/foo.git
EOF
git add .gitmodules &&
test_tick &&
git commit -m "gitmodules abusing relative_path" &&
test_when_finished "rm -rf dst" &&
git init --bare dst &&
git -C dst config transfer.fsckObjects true &&
test_must_fail git push dst HEAD 2>err &&
grep gitmodulesUrl err
'
test_expect_success 'fsck permits embedded newline with unrecognized scheme' '
git checkout --orphan newscheme &&
cat >.gitmodules <<-\EOF &&
[submodule "foo"]
url = "data://acjbkd%0akajfdickajkd"
EOF
git add .gitmodules &&
git commit -m "gitmodules with unrecognized scheme" &&
test_when_finished "rm -rf dst" &&
git init --bare dst &&
git -C dst config transfer.fsckObjects true &&
git push dst HEAD
'
test_expect_success 'fsck rejects embedded newline in url' '
# create an orphan branch to avoid existing .gitmodules objects
git checkout --orphan newline &&
cat >.gitmodules <<-\EOF &&
[submodule "foo"]
url = "https://one.example.com?%0ahost=two.example.com/foo.git"
EOF
git add .gitmodules &&
git commit -m "gitmodules with newline" &&
test_when_finished "rm -rf dst" &&
git init --bare dst &&
git -C dst config transfer.fsckObjects true &&
test_must_fail git push dst HEAD 2>err &&
grep gitmodulesUrl err
'
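# Sketch of the attack this guards against: when such a URL reaches a
# credential helper, the %0a newline splits the value into two lines,
# and "host=two.example.com" would be read as a separate credential
# attribute -- coaxing the helper into answering for the wrong host.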
test_expect_success 'fsck rejects embedded newline in relative url' '
git checkout --orphan relative-newline &&
cat >.gitmodules <<-\EOF &&
[submodule "foo"]
url = "./%0ahost=two.example.com/foo.git"
EOF
git add .gitmodules &&
git commit -m "relative url with newline" &&
test_when_finished "rm -rf dst" &&
git init --bare dst &&
git -C dst config transfer.fsckObjects true &&
test_must_fail git push dst HEAD 2>err &&
grep gitmodulesUrl err
'
test_done


@@ -1083,7 +1083,8 @@ finalize_junit_xml () {
# adjust the overall time
junit_time=$(test-tool date getnanos $junit_suite_start)
sed "s/<testsuite [^>]*/& time=\"$junit_time\"/" \
sed -e "s/\(<testsuite.*\) time=\"[^\"]*\"/\1/" \
-e "s/<testsuite [^>]*/& time=\"$junit_time\"/" \
<"$junit_xml_path" >"$junit_xml_path.new"
mv "$junit_xml_path.new" "$junit_xml_path"


@@ -59,7 +59,7 @@ static const struct interval zero_width[] = {
{ 0x0B3F, 0x0B3F },
{ 0x0B41, 0x0B44 },
{ 0x0B4D, 0x0B4D },
{ 0x0B56, 0x0B56 },
{ 0x0B55, 0x0B56 },
{ 0x0B62, 0x0B63 },
{ 0x0B82, 0x0B82 },
{ 0x0BC0, 0x0BC0 },
@@ -82,6 +82,7 @@ static const struct interval zero_width[] = {
{ 0x0D41, 0x0D44 },
{ 0x0D4D, 0x0D4D },
{ 0x0D62, 0x0D63 },
{ 0x0D81, 0x0D81 },
{ 0x0DCA, 0x0DCA },
{ 0x0DD2, 0x0DD4 },
{ 0x0DD6, 0x0DD6 },
@@ -139,7 +140,7 @@ static const struct interval zero_width[] = {
{ 0x1A65, 0x1A6C },
{ 0x1A73, 0x1A7C },
{ 0x1A7F, 0x1A7F },
{ 0x1AB0, 0x1ABE },
{ 0x1AB0, 0x1AC0 },
{ 0x1B00, 0x1B03 },
{ 0x1B34, 0x1B34 },
{ 0x1B36, 0x1B3A },
@@ -182,6 +183,7 @@ static const struct interval zero_width[] = {
{ 0xA806, 0xA806 },
{ 0xA80B, 0xA80B },
{ 0xA825, 0xA826 },
{ 0xA82C, 0xA82C },
{ 0xA8C4, 0xA8C5 },
{ 0xA8E0, 0xA8F1 },
{ 0xA8FF, 0xA8FF },
@@ -223,6 +225,7 @@ static const struct interval zero_width[] = {
{ 0x10A3F, 0x10A3F },
{ 0x10AE5, 0x10AE6 },
{ 0x10D24, 0x10D27 },
{ 0x10EAB, 0x10EAC },
{ 0x10F46, 0x10F50 },
{ 0x11001, 0x11001 },
{ 0x11038, 0x11046 },
@@ -238,6 +241,7 @@ static const struct interval zero_width[] = {
{ 0x11180, 0x11181 },
{ 0x111B6, 0x111BE },
{ 0x111C9, 0x111CC },
{ 0x111CF, 0x111CF },
{ 0x1122F, 0x11231 },
{ 0x11234, 0x11234 },
{ 0x11236, 0x11237 },
@@ -273,6 +277,9 @@ static const struct interval zero_width[] = {
{ 0x11727, 0x1172B },
{ 0x1182F, 0x11837 },
{ 0x11839, 0x1183A },
{ 0x1193B, 0x1193C },
{ 0x1193E, 0x1193E },
{ 0x11943, 0x11943 },
{ 0x119D4, 0x119D7 },
{ 0x119DA, 0x119DB },
{ 0x119E0, 0x119E0 },
@@ -305,6 +312,7 @@ static const struct interval zero_width[] = {
{ 0x16B30, 0x16B36 },
{ 0x16F4F, 0x16F4F },
{ 0x16F8F, 0x16F92 },
{ 0x16FE4, 0x16FE4 },
{ 0x1BC9D, 0x1BC9E },
{ 0x1BCA0, 0x1BCA3 },
{ 0x1D167, 0x1D169 },
@@ -376,8 +384,7 @@ static const struct interval double_width[] = {
{ 0x3099, 0x30FF },
{ 0x3105, 0x312F },
{ 0x3131, 0x318E },
{ 0x3190, 0x31BA },
{ 0x31C0, 0x31E3 },
{ 0x3190, 0x31E3 },
{ 0x31F0, 0x321E },
{ 0x3220, 0x3247 },
{ 0x3250, 0x4DBF },
@@ -392,9 +399,11 @@ static const struct interval double_width[] = {
{ 0xFE68, 0xFE6B },
{ 0xFF01, 0xFF60 },
{ 0xFFE0, 0xFFE6 },
{ 0x16FE0, 0x16FE3 },
{ 0x16FE0, 0x16FE4 },
{ 0x16FF0, 0x16FF1 },
{ 0x17000, 0x187F7 },
{ 0x18800, 0x18AF2 },
{ 0x18800, 0x18CD5 },
{ 0x18D00, 0x18D08 },
{ 0x1B000, 0x1B11E },
{ 0x1B150, 0x1B152 },
{ 0x1B164, 0x1B167 },
@@ -429,20 +438,22 @@ static const struct interval double_width[] = {
{ 0x1F680, 0x1F6C5 },
{ 0x1F6CC, 0x1F6CC },
{ 0x1F6D0, 0x1F6D2 },
{ 0x1F6D5, 0x1F6D5 },
{ 0x1F6D5, 0x1F6D7 },
{ 0x1F6EB, 0x1F6EC },
{ 0x1F6F4, 0x1F6FA },
{ 0x1F6F4, 0x1F6FC },
{ 0x1F7E0, 0x1F7EB },
{ 0x1F90D, 0x1F971 },
{ 0x1F973, 0x1F976 },
{ 0x1F97A, 0x1F9A2 },
{ 0x1F9A5, 0x1F9AA },
{ 0x1F9AE, 0x1F9CA },
{ 0x1F90C, 0x1F93A },
{ 0x1F93C, 0x1F945 },
{ 0x1F947, 0x1F978 },
{ 0x1F97A, 0x1F9CB },
{ 0x1F9CD, 0x1F9FF },
{ 0x1FA70, 0x1FA73 },
{ 0x1FA70, 0x1FA74 },
{ 0x1FA78, 0x1FA7A },
{ 0x1FA80, 0x1FA82 },
{ 0x1FA90, 0x1FA95 },
{ 0x1FA80, 0x1FA86 },
{ 0x1FA90, 0x1FAA8 },
{ 0x1FAB0, 0x1FAB6 },
{ 0x1FAC0, 0x1FAC2 },
{ 0x1FAD0, 0x1FAD6 },
{ 0x20000, 0x2FFFD },
{ 0x30000, 0x3FFFD }
};
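/*
 * The interval edits above (new entries such as U+0D81 and
 * U+1FAB0..U+1FAB6, widened ranges such as U+18800..U+18CD5) appear to
 * track the Unicode 13.0 character database from which these
 * machine-generated tables are refreshed.
 */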


@@ -372,15 +372,23 @@ static int check_updates(struct unpack_trees_options *o)
state.refresh_cache = 1;
state.istate = index;
if (!o->update || o->dry_run) {
remove_marked_cache_entries(index, 0);
trace_performance_leave("check_updates");
return 0;
}
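/*
 * With this early return, the no-update/dry-run case never reaches the
 * code below, which is why the repeated "o->update && !o->dry_run"
 * guards further down could be dropped.
 */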
if (o->clone)
setup_collided_checkout_detection(&state, index);
progress = get_progress(o);
if (o->update)
git_attr_set_direction(GIT_ATTR_CHECKOUT);
/* Start with clean cache to avoid using any possibly outdated info. */
invalidate_lstat_cache();
if (should_update_submodules() && o->update && !o->dry_run)
git_attr_set_direction(GIT_ATTR_CHECKOUT);
if (should_update_submodules())
load_gitmodules_file(index, NULL);
for (i = 0; i < index->cache_nr; i++) {
@@ -388,18 +396,18 @@ static int check_updates(struct unpack_trees_options *o)
if (ce->ce_flags & CE_WT_REMOVE) {
display_progress(progress, ++cnt);
if (o->update && !o->dry_run)
unlink_entry(ce);
unlink_entry(ce);
}
}
remove_marked_cache_entries(index, 0);
remove_scheduled_dirs();
if (should_update_submodules() && o->update && !o->dry_run)
if (should_update_submodules())
load_gitmodules_file(index, &state);
enable_delayed_checkout(&state);
if (has_promisor_remote() && o->update && !o->dry_run) {
if (has_promisor_remote()) {
/*
* Prefetch the objects that are to be checked out in the loop
* below.
@@ -431,15 +439,12 @@ static int check_updates(struct unpack_trees_options *o)
ce->name);
display_progress(progress, ++cnt);
ce->ce_flags &= ~CE_UPDATE;
if (o->update && !o->dry_run) {
errs |= checkout_entry(ce, &state, NULL, NULL);
}
errs |= checkout_entry(ce, &state, NULL, NULL);
}
}
stop_progress(&progress);
errs |= finish_delayed_checkout(&state, NULL);
if (o->update)
git_attr_set_direction(GIT_ATTR_CHECKIN);
git_attr_set_direction(GIT_ATTR_CHECKIN);
if (o->clone)
report_collided_checkout(index);
@@ -1350,7 +1355,7 @@ static int clear_ce_flags_1(struct index_state *istate,
enum pattern_match_result default_match,
int progress_nr)
{
struct cache_entry **cache_end = cache + nr;
struct cache_entry **cache_end = nr ? cache + nr : cache;
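/*
 * The nr check matters when the index is empty: "cache" may then be
 * NULL, and even "cache + 0" is undefined pointer arithmetic that
 * sanitizers flag.
 */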
/*
* Process all entries that have the given prefix and meet


@@ -84,8 +84,8 @@ static void trim_common_tail(mmfile_t *a, mmfile_t *b)
{
const int blk = 1024;
long trimmed = 0, recovered = 0;
char *ap = a->ptr + a->size;
char *bp = b->ptr + b->size;
char *ap = a->size ? a->ptr + a->size : a->ptr;
char *bp = b->size ? b->ptr + b->size : b->ptr;
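/* Same pattern as the unpack-trees fix: avoid forming ptr + size when
 * size is 0 and ptr may be NULL, since pointer arithmetic is only
 * defined within (or one past the end of) a real object.
 */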
long smaller = (a->size < b->size) ? a->size : b->size;
while (blk + trimmed <= smaller && !memcmp(ap - blk, bp - blk, blk)) {
@@ -250,9 +250,13 @@ void xdiff_set_find_func(xdemitconf_t *xecfg, const char *value, int cflags)
ALLOC_ARRAY(regs->array, regs->nr);
for (i = 0; i < regs->nr; i++) {
struct ff_reg *reg = regs->array + i;
const char *ep = strchr(value, '\n'), *expression;
const char *ep, *expression;
char *buffer = NULL;
if (!value)
BUG("mismatch between line count and parsing");
ep = strchr(value, '\n');
reg->negate = (*value == '!');
if (reg->negate && i == regs->nr - 1)
die("Last expression must not be negated: %s", value);
@@ -265,7 +269,7 @@ void xdiff_set_find_func(xdemitconf_t *xecfg, const char *value, int cflags)
if (regcomp(&reg->re, expression, cflags))
die("Invalid regexp to look for hunk header: %s", expression);
free(buffer);
value = ep + 1;
value = ep ? ep + 1 : NULL;
}
}