* maint-2.36: (30 commits)
Git 2.36.6
Git 2.35.8
Git 2.34.8
Git 2.33.8
Git 2.32.7
Git 2.31.8
tests: avoid using `test_i18ncmp`
Git 2.30.9
gettext: avoid using gettext if the locale dir is not present
apply --reject: overwrite existing `.rej` symlink if it exists
http.c: clear the 'finished' member once we are done with it
clone.c: avoid "exceeds maximum object size" error with GCC v12.x
range-diff: use ssize_t for parsed "len" in read_patches()
range-diff: handle unterminated lines in read_patches()
range-diff: drop useless "offset" variable from read_patches()
t5604: GETTEXT_POISON fix, conclusion
t5604: GETTEXT_POISON fix, part 1
t5619: GETTEXT_POISON fix
t0003: GETTEXT_POISON fix, conclusion
t0003: GETTEXT_POISON fix, part 1
t0033: GETTEXT_POISON fix
...
* maint-2.35: (29 commits)
Git 2.35.8
Git 2.34.8
Git 2.33.8
Git 2.32.7
Git 2.31.8
tests: avoid using `test_i18ncmp`
Git 2.30.9
gettext: avoid using gettext if the locale dir is not present
apply --reject: overwrite existing `.rej` symlink if it exists
http.c: clear the 'finished' member once we are done with it
clone.c: avoid "exceeds maximum object size" error with GCC v12.x
range-diff: use ssize_t for parsed "len" in read_patches()
range-diff: handle unterminated lines in read_patches()
range-diff: drop useless "offset" variable from read_patches()
t5604: GETTEXT_POISON fix, conclusion
t5604: GETTEXT_POISON fix, part 1
t5619: GETTEXT_POISON fix
t0003: GETTEXT_POISON fix, conclusion
t0003: GETTEXT_POISON fix, part 1
t0033: GETTEXT_POISON fix
http: support CURLOPT_PROTOCOLS_STR
...
* maint-2.34: (28 commits)
Git 2.34.8
Git 2.33.8
Git 2.32.7
Git 2.31.8
tests: avoid using `test_i18ncmp`
Git 2.30.9
gettext: avoid using gettext if the locale dir is not present
apply --reject: overwrite existing `.rej` symlink if it exists
http.c: clear the 'finished' member once we are done with it
clone.c: avoid "exceeds maximum object size" error with GCC v12.x
range-diff: use ssize_t for parsed "len" in read_patches()
range-diff: handle unterminated lines in read_patches()
range-diff: drop useless "offset" variable from read_patches()
t5604: GETTEXT_POISON fix, conclusion
t5604: GETTEXT_POISON fix, part 1
t5619: GETTEXT_POISON fix
t0003: GETTEXT_POISON fix, conclusion
t0003: GETTEXT_POISON fix, part 1
t0033: GETTEXT_POISON fix
http: support CURLOPT_PROTOCOLS_STR
http: prefer CURLOPT_SEEKFUNCTION to CURLOPT_IOCTLFUNCTION
...
* maint-2.33: (27 commits)
Git 2.33.8
Git 2.32.7
Git 2.31.8
tests: avoid using `test_i18ncmp`
Git 2.30.9
gettext: avoid using gettext if the locale dir is not present
apply --reject: overwrite existing `.rej` symlink if it exists
http.c: clear the 'finished' member once we are done with it
clone.c: avoid "exceeds maximum object size" error with GCC v12.x
range-diff: use ssize_t for parsed "len" in read_patches()
range-diff: handle unterminated lines in read_patches()
range-diff: drop useless "offset" variable from read_patches()
t5604: GETTEXT_POISON fix, conclusion
t5604: GETTEXT_POISON fix, part 1
t5619: GETTEXT_POISON fix
t0003: GETTEXT_POISON fix, conclusion
t0003: GETTEXT_POISON fix, part 1
t0033: GETTEXT_POISON fix
http: support CURLOPT_PROTOCOLS_STR
http: prefer CURLOPT_SEEKFUNCTION to CURLOPT_IOCTLFUNCTION
http-push: prefer CURLOPT_UPLOAD to CURLOPT_PUT
...
* maint-2.32: (26 commits)
Git 2.32.7
Git 2.31.8
tests: avoid using `test_i18ncmp`
Git 2.30.9
gettext: avoid using gettext if the locale dir is not present
apply --reject: overwrite existing `.rej` symlink if it exists
http.c: clear the 'finished' member once we are done with it
clone.c: avoid "exceeds maximum object size" error with GCC v12.x
range-diff: use ssize_t for parsed "len" in read_patches()
range-diff: handle unterminated lines in read_patches()
range-diff: drop useless "offset" variable from read_patches()
t5604: GETTEXT_POISON fix, conclusion
t5604: GETTEXT_POISON fix, part 1
t5619: GETTEXT_POISON fix
t0003: GETTEXT_POISON fix, conclusion
t0003: GETTEXT_POISON fix, part 1
t0033: GETTEXT_POISON fix
http: support CURLOPT_PROTOCOLS_STR
http: prefer CURLOPT_SEEKFUNCTION to CURLOPT_IOCTLFUNCTION
http-push: prefer CURLOPT_UPLOAD to CURLOPT_PUT
ci: install python on ubuntu
...
* maint-2.31: (25 commits)
Git 2.31.8
tests: avoid using `test_i18ncmp`
Git 2.30.9
gettext: avoid using gettext if the locale dir is not present
apply --reject: overwrite existing `.rej` symlink if it exists
http.c: clear the 'finished' member once we are done with it
clone.c: avoid "exceeds maximum object size" error with GCC v12.x
range-diff: use ssize_t for parsed "len" in read_patches()
range-diff: handle unterminated lines in read_patches()
range-diff: drop useless "offset" variable from read_patches()
t5604: GETTEXT_POISON fix, conclusion
t5604: GETTEXT_POISON fix, part 1
t5619: GETTEXT_POISON fix
t0003: GETTEXT_POISON fix, conclusion
t0003: GETTEXT_POISON fix, part 1
t0033: GETTEXT_POISON fix
http: support CURLOPT_PROTOCOLS_STR
http: prefer CURLOPT_SEEKFUNCTION to CURLOPT_IOCTLFUNCTION
http-push: prefer CURLOPT_UPLOAD to CURLOPT_PUT
ci: install python on ubuntu
ci: use the same version of p4 on both Linux and macOS
...
Since `test_i18ncmp` was deprecated in v2.31.*, the instances added in
v2.30.9 needed to be converted to `test_cmp` calls.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
* maint-2.30: (23 commits)
Git 2.30.9
gettext: avoid using gettext if the locale dir is not present
apply --reject: overwrite existing `.rej` symlink if it exists
http.c: clear the 'finished' member once we are done with it
clone.c: avoid "exceeds maximum object size" error with GCC v12.x
range-diff: use ssize_t for parsed "len" in read_patches()
range-diff: handle unterminated lines in read_patches()
range-diff: drop useless "offset" variable from read_patches()
t5604: GETTEXT_POISON fix, conclusion
t5604: GETTEXT_POISON fix, part 1
t5619: GETTEXT_POISON fix
t0003: GETTEXT_POISON fix, conclusion
t0003: GETTEXT_POISON fix, part 1
t0033: GETTEXT_POISON fix
http: support CURLOPT_PROTOCOLS_STR
http: prefer CURLOPT_SEEKFUNCTION to CURLOPT_IOCTLFUNCTION
http-push: prefer CURLOPT_UPLOAD to CURLOPT_PUT
ci: install python on ubuntu
ci: use the same version of p4 on both Linux and macOS
ci: remove the pipe after "p4 -V" to catch errors
github-actions: run gcc-8 on ubuntu-20.04 image
...
Avoids issues with renaming or deleting sections with long lines, where
configuration values may be interpreted as sections, leading to
configuration injection. Addresses CVE-2023-29007.
* tb/config-copy-or-rename-in-file-injection:
config.c: disallow overly-long lines in `copy_or_rename_section_in_file()`
config.c: avoid integer truncation in `copy_or_rename_section_in_file()`
config: avoid fixed-sized buffer when renaming/deleting a section
t1300: demonstrate failure when renaming sections with long lines
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Avoids the overhead of calling `gettext` when initialization of the
translated messages was skipped. Addresses CVE-2023-25815.
* avoid-using-uninitialized-gettext: (1 commit)
gettext: avoid using gettext if the locale dir is not present
Address CVE-2023-25652 by deleting any existing `.rej` symbolic links
instead of following them.
* js/apply-overwrite-rej-symlink-if-exists:
apply --reject: overwrite existing `.rej` symlink if it exists
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
As a defense-in-depth measure to guard against any potentially-unknown
buffer overflows in `copy_or_rename_section_in_file()`, refuse to work
with overly-long lines in a gitconfig.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
There are a couple of spots within `copy_or_rename_section_in_file()`
that incorrectly use an `int` to track an offset within a string, which
may truncate or wrap around to a negative value.
Historically it was impossible to have a line longer than 1024 bytes
anyway, since we used fgets() with a fixed-size buffer of exactly that
length. But the recent change to use a strbuf permits us to read lines
of arbitrary length, so it's possible for a malicious input to cause us
to overflow past INT_MAX and do an out-of-bounds array read.
Practically speaking, however, this should never happen, since it
requires 2GB section names or values, which are unrealistic in
non-malicious circumstances.
Co-authored-by: Jeff King <peff@peff.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
When renaming (or deleting) a section of configuration, Git uses the
function `git_config_copy_or_rename_section_in_file()` to rewrite the
configuration file after applying the rename or deletion to the given
section.
To do this, Git repeatedly calls `fgets()` to read the existing
configuration data into a fixed size buffer.
When the configuration value under `old_name` exceeds the size of the
buffer, we will call `fgets()` an additional time even if there is no
newline in the configuration file, since our read length is capped at
`sizeof(buf)`.
If the first character of the buffer (after zero or more characters
satisfying `isspace()`) is a '[', Git will incorrectly treat it as
beginning a new section when the original section is being removed. In
other words, a configuration value satisfying this criterion can
incorrectly be considered as a new section instead of a variable in the
original section.
Avoid this issue by using a variable-width buffer in the form of a
strbuf rather than a fixed-width region on the stack. A couple of small
points worth noting:
- Using a strbuf will cause us to allocate arbitrary sizes to match
the length of each line. In practice, we don't expect any
reasonable configuration files to have lines that long, and a
bandaid will be introduced in a later patch to ensure that this is
the case.
- We are using strbuf_getwholeline() here instead of strbuf_getline()
in order to match `fgets()`'s behavior of leaving the trailing LF
character on the buffer (as well as a trailing NUL).
This could be changed later, but using strbuf_getwholeline() changes
the least about this function's implementation, so it is picked as
the safest path.
- It is tempting to want to replace the loop to skip over characters
matching isspace() at the beginning of the buffer with a convenience
function like `strbuf_ltrim()`. But this is the wrong approach for a
couple of reasons:
First, it involves a potentially large and expensive `memmove()`
which we would like to avoid. Second, and more importantly, we also
*do* want to preserve those spaces to avoid changing the output of
other sections.
In all, this patch is a minimal replacement of the fixed-width buffer in
`git_config_copy_or_rename_section_in_file()` to instead use a `struct
strbuf`.
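As a rough illustration of the shape of this conversion (not the actual
config.c patch), the following sketch reads lines of arbitrary length with
git's strbuf API; it assumes it is compiled inside git's source tree, and
`copy_config_lines()` is a made-up helper name:
```
/*
 * Illustrative sketch only: read configuration lines of arbitrary
 * length with git's strbuf API instead of fgets() into a fixed-size
 * buffer. Assumes git's in-tree headers are available.
 */
#include "git-compat-util.h"
#include "strbuf.h"

void copy_config_lines(FILE *config_file, FILE *out)
{
	struct strbuf line = STRBUF_INIT;

	/*
	 * Like fgets(), strbuf_getwholeline() leaves the trailing LF on
	 * the buffer and NUL-terminates it, but the buffer grows as needed.
	 */
	while (strbuf_getwholeline(&line, config_file, '\n') != EOF) {
		size_t i = 0;

		/* skip leading whitespace in place; no memmove() needed */
		while (i < line.len && isspace(line.buf[i]))
			i++;

		/*
		 * Because the whole (possibly very long) line is present,
		 * a '[' at line.buf[i] really is the start of a section
		 * header and not the tail of an overlong value.
		 */
		fwrite(line.buf, 1, line.len, out);
	}
	strbuf_release(&line);
}
```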
Reported-by: André Baptista <andre@ethiack.com>
Reported-by: Vítor Pinho <vitor@ethiack.com>
Helped-by: Patrick Steinhardt <ps@pks.im>
Co-authored-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
In cc5e1bf992 (gettext: avoid initialization if the locale dir is not
present, 2018-04-21) Git was taught to avoid a costly gettext start-up
when there are not even any localized messages to work with.
But we still called the `gettext()` and `ngettext()` functions, which
caused a problem in Git for Windows when the libgettext that is
consumed from the MSYS2 project stopped using a runtime prefix in
https://github.com/msys2/MINGW-packages/pull/10461
Due to that change, we now use an uninitialized gettext machinery that
might get auto-initialized _using an unintended locale directory_:
`C:\mingw64\share\locale`.
Let's record the fact when the gettext initialization was skipped, and
skip calling the gettext functions accordingly.
This addresses CVE-2023-25815.
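A minimal standalone sketch of that idea follows; the names are
illustrative and the real logic lives in git's gettext.c, which also has
Windows-specific details not shown here:
```
#include <libintl.h>
#include <locale.h>
#include <unistd.h>

/* Illustrative only: remember whether gettext was actually initialized. */
static int gettext_initialized;

void sketch_setup_gettext(const char *localedir)
{
	if (!localedir || access(localedir, R_OK))
		return;	/* no locale dir: skip the costly initialization */
	setlocale(LC_MESSAGES, "");
	bindtextdomain("git", localedir);
	textdomain("git");
	gettext_initialized = 1;
}

const char *sketch_gettext(const char *msgid)
{
	/* never consult an uninitialized gettext machinery */
	if (!gettext_initialized)
		return msgid;
	return gettext(msgid);
}
```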
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
When renaming a configuration section which has an entry whose length
exceeds the size of our buffer in config.c's implementation of
`git_config_copy_or_rename_section_in_file()`, Git will incorrectly
form a new configuration section with part of the data in the section
being removed.
In this instance, our first configuration file looks something like:
[b]
c = d <spaces> [a] e = f
[a]
g = h
Here, we have two configuration values, "b.c", and "a.g". The value "[a]
e = f" belongs to the configuration value "b.c", and does not form its
own section.
However, when renaming the section 'a' to 'xyz', Git will write back
"[xyz]\ne = f", but "[xyz]" is still attached to the value of "b.c",
which is why "e = f" on its own line becomes a new entry called "b.e".
A slightly different example embeds the section being renamed within
another section.
Demonstrate this failure in a test in t1300, which we will fix in the
following commit.
Co-authored-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
`git apply --reject` is expected to write out `.rej` files in case
one or more hunks fail to apply cleanly. Historically, the command
overwrites any existing `.rej` files; the idea is that
apply/reject/edit cycles are relatively common, and the generated `.rej`
files are not considered precious.
But the command does not overwrite existing `.rej` symbolic links, and
instead follows them. This is unsafe because the same patch could
potentially create such a symbolic link and point at arbitrary paths
outside the current worktree, and `git apply` would write the contents
of the `.rej` file into that location.
Therefore, let's make sure that any existing `.rej` file or symbolic
link is removed before writing it.
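A standalone sketch of that approach (not the literal apply.c change;
`open_reject_file()` is a made-up name) might look like this:
```
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Remove any pre-existing ".rej" path before creating the file, so
 * that a planted symbolic link is never followed.
 */
FILE *open_reject_file(const char *namebuf)
{
	/* ENOENT just means there was nothing to remove */
	if (unlink(namebuf) && errno != ENOENT)
		return NULL;
	return fopen(namebuf, "w");
}
```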
Reported-by: RyotaK <ryotak.mail@gmail.com>
Helped-by: Taylor Blau <me@ttaylorr.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Linus Torvalds <torvalds@linuxfoundation.org>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The `maint-2.30` branch accumulated quite a few fixes over the past two
years. Most of those fixes were originally based on newer versions, and
while the patches cherry-picked cleanly, we weren't diligent enough to
pay attention to the CI builds and the GETTEXT_POISON job regressed.
This topic branch fixes that.
* js/gettext-poison-fixes:
t0033: GETTEXT_POISON fix
t0003: GETTEXT_POISON fix, part 1
t0003: GETTEXT_POISON fix, conclusion
t5619: GETTEXT_POISON fix
t5604: GETTEXT_POISON fix, part 1
t5604: GETTEXT_POISON fix, conclusion
Update the version of Ubuntu used for GitHub Actions CI from 18.04
to 22.04.
* ds/github-actions-use-newer-ubuntu:
ci: update 'static-analysis' to Ubuntu 22.04
GitHub Actions scheduled a brownout of Ubuntu 18.04, which canceled all
runs of the 'static-analysis' job in our CI runs. Update to 22.04 to
avoid this as the brownout later turns into a complete deprecation.
The use of 18.04 was set in d051ed77ee (.github/workflows/main.yml: run
static-analysis on bionic, 2021-02-08) due to the lack of Coccinelle
being available on 20.04 (which continues today).
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Cherry pick commit d3775de0 (Makefile: force -O0 when compiling with
SANITIZE=leak, 2022-10-18), as otherwise the leak checker at GitHub
Actions CI seems to fail with a false positive.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
"git range-diff" code clean-up. Needed to pacify modern GCC versions.
* jk/range-diff-fixes:
range-diff: use ssize_t for parsed "len" in read_patches()
range-diff: handle unterminated lines in read_patches()
range-diff: drop useless "offset" variable from read_patches()
Deal with a few deprecation warnings from the cURL library.
* jk/curl-avoid-deprecated-api:
http: support CURLOPT_PROTOCOLS_STR
http: prefer CURLOPT_SEEKFUNCTION to CURLOPT_IOCTLFUNCTION
http-push: prefer CURLOPT_UPLOAD to CURLOPT_PUT
Adjust the GitHub CI to a newer Ubuntu release.
* jx/ci-ubuntu-fix:
github-actions: run gcc-8 on ubuntu-20.04 image
ci: install python on ubuntu
ci: use the same version of p4 on both Linux and macOS
ci: remove the pipe after "p4 -V" to catch errors
Meant to go with js/ci-gcc-12-fixes.
source: <xmqq7d68ytj8.fsf_-_@gitster.g>
* jc/http-clear-finished-pointer:
http.c: clear the 'finished' member once we are done with it
Fixes real problems noticed by gcc 12 and works around false
positives.
* js/ci-gcc-12-fixes:
nedmalloc: avoid new compile error
compat/win32/syslog: fix use-after-realloc
In http.c, the run_active_slot() function allows the given "slot" to
make progress by calling step_active_slots() in a loop repeatedly,
and the loop is not left until the request held in the slot
completes.
Ages ago, we used to use the slot->in_use member to get out of the
loop, which misbehaved when the request in "slot" completes (at
which time, the result of the request is copied away from the slot,
and the in_use member is cleared, making the slot ready to be
reused), and the "slot" gets reused to service a different request
(at which time, the "slot" becomes in_use again, even though it is
for a different request). The loop terminating condition mistakenly
thought that the original request had yet to be completed.
Today's code, after baa7b67d (HTTP slot reuse fixes, 2006-03-10)
fixed this issue, uses a separate "slot->finished" member that is
set in run_active_slot() to point to an on-stack variable, and the
code that completes the request in finish_active_slot() clears the
on-stack variable via the pointer to signal that the particular
request held by the slot has completed. It also clears the in_use
member (as before that fix), so that the slot itself can safely be
reused for an unrelated request.
One thing that is not quite clean in this arrangement is that,
unless the slot gets reused, at which point the finished member is
reset to NULL, the member keeps the value of &finished, which
becomes a dangling pointer into the stack when run_active_slot()
returns. Clear the finished member before the control leaves the
function, which has a side effect of unconfusing compilers like
recent GCC 12 that is over-eager to warn against such an assignment.
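The following paraphrased sketch, with the slot struct trimmed down to the
relevant members, shows the shape of the fix; it is illustrative rather
than the literal http.c diff:
```
/*
 * run_active_slot() must not leave "finished" pointing into its own
 * stack frame when it returns.
 */
struct sketch_slot {
	int in_use;
	int *finished;
};

void sketch_run_active_slot(struct sketch_slot *slot)
{
	int finished = 0;

	slot->finished = &finished;
	while (!finished) {
		/* step_active_slots() would drive curl here ... */
		finished = 1;	/* stand-in for the request completing */
	}

	/* do not leave a dangling pointer into this stack frame behind */
	slot->finished = NULL;
}
```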
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Technically, the pointer difference `end - start` _could_ be negative,
and when cast to an (unsigned) `size_t` that would cause problems. In
this instance, the symptom is:
dir.c: In function 'git_url_basename':
dir.c:3087:13: error: 'memchr' specified bound [9223372036854775808, 0]
exceeds maximum object size 9223372036854775807
[-Werror=stringop-overread]
CC ewah/bitmap.o
3087 | if (memchr(start, '/', end - start) == NULL
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
While it is a bit far-fetched to think that `end` (which is defined as
`repo + strlen(repo)`) and `start` (which starts at `repo` and never
steps beyond the NUL terminator) could result in such a negative
difference, GCC has no way of knowing that.
See also https://gcc.gnu.org/bugzilla//show_bug.cgi?id=85783.
Let's just add a safety check, primarily for GCC's benefit.
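A standalone sketch of that kind of guard (the real change is in dir.c's
`git_url_basename()`; `sketch_first_slash()` is a made-up name) could look
like this:
```
#include <string.h>

/*
 * Make the "start < end" assumption explicit so the pointer difference
 * handed to memchr() can never be negative.
 */
const char *sketch_first_slash(const char *start, const char *end)
{
	if (end <= start)
		return NULL;	/* nothing to scan; also pacifies GCC 12 */
	return memchr(start, '/', end - start);
}
```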
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In fade728df1 (apply: fix writing behind newly created symbolic links,
2023-02-02), we backported a patch onto v2.30.* that was originally
based on a much newer version. The v2.30.* release train still has the
GETTEXT_POISON CI job, though, and hence needs `test_i18n*` in its
tests.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
In bffc762f87 (dir-iterator: prevent top-level symlinks without
FOLLOW_SYMLINKS, 2023-01-24), we backported a patch onto v2.30.* that
was originally based on a much newer version. The v2.30.* release train
still has the GETTEXT_POISON CI job, though, and hence needs
`test_i18n*` in its tests.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
In cf8f6ce02a (clone: delay picking a transport until after
get_repo_path(), 2023-01-24), we backported a patch onto v2.30.* that
was originally based on a much newer version. The v2.30.* release train
still has the GETTEXT_POISON CI job, though, and hence needs
`test_i18n*` in its tests.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
As we iterate through the buffer containing git-log output, parsing
lines, we use an "int" to store the size of an individual line. This
should be a size_t, as we have no guarantee that there is not a
malicious 2GB+ commit-message line in the output.
Overflowing this integer probably doesn't do anything _too_ terrible. We
are not using the value to size a buffer, so the worst case is probably
an out-of-bounds read from before the array. But it's easy enough to
fix.
Note that we have to use ssize_t here, since we also store the length
result from parse_git_diff_header(), which may return a negative value
for error. That function actually returns an int itself, which has a
similar overflow problem, but I'll leave that for another day. Much
of the apply.c code uses ints and should be converted as a whole; in the
meantime, a negative return from parse_git_diff_header() will be
interpreted as an error, and we'll bail (so we can't handle such a case,
but given that it's likely to be malicious anyway, the important thing
is we don't have any memory errors).
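The following illustrative loop, with a made-up stub standing in for
`parse_git_diff_header()`, shows why `ssize_t` fits both roles; it is not
the actual read_patches() code:
```
#include <string.h>
#include <sys/types.h>

/* Hypothetical stand-in for parse_git_diff_header(): negative means error. */
static ssize_t parse_diff_header_stub(const char *line, size_t remaining)
{
	return (ssize_t)remaining;
}

/*
 * "len" is an ssize_t so it can hold both a line length beyond INT_MAX
 * and the negative error value returned by the header parser.
 */
int sketch_scan_lines(const char *buf, size_t size)
{
	const char *p = buf;

	while (size > 0) {
		const char *nl = memchr(p, '\n', size);
		ssize_t len = nl ? (ssize_t)(nl - p + 1) : (ssize_t)size;

		if ((size_t)len > 11 && !memcmp(p, "diff --git ", 11)) {
			len = parse_diff_header_stub(p, size);
			if (len < 0)
				return -1;	/* bail on a malformed header */
		}
		p += len;
		size -= (size_t)len;
	}
	return 0;
}
```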
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 3c50032ff5 (attr: ignore overly large gitattributes files,
2022-12-01), we backported a patch onto v2.30.* that was originally
based on a much newer version. The v2.30.* release train still has the
GETTEXT_POISON CI job, though, and hence needs `test_i18n*` in its
tests.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
When parsing our buffer of output from git-log, we have a
find_end_of_line() helper that finds the next newline, and gives us the
number of bytes to move past it, or the size of the whole remaining
buffer if there is no newline.
But trying to handle both those cases leads to some oddities:
- we try to overwrite the newline with NUL in the caller, by writing
over line[len-1]. This is at best redundant, since the helper will
already have done so if it saw a newline. But if it didn't see a
newline, it's actively wrong; we'll overwrite the byte at the end of
the (unterminated) line.
We could solve this by just dropping the extra NUL assignment in the
caller and letting the helper do the right thing. But...
- if we see a "diff --git" line, we'll restore the newline on top of
the NUL byte, so we can pass the string to parse_git_diff_header().
But if there was no newline in the first place, we can't do this.
There's no place to put it (the current code writes a newline
over whatever byte we obliterated earlier). The best we can do is
feed the complete remainder of the buffer to the function (which is,
in fact, a string, by virtue of being a strbuf).
To solve this, the caller needs to know whether we actually found a
newline or not. We could modify find_end_of_line() to return that
information, but we can further observe that it has only one caller.
So let's just inline it in that caller.
Nobody seems to have noticed this case, probably because git-log would
never produce input that doesn't end with a newline. Arguably we could
just return an error as soon as we see that the output does not end in a
newline. But the code to do so actually ends up _longer_, mostly because
of the cleanup we have to do in handling the error.
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In dfa6b32b5e (attr: ignore attribute lines exceeding 2048 bytes,
2022-12-01), we backported a patch onto v2.30.* that was originally
based on a much newer version. The v2.30.* release train still has the
GETTEXT_POISON CI job, though, and hence needs `test_i18n*` in its
tests.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
In e47363e5a8 (t0033: add tests for safe.directory, 2022-04-13), we
backported a patch onto v2.30.* that was originally based on a much
newer version. The v2.30.* release train still has the GETTEXT_POISON
CI job, though, and hence needs `test_i18n*` in its tests.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The CURLOPT_PROTOCOLS (and matching CURLOPT_REDIR_PROTOCOLS) flag was
deprecated in curl 7.85.0, and using it generates compiler warnings as of
curl 7.87.0. The path forward is to use CURLOPT_PROTOCOLS_STR, but we
can't just do so unilaterally, as it was only introduced less than a
year ago in 7.85.0.
Until that version becomes ubiquitous, we have to either disable the
deprecation warning or conditionally use the "STR" variant on newer
versions of libcurl. This patch switches to the new variant, which is
nice for two reasons:
- we don't have to worry that silencing curl's deprecation warnings
might cause us to miss other more useful ones
- we'd eventually want to move to the new variant anyway, so this gets
us set up (albeit with some extra ugly boilerplate for the
conditional)
There are a lot of ways to split up the two cases. One way would be to
abstract the storage type (strbuf versus a long), how to append
(strbuf_addstr vs bitwise OR), how to initialize, which CURLOPT to use,
and so on. But the resulting code looks pretty magical:
GIT_CURL_PROTOCOL_TYPE allowed = GIT_CURL_PROTOCOL_TYPE_INIT;
if (...http is allowed...)
GIT_CURL_PROTOCOL_APPEND(&allowed, "http", CURLOPT_HTTP);
and you end up with more "#define GIT_CURL_PROTOCOL_TYPE" macros than
actual code.
On the other end of the spectrum, we could just implement two separate
functions, one that handles a string list and one that handles bits. But
then we end up repeating our list of protocols (http, https, ftp, ftps).
This patch takes the middle ground. The run-time code is always there to
handle both types, and we just choose which one to feed to curl.
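A sketch of that version conditional (not git's actual http.c code; the
protocol list shown is arbitrary) might look like this:
```
#include <curl/curl.h>

/*
 * Feed libcurl whichever representation it understands, keeping the
 * protocol list in one place per branch of the conditional.
 */
void sketch_set_allowed_protocols(CURL *curl)
{
#if LIBCURL_VERSION_NUM >= 0x075500	/* 7.85.0 introduced the _STR variant */
	curl_easy_setopt(curl, CURLOPT_PROTOCOLS_STR, "http,https");
	curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS_STR, "http,https");
#else
	curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
			 CURLPROTO_HTTP | CURLPROTO_HTTPS);
	curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
			 CURLPROTO_HTTP | CURLPROTO_HTTPS);
#endif
}
```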
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Python is missing from the default ubuntu-22.04 runner image, which
prevents git-p4 from working. To install python on ubuntu, we need
to provide the correct package names:
* On Ubuntu 18.04 (bionic), "/usr/bin/python2" is provided by the
"python" package, and "/usr/bin/python3" is provided by the "python3"
package.
* On Ubuntu 20.04 (focal) and above, "/usr/bin/python2" is provided by
the "python2" package which has a different name from bionic, and
"/usr/bin/python3" is provided by "python3".
Since the "ubuntu-latest" runner image has a higher version, its
safe to use "python2" or "python3" package name.
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "offset" variable was was introduced in 44b67cb62b (range-diff:
split lines manually, 2019-07-11), but it has never done anything
useful. We use it to count up the number of bytes we've consumed, but we
never look at the result. It was probably copied accidentally from an
almost-identical loop in apply.c:find_header() (and the point of that
commit was to make use of the parse_git_diff_header() function which
underlies both).
Because the variable was set but not used, most compilers didn't seem to
notice, but the upcoming clang-14 does complain about it, via its
-Wunused-but-set-variable warning.
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The IOCTLFUNCTION option has been deprecated, and generates a compiler
warning in recent versions of curl. We can switch to using SEEKFUNCTION
instead. It was added in 2008 via curl 7.18.0; our INSTALL file already
indicates we require at least curl 7.19.4.
But there's one catch: curl says we should use CURL_SEEKFUNC_{OK,FAIL},
and those didn't arrive until 7.19.5. One workaround would be to use a
bare 0/1 here (or define our own macros). But let's just bump the
minimum required version to 7.19.5. That version is only a minor version
bump from our existing requirement, and is only a 2 month time bump for
versions that are almost 13 years old. So it's not likely that anybody
cares about the distinction.
Switching means we have to rewrite the ioctl functions into seek
functions. In some ways they are simpler (seeking is the only
operation), but in some ways more complex (the ioctl allowed only a full
rewind, but now we can seek to arbitrary offsets).
Curl will only ever use SEEK_SET (per their documentation), so I didn't
bother implementing anything else, since it would naturally be
completely untested. This seems unlikely to change, but I added an
assertion just in case.
Likewise, I doubt curl will ever try to seek outside of the buffer sizes
we've told it, but I erred on the defensive side here, rather than do an
out-of-bounds read.
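The following sketch shows a seek callback in that spirit; the upload
struct is a made-up stand-in for git's own state, not the actual
http.c/http-push.c code:
```
#include <curl/curl.h>
#include <stdio.h>

/* Hypothetical upload state; git keeps the real thing in its own structs. */
struct sketch_upload {
	const char *buf;
	size_t len;
	size_t pos;
};

/*
 * Only SEEK_SET is expected from curl, and out-of-range offsets are
 * rejected rather than read out of bounds.
 */
int sketch_seek(void *clientp, curl_off_t offset, int origin)
{
	struct sketch_upload *up = clientp;

	if (origin != SEEK_SET)
		return CURL_SEEKFUNC_FAIL;	/* curl documents SEEK_SET only */
	if (offset < 0 || offset > (curl_off_t)up->len)
		return CURL_SEEKFUNC_FAIL;	/* stay inside the buffer */
	up->pos = (size_t)offset;
	return CURL_SEEKFUNC_OK;
}

/* registration:
 *	curl_easy_setopt(h, CURLOPT_SEEKFUNCTION, sketch_seek);
 *	curl_easy_setopt(h, CURLOPT_SEEKDATA, &upload);
 */
```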
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The two options do exactly the same thing, but the latter has been
deprecated and in recent versions of curl may produce a compiler
warning. Since the UPLOAD form is available everywhere (it was
introduced in the year 2000 by curl 7.1), we can just switch to it.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
GCC v12.x complains thusly:
compat/nedmalloc/nedmalloc.c: In function 'DestroyCaches':
compat/nedmalloc/nedmalloc.c:326:12: error: the comparison will always
evaluate as 'true' for the address of 'caches'
will never be NULL [-Werror=address]
326 | if(p->caches)
| ^
compat/nedmalloc/nedmalloc.c:196:22: note: 'caches' declared here
196 | threadcache *caches[THREADCACHEMAXCACHES];
| ^~~~~~
... and it is correct, of course.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There would be a segmentation fault when running p4 v16.2 on ubuntu
22.04, which is the latest version of the ubuntu runner image for
GitHub Actions.
By checking each version from [1], p4d version 21.1 and above can work
properly on ubuntu 22.04. But version 22.x will break some p4 test
cases. So p4 version 21.x is exactly the version we can use.
With this update, the versions of p4 for Linux and macOS happen to be
the same. So we can add the version number directly into the "P4WHENCE"
variable, and reuse it in p4 installation for macOS.
By removing the "LINUX_P4_VERSION" variable from "ci/lib.sh", the
comment left above has nothing to do with p4, but still applies to
git-lfs. Since we have a fixed version of git-lfs installed on Linux,
we may have a different version on macOS.
[1]: https://cdist2.perforce.com/perforce/
Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When installing p4 as a dependency, we used to pipe the output of "p4 -V"
and "p4d -V" to validate the installation and output condensed version
information. But this would hide potential errors from p4 and would stop
with an empty output. E.g.: p4d version 16.2 running on ubuntu 22.04
segfaults, even before it produces any output.
By removing the pipe after "p4 -V" and "p4d -V", we may get a
verbose output, and stop immediately on errors because we have "set
-e" in "ci/lib.sh". Since we won't look at these trace logs unless
something fails, just including the raw output seems most sensible.
Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
GitHub has started to upgrade its runner image "ubuntu-latest" from
version "ubuntu-20.04" to version "ubuntu-22.04". CI jobs will fail to
find and install the "gcc-8" package on the new runner image.
Change the runner image of the `linux-gcc` job from "ubuntu-latest" to
"ubuntu-20.04" in order to install "gcc-8" as a dependency.
Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git for Windows' SDK recently upgraded to GCC v12.x which points out
that the `pos` variable might be used even after the corresponding
memory was `realloc()`ed and therefore potentially no longer valid.
Since a subset of this SDK is used in Git's CI/PR builds, we need to fix
this to continue to be able to benefit from the CI/PR runs.
Note: This bug has been with us since 2a6b149c64 (mingw: avoid using
strbuf in syslog, 2011-10-06), and while it looks tempting to replace
the hand-rolled string manipulation with a `strbuf`-based one, that
commit's message explains why we cannot do that: The `syslog()` function
is called as part of the function in `daemon.c` which is set as the
`die()` routine, and since `strbuf_grow()` can call that function if it
runs out of memory, this would cause a nasty infinite loop that we do
not want to re-introduce.
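A standalone sketch of the offset-preserving pattern described above (not
the literal compat/win32/syslog.c diff; the function name is made up)
looks like this:
```
#include <stdlib.h>

/*
 * Remember the offset of "pos" before realloc() and recompute the
 * pointer afterwards, since realloc() may move the buffer.
 */
char *sketch_grow_keeping_pos(char *str, char **pos, size_t new_size)
{
	size_t offset = *pos - str;	/* survives a moving realloc() */
	char *new_str = realloc(str, new_size);

	if (!new_str)
		return NULL;		/* caller still owns the old buffer */
	*pos = new_str + offset;	/* never dereference the stale "pos" */
	return new_str;
}
```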
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint-2.35:
Git 2.35.7
Git 2.34.7
http: support CURLOPT_PROTOCOLS_STR
http: prefer CURLOPT_SEEKFUNCTION to CURLOPT_IOCTLFUNCTION
http-push: prefer CURLOPT_UPLOAD to CURLOPT_PUT
Git 2.33.7
Git 2.32.6
Git 2.31.7
Git 2.30.8
apply: fix writing behind newly created symbolic links
dir-iterator: prevent top-level symlinks without FOLLOW_SYMLINKS
clone: delay picking a transport until after get_repo_path()
t5619: demonstrate clone_local() with ambiguous transport
* maint-2.34:
Git 2.34.7
http: support CURLOPT_PROTOCOLS_STR
http: prefer CURLOPT_SEEKFUNCTION to CURLOPT_IOCTLFUNCTION
http-push: prefer CURLOPT_UPLOAD to CURLOPT_PUT
Git 2.33.7
Git 2.32.6
Git 2.31.7
Git 2.30.8
apply: fix writing behind newly created symbolic links
dir-iterator: prevent top-level symlinks without FOLLOW_SYMLINKS
clone: delay picking a transport until after get_repo_path()
t5619: demonstrate clone_local() with ambiguous transport
* maint-2.33:
Git 2.33.7
Git 2.32.6
Git 2.31.7
Git 2.30.8
apply: fix writing behind newly created symbolic links
dir-iterator: prevent top-level symlinks without FOLLOW_SYMLINKS
clone: delay picking a transport until after get_repo_path()
t5619: demonstrate clone_local() with ambiguous transport
Deal with a few deprecation warnings from the cURL library.
* jk/curl-avoid-deprecated-api:
http: support CURLOPT_PROTOCOLS_STR
http: prefer CURLOPT_SEEKFUNCTION to CURLOPT_IOCTLFUNCTION
http-push: prefer CURLOPT_UPLOAD to CURLOPT_PUT
The CURLOPT_PROTOCOLS (and matching CURLOPT_REDIR_PROTOCOLS) flag was
deprecated in curl 7.85.0, and using it generates compiler warnings as of
curl 7.87.0. The path forward is to use CURLOPT_PROTOCOLS_STR, but we
can't just do so unilaterally, as it was only introduced less than a
year ago in 7.85.0.
Until that version becomes ubiquitous, we have to either disable the
deprecation warning or conditionally use the "STR" variant on newer
versions of libcurl. This patch switches to the new variant, which is
nice for two reasons:
- we don't have to worry that silencing curl's deprecation warnings
might cause us to miss other more useful ones
- we'd eventually want to move to the new variant anyway, so this gets
us set up (albeit with some extra ugly boilerplate for the
conditional)
There are a lot of ways to split up the two cases. One way would be to
abstract the storage type (strbuf versus a long), how to append
(strbuf_addstr vs bitwise OR), how to initialize, which CURLOPT to use,
and so on. But the resulting code looks pretty magical:
GIT_CURL_PROTOCOL_TYPE allowed = GIT_CURL_PROTOCOL_TYPE_INIT;
if (...http is allowed...)
GIT_CURL_PROTOCOL_APPEND(&allowed, "http", CURLOPT_HTTP);
and you end up with more "#define GIT_CURL_PROTOCOL_TYPE" macros than
actual code.
On the other end of the spectrum, we could just implement two separate
functions, one that handles a string list and one that handles bits. But
then we end up repeating our list of protocols (http, https, ftp, ftps).
This patch takes the middle ground. The run-time code is always there to
handle both types, and we just choose which one to feed to curl.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The IOCTLFUNCTION option has been deprecated, and generates a compiler
warning in recent versions of curl. We can switch to using SEEKFUNCTION
instead. It was added in 2008 via curl 7.18.0; our INSTALL file already
indicates we require at least curl 7.19.4.
But there's one catch: curl says we should use CURL_SEEKFUNC_{OK,FAIL},
and those didn't arrive until 7.19.5. One workaround would be to use a
bare 0/1 here (or define our own macros). But let's just bump the
minimum required version to 7.19.5. That version is only a minor version
bump from our existing requirement, and is only a 2 month time bump for
versions that are almost 13 years old. So it's not likely that anybody
cares about the distinction.
Switching means we have to rewrite the ioctl functions into seek
functions. In some ways they are simpler (seeking is the only
operation), but in some ways more complex (the ioctl allowed only a full
rewind, but now we can seek to arbitrary offsets).
Curl will only ever use SEEK_SET (per their documentation), so I didn't
bother implementing anything else, since it would naturally be
completely untested. This seems unlikely to change, but I added an
assertion just in case.
Likewise, I doubt curl will ever try to seek outside of the buffer sizes
we've told it, but I erred on the defensive side here, rather than do an
out-of-bounds read.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
The two options do exactly the same thing, but the latter has been
deprecated and in recent versions of curl may produce a compiler
warning. Since the UPLOAD form is available everywhere (it was
introduced in the year 2000 by curl 7.1), we can just switch to it.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
* maint-2.32:
Git 2.32.6
Git 2.31.7
Git 2.30.8
apply: fix writing behind newly created symbolic links
dir-iterator: prevent top-level symlinks without FOLLOW_SYMLINKS
clone: delay picking a transport until after get_repo_path()
t5619: demonstrate clone_local() with ambiguous transport
* maint-2.31:
Git 2.31.7
Git 2.30.8
apply: fix writing behind newly created symbolic links
dir-iterator: prevent top-level symlinks without FOLLOW_SYMLINKS
clone: delay picking a transport until after get_repo_path()
t5619: demonstrate clone_local() with ambiguous transport
* maint-2.30:
Git 2.30.8
apply: fix writing behind newly created symbolic links
dir-iterator: prevent top-level symlinks without FOLLOW_SYMLINKS
clone: delay picking a transport until after get_repo_path()
t5619: demonstrate clone_local() with ambiguous transport
Fix a vulnerability (CVE-2023-23946) that allows crafted input to trick
`git apply` into writing files outside of the working tree.
* ps/apply-beyond-symlink:
apply: fix writing behind newly created symbolic links
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Resolve a security vulnerability (CVE-2023-22490) where `clone_local()`
is used in conjunction with non-local transports, leading to arbitrary
path exfiltration.
* tb/clone-local-symlinks:
dir-iterator: prevent top-level symlinks without FOLLOW_SYMLINKS
clone: delay picking a transport until after get_repo_path()
t5619: demonstrate clone_local() with ambiguous transport
When writing files git-apply(1) initially makes sure that none of the
files it is about to create are behind a symlink:
```
$ git init repo
Initialized empty Git repository in /tmp/repo/.git/
$ cd repo/
$ ln -s dir symlink
$ git apply - <<EOF
diff --git a/symlink/file b/symlink/file
new file mode 100644
index 0000000..e69de29
EOF
error: affected file 'symlink/file' is beyond a symbolic link
```
This safety mechanism is crucial to ensure that we don't write outside
of the repository's working directory. It can be fooled though when the
patch that is being applied creates the symbolic link in the first
place, which can lead to writing files in arbitrary locations.
Fix this by checking whether the path we're about to create is
beyond a symlink or not. Tightening these checks like this should be
fine as we already have these precautions in Git as explained
above. Ideally, we should update the check we do up-front before
starting to reflect the computed changes to the working tree so that
we catch this case as well, but as part of embargoed security work,
adding an equivalent check just before we try to write out a file
should serve us well as a reasonable first step.
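A rough standalone sketch of such a check (not the actual apply.c change,
which uses git's own cached-lstat helpers; the function name is made up)
might look like this:
```
#include <string.h>
#include <sys/stat.h>

/*
 * Before creating "path", inspect each leading directory and refuse if
 * any component is a symbolic link.
 */
int sketch_leading_path_has_symlink(const char *path)
{
	char buf[4096];
	struct stat st;
	const char *slash = path;

	while ((slash = strchr(slash + 1, '/')) != NULL) {
		size_t len = (size_t)(slash - path);

		if (len >= sizeof(buf))
			return -1;	/* too long for this simple sketch */
		memcpy(buf, path, len);
		buf[len] = '\0';
		/* lstat() looks at the component itself, not its target */
		if (!lstat(buf, &st) && S_ISLNK(st.st_mode))
			return 1;	/* e.g. "symlink" in "symlink/file" */
	}
	return 0;
}
```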
Digging back into history shows that this vulnerability has existed
since at least Git v2.9.0. As Git v2.8.0 and older don't build on my
system anymore, I cannot tell whether older versions are affected as
well.
Reported-by: Joern Schneeweisz <jschneeweisz@gitlab.com>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When using the dir_iterator API, we first stat(2) the base path, and
then use that as a starting point to enumerate the directory's contents.
If the directory contains symbolic links, we will immediately die() upon
encountering them without the `FOLLOW_SYMLINKS` flag. The same is not
true when resolving the top-level directory, though.
As explained in a previous commit, this oversight in 6f054f9fb3
(builtin/clone.c: disallow `--local` clones with symlinks, 2022-07-28)
can be used as an attack vector to include arbitrary files on a victim's
filesystem from outside of the repository.
Prevent resolving top-level symlinks unless the FOLLOW_SYMLINKS flag is
given, which will cause clones of a repository with a symlink'd
"$GIT_DIR/objects" directory to fail.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the previous commit, t5619 demonstrates an issue where two calls to
`get_repo_path()` could trick Git into using its local clone mechanism
in conjunction with a non-local transport.
That sequence is:
- the starting state is that the local path https:/example.com/foo is a
symlink that points to ../../../.git/modules/foo. So it's dangling.
- get_repo_path() sees that no such path exists (because it's
dangling), and thus we do not canonicalize it into an absolute path
- because we're using --separate-git-dir, we create .git/modules/foo.
Now our symlink is no longer dangling!
- we pass the url to transport_get(), which sees it as an https URL.
- we call get_repo_path() again, on the url. This second call was
introduced by f38aa83f9a (use local cloning if insteadOf makes a
local URL, 2014-07-17). The idea is that we want to pull the url
fresh from the remote.c API, because it will apply any aliases.
And of course now it sees that there is a local file, which is a
mismatch with the transport we already selected.
The issue in the above sequence is calling `transport_get()` before
deciding whether or not the repository is indeed local, and not passing
in an absolute path if it is local.
This is reminiscent of a similar bug report in [1], where it was
suggested to perform the `insteadOf` lookup earlier. Taking that
approach may not be as straightforward, since the intent is to store the
original URL in the config, but to actually fetch from the insteadOf
one, so conflating the two early on is a non-starter.
Note: we pass the path returned by `get_repo_path(remote->url[0])`,
which should be the same as `repo_name` (aside from any `insteadOf`
rewrites).
We *could* pass `absolute_pathdup()` of the same argument, which
86521acaca (Bring local clone's origin URL in line with that of a remote
clone, 2008-09-01) indicates may differ depending on the presence of
".git/" for a non-bare repo. That matters for forming relative submodule
paths, but doesn't matter for the second call, since we're just feeding
it to the transport code, which is fine either way.
[1]: https://lore.kernel.org/git/CAMoD=Bi41mB3QRn3JdZL-FGHs4w3C2jGpnJB-CqSndO7FMtfzA@mail.gmail.com/
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When cloning a repository, Git must determine (a) what transport
mechanism to use, and (b) whether or not the clone is local.
Since f38aa83f9a (use local cloning if insteadOf makes a local URL,
2014-07-17), the latter check happens after the remote has been
initialized, and references the remote's URL instead of the local path.
This is done to make it possible for a `url.<base>.insteadOf` rule to
convert a remote URL into a local one, in which case the `clone_local()`
mechanism should be used.
However, with a specially crafted repository, Git can be tricked into
using a non-local transport while still setting `is_local` to "1" and
using the `clone_local()` optimization. The below test case
demonstrates such an instance, and shows that it can be used to include
arbitrary (known) paths in the working copy of a cloned repository on a
victim's machine[^1], even if local file clones are forbidden by
`protocol.file.allow`.
This happens in a few parts:
1. We first call `get_repo_path()` to see if the remote is a local
path. If it is, we replace the repo name with its absolute path.
2. We then call `transport_get()` on the repo name and decide how to
access it. If it was turned into an absolute path in the previous
step, then we should always treat it like a file.
3. We use `get_repo_path()` again, and set `is_local` as appropriate.
But it's already too late to rewrite the repo name as an absolute
path, since we've already fed it to the transport code.
The attack works by including a submodule whose URL corresponds to a
path on disk. In the below example, the repository "sub" is reachable
via the dumb HTTP protocol at (something like):
http://127.0.0.1:NNNN/dumb/sub.git
However, the path "http:/127.0.0.1:NNNN/dumb" (that is, a top-level
directory called "http:", then nested directories "127.0.0.1:NNNN", and
"dumb") exists within the repository, too.
When cloning the submodule, Git first picks the appropriate transport, which is
dumb HTTP. It then uses the remote's URL in order to determine whether
the repository exists locally on disk. However, the malicious repository
also contains an embedded stub repository which is the target of a
symbolic link at the local path corresponding to the "sub" repository on
disk (i.e., there is a symbolic link at "http:/127.0.0.1/dumb/sub.git",
pointing to the stub repository via ".git/modules/sub/../../../repo").
This stub repository fools Git into thinking that a local repository
exists at that URL and thus can be cloned locally. The affected call is
in `get_repo_path()`, which in turn calls `get_repo_path_1()`, which
locates a valid repository at that target.
This then causes Git to set the `is_local` variable to "1", and in turn
instructs Git to clone the repository using its local clone optimization
via the `clone_local()` function.
The exploit comes into play because the stub repository's top-level
"$GIT_DIR/objects" directory is a symbolic link which can point to an
arbitrary path on the victim's machine. `clone_local()` resolves the
top-level "objects" directory through a `stat(2)` call, meaning that we
read through the symbolic link and copy or hardlink the directory
contents at the destination of the link.
In other words, we can get steps (1) and (3) to disagree by leveraging
the dangling symlink to pick a non-local transport in the first step,
and then set is_local to "1" in the third step when cloning with
`--separate-git-dir`, which makes the symlink non-dangling.
This can result in data-exfiltration on the victim's machine when
sensitive data is at a known path (e.g., "/home/$USER/.ssh").
The appropriate fix is two-fold:
- Resolve the transport later on (to avoid using the local
clone optimization with a non-local transport).
- Avoid reading through the top-level "objects" directory when
(correctly) using the clone_local() optimization.
This patch merely demonstrates the issue. The following two patches will
implement each part of the above fix, respectively.
[^1]: Provided that any target directory does not contain symbolic
links, in which case the changes from 6f054f9fb3 (builtin/clone.c:
disallow `--local` clones with symlinks, 2022-07-28) will abort the
clone.
Reported-by: yvvdwf <yvvdwf@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
On platforms where `size_t` does not have the same width as `unsigned
long`, passing a pointer to the former when a pointer to the latter is
expected can lead to problems.
Windows and 32-bit Linux are among the affected platforms.
In this instance, we want to store the size of the blob that was read in
that variable. However, `read_blob_data_from_index()` passes that
pointer to `read_object_file()` which expects an `unsigned long *`.
Which means that on affected platforms, the variable is not fully
populated and part of its value is left uninitialized. (On Big-Endian
platforms, this problem would be even worse.)
The consequence is that depending on the uninitialized memory's
contents, we may erroneously reject perfectly fine attributes.
Let's address this by passing a pointer to a variable of the expected
data type.
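A small sketch of that pattern, with a made-up stub standing in for
`read_object_file()`, illustrates the point:
```
#include <stddef.h>

/* Hypothetical stand-in with the same shape as read_object_file(): the
 * size comes back through an "unsigned long *", not a "size_t *". */
static void *read_object_stub(unsigned long *size_out)
{
	*size_out = 0;
	return NULL;
}

/*
 * Pass a pointer of exactly the type the callee expects, then widen the
 * result into a size_t afterwards.
 */
void *sketch_read_blob(size_t *size_out)
{
	unsigned long sz;	/* matches the callee's parameter type */
	void *data = read_object_stub(&sz);

	*size_out = sz;		/* safe widening assignment */
	return data;
}
```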
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Recently, a vulnerability was reported that can lead to an out-of-bounds
write when reading an unreasonably large gitattributes file. The root
causes of this error are multiple integer overflows in different parts of
the code when there are either too many lines, when paths are too long,
when attribute names are too long, or when there are too many attributes
declared for a pattern.
As all of these are related to size, it seems reasonable to restrict the
size of the gitattributes file via git-fsck(1). This allows us both to
stop distributing known-vulnerable objects via common hosting platforms
that have fsck enabled, and to let users protect themselves by enabling
the `fetch.fsckObjects` config.
There are basically two checks:
1. We verify that size of the gitattributes file is smaller than
100MB.
2. We verify that the maximum line length does not exceed 2048
bytes.
With the preceding commits, both of these conditions would cause us to
either ignore the complete gitattributes file or blob in the first case,
or the specific line in the second case. Now with these consistency
checks added, we also grow the ability to stop distributing such files
in the first place when `receive.fsckObjects` is enabled.
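The following sketch illustrates the two limits in standalone form; the
constant names are made up and the real checks live in git's attr and
fsck code:
```
#include <stddef.h>
#include <string.h>

#define SKETCH_ATTR_MAX_SIZE (100UL * 1024 * 1024)	/* 100MB */
#define SKETCH_ATTR_MAX_LINE 2048			/* bytes */

/*
 * Flag a .gitattributes blob that is too large or that contains an
 * overly long line.
 */
int sketch_check_gitattributes(const char *buf, size_t size)
{
	const char *p = buf, *end = buf + size;

	if (size > SKETCH_ATTR_MAX_SIZE)
		return -1;			/* blob too large */

	while (p < end) {
		const char *nl = memchr(p, '\n', end - p);
		size_t len = nl ? (size_t)(nl - p) : (size_t)(end - p);

		if (len > SKETCH_ATTR_MAX_LINE)
			return -1;		/* line too long */
		p = nl ? nl + 1 : end;
	}
	return 0;
}
```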
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Move the checks for gitattributes so that they can be extended more
readily.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In `fsck_finish()` we check all blobs for consistency that we have found
during the tree walk, but that haven't yet been checked. This is only
required for gitmodules right now, but will also be required for a new
check for gitattributes.
Pull out a function `fsck_blobs()` that allows the caller to check a set
of blobs for consistency.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In general, we don't need to validate blob contents as they are opaque
blobs whose content Git doesn't need to care about. There are some
exceptions though when blobs are linked into trees so that they would be
interpreted by Git. We only have a single such check right now though,
which is the one for gitmodules that has been added in the context of
CVE-2018-11235.
Now we have found another vulnerability with gitattributes that can lead
to out-of-bounds writes and reads. So let's refactor `fsck_blob()` so
that it is more extensible and can check different types of blobs.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Both the padding and wrapping formatting directives allow the caller to
specify an integer that ultimately leads to us adding this many chars to
the result buffer. As a consequence, it is trivial to e.g. allocate 2GB
of RAM via a single formatting directive and cause resource exhaustion
on the machine executing this logic. Furthermore, it is debatable
whether there are any sane use cases that require the user to pad data to
2GB boundaries or to indent wrapped data by 2GB.
Restrict the input sizes to 16 kilobytes at a maximum to limit the
amount of bytes that can be requested by the user. This is not meant
as a fix because there are ways to trivially amplify the amount of
data we generate via formatting directives; the real protection is
achieved by the changes in previous steps to catch and avoid integer
wraparound that causes us to under-allocate and access beyond the
end of allocated memory regions. But having such a limit
significantly helps fuzzing the pretty format, because the fuzzer is
otherwise quite fast to run out-of-memory as it discovers these
formatters.
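A minimal sketch of such a clamp, with an illustrative constant name and
behaviour rather than git's actual pretty.c logic:
```
#define SKETCH_FORMATTING_LIMIT (16 * 1024)	/* 16 kilobytes */

int sketch_sane_pad_width(int width)
{
	/* refuse absurd user-supplied widths instead of allocating for them */
	if (width > SKETCH_FORMATTING_LIMIT || width < -SKETCH_FORMATTING_LIMIT)
		return 0;
	return width;
}
```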
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In `strbuf_utf8_replace`, we preallocate the destination buffer and then
use `memcpy` to copy bytes into it at computed offsets. This feels
rather fragile and is hard to understand at times. Refactor the code to
instead use `strbuf_add` and `strbuf_addstr` so that we can be sure that
there is no possibility to perform an out-of-bounds write.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In `strbuf_utf8_replace()`, we call `utf8_width()` to compute the width
of the current glyph. If the glyph is a control character though it can
be that `utf8_width()` returns `-1`, but because we assign this value to
a `size_t` the conversion will cause us to underflow. This bug can
easily be triggered with the following command:
$ git log --pretty='format:xxx%<|(1,trunc)%x10'
>From all I can see though this seems to be a benign underflow that has
no security-related consequences.
Fix the bug by using an `int` instead. When we see a control character,
we now copy it into the target buffer but don't advance the current
width of the string.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The return type of both `utf8_strwidth()` and `utf8_strnwidth()` is
`int`, but we operate on string lengths which are typically of type
`size_t`. This means that when the string is longer than `INT_MAX`, we
will overflow and thus return a negative result.
This can lead to an out-of-bounds write with `--pretty=format:%<(1)%B`
and a commit message that is 2^31+1 bytes long:
=================================================================
==26009==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x603000001168 at pc 0x7f95c4e5f427 bp 0x7ffd8541c900 sp 0x7ffd8541c0a8
WRITE of size 2147483649 at 0x603000001168 thread T0
#0 0x7f95c4e5f426 in __interceptor_memcpy /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:827
#1 0x5612bbb1068c in format_and_pad_commit pretty.c:1763
#2 0x5612bbb1087a in format_commit_item pretty.c:1801
#3 0x5612bbc33bab in strbuf_expand strbuf.c:429
#4 0x5612bbb110e7 in repo_format_commit_message pretty.c:1869
#5 0x5612bbb12d96 in pretty_print_commit pretty.c:2161
#6 0x5612bba0a4d5 in show_log log-tree.c:781
#7 0x5612bba0d6c7 in log_tree_commit log-tree.c:1117
#8 0x5612bb691ed5 in cmd_log_walk_no_free builtin/log.c:508
#9 0x5612bb69235b in cmd_log_walk builtin/log.c:549
#10 0x5612bb6951a2 in cmd_log builtin/log.c:883
#11 0x5612bb56c993 in run_builtin git.c:466
#12 0x5612bb56d397 in handle_builtin git.c:721
#13 0x5612bb56db07 in run_argv git.c:788
#14 0x5612bb56e8a7 in cmd_main git.c:923
#15 0x5612bb803682 in main common-main.c:57
#16 0x7f95c4c3c28f (/usr/lib/libc.so.6+0x2328f)
#17 0x7f95c4c3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
#18 0x5612bb5680e4 in _start ../sysdeps/x86_64/start.S:115
0x603000001168 is located 0 bytes to the right of 24-byte region [0x603000001150,0x603000001168)
allocated by thread T0 here:
#0 0x7f95c4ebe7ea in __interceptor_realloc /usr/src/debug/gcc/libsanitizer/asan/asan_malloc_linux.cpp:85
#1 0x5612bbcdd556 in xrealloc wrapper.c:136
#2 0x5612bbc310a3 in strbuf_grow strbuf.c:99
#3 0x5612bbc32acd in strbuf_add strbuf.c:298
#4 0x5612bbc33aec in strbuf_expand strbuf.c:418
#5 0x5612bbb110e7 in repo_format_commit_message pretty.c:1869
#6 0x5612bbb12d96 in pretty_print_commit pretty.c:2161
#7 0x5612bba0a4d5 in show_log log-tree.c:781
#8 0x5612bba0d6c7 in log_tree_commit log-tree.c:1117
#9 0x5612bb691ed5 in cmd_log_walk_no_free builtin/log.c:508
#10 0x5612bb69235b in cmd_log_walk builtin/log.c:549
#11 0x5612bb6951a2 in cmd_log builtin/log.c:883
#12 0x5612bb56c993 in run_builtin git.c:466
#13 0x5612bb56d397 in handle_builtin git.c:721
#14 0x5612bb56db07 in run_argv git.c:788
#15 0x5612bb56e8a7 in cmd_main git.c:923
#16 0x5612bb803682 in main common-main.c:57
#17 0x7f95c4c3c28f (/usr/lib/libc.so.6+0x2328f)
SUMMARY: AddressSanitizer: heap-buffer-overflow /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:827 in __interceptor_memcpy
Shadow bytes around the buggy address:
0x0c067fff81d0: fd fd fd fa fa fa fd fd fd fa fa fa fd fd fd fa
0x0c067fff81e0: fa fa fd fd fd fd fa fa fd fd fd fd fa fa fd fd
0x0c067fff81f0: fd fa fa fa fd fd fd fa fa fa fd fd fd fa fa fa
0x0c067fff8200: fd fd fd fa fa fa fd fd fd fd fa fa 00 00 00 fa
0x0c067fff8210: fa fa fd fd fd fa fa fa fd fd fd fa fa fa fd fd
=>0x0c067fff8220: fd fa fa fa fd fd fd fa fa fa 00 00 00[fa]fa fa
0x0c067fff8230: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8240: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8250: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8260: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8270: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==26009==ABORTING
Now the proper fix for this would be to convert both functions to return
a `size_t` instead of an `int`. But given that this commit may be part
of a security release, let's instead do the minimal viable fix and die
in case we see an overflow.
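The check boils down to a helper along the lines of cast_size_t_to_int()
in git-compat-util.h, which the width code can call right before
narrowing the accumulated `size_t` to the function's `int` return type:
static inline int cast_size_t_to_int(size_t a)
{
        if (a > INT_MAX)
                die("number too large to represent as int on this platform: %"PRIuMAX,
                    (uintmax_t)a);
        return (int)a;
}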
Add a test that would have previously caused us to crash.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `utf8_strnwidth()` function calls `utf8_width()` in a loop and adds
its returned width to the end result. `utf8_width()` can return `-1`
though in case it reads a control character, which means that the
computed string width is going to be wrong. In the worst case where
there are more control characters than non-control characters, we may
even return a negative string width.
Fix this bug by treating control characters as having zero width.
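A sketch of the accumulation loop after the fix, assuming utf8_width()
from utf8.h (the surrounding variables are illustrative):
while (string && string < end) {
        int glyph_width = utf8_width(&string, NULL);
        if (glyph_width > 0)
                width += glyph_width; /* control characters add zero width */
}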
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `utf8_strnwidth()` function accepts an optional string length as
input parameter. This parameter can either be set to `-1`, in which case
we call `strlen()` on the input, or to a positive integer that indicates
a precomputed length, which callers typically compute by calling
`strlen()` at some point themselves.
The input parameter is an `int` though, whereas `strlen()` returns a
`size_t`. This can lead to implementation-defined behaviour when the
`size_t` cannot be represented by the `int`. In the general case this
leads to wrap-around and thus to negative string sizes, which certainly
does not result in well-defined behaviour.
Fix this by accepting a `size_t` instead of an `int` as string length.
While this takes away the ability of callers to simply pass in `-1` as
the string length, it is trivial enough to convert them to pass in
`strlen()` instead.
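The resulting signature looks like this; the before/after call sites are
illustrative:
int utf8_strnwidth(const char *string, size_t len, int skip_ansi);
/* before: utf8_strnwidth(name, -1, 1);           */
/* after:  utf8_strnwidth(name, strlen(name), 1); */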
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `%w(width,indent1,indent2)` formatting directive can be used to
rewrap text to a specific width and is designed after git-shortlog(1)'s
`-w` parameter. While the three parameters are all stored as `size_t`
internally, `strbuf_add_wrapped_text()` accepts integers as input. As a
result, the cast integers may overflow. As these now-negative integers
are later on passed to `strbuf_addchars()`, we will ultimately run into
implementation-defined behaviour due to casting a negative number back
to `size_t` again. On my platform, this results in trying to allocate
9000 petabytes of memory.
Fix this overflow by using `cast_size_t_to_int()` so that we reject
inputs that cannot be represented as an integer.
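Sketch of the guarded call, assuming cast_size_t_to_int() from
git-compat-util.h and the current `int` parameters of
strbuf_add_wrapped_text() (the variable names are illustrative):
strbuf_add_wrapped_text(sb, msg,
                        cast_size_t_to_int(indent1),
                        cast_size_t_to_int(indent2),
                        cast_size_t_to_int(width));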
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a formatting directive has a `+` or ` ` after the `%`, then we add
either a line feed or space if the placeholder expands to a non-empty
string. In specific cases though this logic doesn't work as expected,
and we try to add the character even in the case where the formatting
directive is empty.
One such pattern is `%w(1)%+d%+w(2)`. `%+d` expands to reference names
pointing to a certain commit, like in `git log --decorate`. For a tagged
commit this would for example expand to `\n (tag: v1.0.0)`, which has a
leading newline due to the `+` modifier and a space added by `%d`. Now
the second wrapping directive will cause us to rewrap the text to
`\n(tag:\nv1.0.0)`, which is one byte shorter due to the missing leading
space. The code that handles the `+` magic now notices that the length
has changed and will thus try to insert a leading line feed at the
original position. But as the string was shortened, the original
position is past the buffer's boundary and thus we die with an error.
Now there are two issues here:
1. We check whether the buffer length has changed, not whether it
has been extended. This causes us to try and add the character
past the string boundary.
2. The current logic does not make any sense whatsoever. When the
string got expanded due to the rewrap, putting the separator into
the original position is likely to put it somewhere into the
middle of the rewrapped contents.
It is debatable whether `%+w()` makes any sense in the first place.
Strictly speaking, the placeholder never expands to a non-empty string,
and consequently we shouldn't ever accept this combination. We thus
fix the bug by simply refusing `%+w()`.
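Conceptually the fix amounts to a check of this shape early in the
placeholder expansion; the identifiers are illustrative rather than the
actual pretty.c code:
if (magic != NO_MAGIC && placeholder[0] == 'w')
        return 0; /* refuse %+w() and friends: %w() never expands to text */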
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
An out-of-bounds read can be triggered when parsing an incomplete
padding format string passed via `--pretty=format` or in Git archives
when files are marked with the `export-subst` gitattribute.
This bug has existed since we introduced support for truncating output
via the `trunc` keyword in a7f01c6b4d (pretty: support truncating in %>, %<
and %><, 2013-04-19). Before this commit, we used to find the end of the
formatting string by using strchr(3P). This function returns a `NULL`
pointer in case the character in question wasn't found. The subsequent
check whether any character was found thus simply checked the returned
pointer. After the commit we switched to strcspn(3P) though, which only
returns the offset to the first found character or to the trailing NUL
byte. As the end pointer is now computed by adding the offset to the
start pointer it won't be `NULL` anymore, and as a consequence the check
doesn't do anything anymore.
The out-of-bounds data that is being read can in fact end up in the
formatted string. As a consequence, it is possible to leak memory
contents either by calling git-log(1) or via git-archive(1) when any of
the archived files is marked with the `export-subst` gitattribute.
==10888==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000000398 at pc 0x7f0356047cb2 bp 0x7fff3ffb95d0 sp 0x7fff3ffb8d78
READ of size 1 at 0x602000000398 thread T0
#0 0x7f0356047cb1 in __interceptor_strchrnul /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:725
#1 0x563b7cec9a43 in strbuf_expand strbuf.c:417
#2 0x563b7cda7060 in repo_format_commit_message pretty.c:1869
#3 0x563b7cda8d0f in pretty_print_commit pretty.c:2161
#4 0x563b7cca04c8 in show_log log-tree.c:781
#5 0x563b7cca36ba in log_tree_commit log-tree.c:1117
#6 0x563b7c927ed5 in cmd_log_walk_no_free builtin/log.c:508
#7 0x563b7c92835b in cmd_log_walk builtin/log.c:549
#8 0x563b7c92b1a2 in cmd_log builtin/log.c:883
#9 0x563b7c802993 in run_builtin git.c:466
#10 0x563b7c803397 in handle_builtin git.c:721
#11 0x563b7c803b07 in run_argv git.c:788
#12 0x563b7c8048a7 in cmd_main git.c:923
#13 0x563b7ca99682 in main common-main.c:57
#14 0x7f0355e3c28f (/usr/lib/libc.so.6+0x2328f)
#15 0x7f0355e3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
#16 0x563b7c7fe0e4 in _start ../sysdeps/x86_64/start.S:115
0x602000000398 is located 0 bytes to the right of 8-byte region [0x602000000390,0x602000000398)
allocated by thread T0 here:
#0 0x7f0356072faa in __interceptor_strdup /usr/src/debug/gcc/libsanitizer/asan/asan_interceptors.cpp:439
#1 0x563b7cf7317c in xstrdup wrapper.c:39
#2 0x563b7cd9a06a in save_user_format pretty.c:40
#3 0x563b7cd9b3e5 in get_commit_format pretty.c:173
#4 0x563b7ce54ea0 in handle_revision_opt revision.c:2456
#5 0x563b7ce597c9 in setup_revisions revision.c:2850
#6 0x563b7c9269e0 in cmd_log_init_finish builtin/log.c:269
#7 0x563b7c927362 in cmd_log_init builtin/log.c:348
#8 0x563b7c92b193 in cmd_log builtin/log.c:882
#9 0x563b7c802993 in run_builtin git.c:466
#10 0x563b7c803397 in handle_builtin git.c:721
#11 0x563b7c803b07 in run_argv git.c:788
#12 0x563b7c8048a7 in cmd_main git.c:923
#13 0x563b7ca99682 in main common-main.c:57
#14 0x7f0355e3c28f (/usr/lib/libc.so.6+0x2328f)
#15 0x7f0355e3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
#16 0x563b7c7fe0e4 in _start ../sysdeps/x86_64/start.S:115
SUMMARY: AddressSanitizer: heap-buffer-overflow /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:725 in __interceptor_strchrnul
Shadow bytes around the buggy address:
0x0c047fff8020: fa fa fd fd fa fa 00 06 fa fa 05 fa fa fa fd fd
0x0c047fff8030: fa fa 00 02 fa fa 06 fa fa fa 05 fa fa fa fd fd
0x0c047fff8040: fa fa 00 07 fa fa 03 fa fa fa fd fd fa fa 00 00
0x0c047fff8050: fa fa 00 01 fa fa fd fd fa fa 00 00 fa fa 00 01
0x0c047fff8060: fa fa 00 06 fa fa 00 06 fa fa 05 fa fa fa 05 fa
=>0x0c047fff8070: fa fa 00[fa]fa fa fd fa fa fa fd fd fa fa fd fd
0x0c047fff8080: fa fa fd fd fa fa 00 00 fa fa 00 fa fa fa fd fa
0x0c047fff8090: fa fa fd fd fa fa 00 00 fa fa fa fa fa fa fa fa
0x0c047fff80a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff80b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff80c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==10888==ABORTING
Fix this bug by checking whether `end` points at the trailing NUL byte.
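The check is a one-liner once we know that strcspn(3P) stops at the
trailing NUL byte when the character is absent (sketch; variable names
follow the description above):
end = start + strcspn(start, ",)");
if (!*end)
        return 0; /* no closing ')': incomplete directive, do not expand */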
Add a test which catches this out-of-bounds read and which demonstrates
that we used to write out-of-bounds data into the formatted message.
Reported-by: Markus Vervier <markus.vervier@x41-dsec.de>
Original-patch-by: Markus Vervier <markus.vervier@x41-dsec.de>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With the `%>>(<N>)` pretty formatter, you can ask git-log(1) et al to
steal spaces. To do so we need to look ahead of the next token to see
whether there are spaces there. This loop takes into account ANSI
sequences that end with an `m`, and if it finds any it will skip them
until it finds the first space. While doing so it does not take the
buffer's limits into account, though, and can easily do an out-of-bounds
read.
Add a test that hits this behaviour. While we don't have an easy way to
verify this, the test causes the following failure when run with
`SANITIZE=address`:
==37941==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x603000000baf at pc 0x55ba6f88e0d0 bp 0x7ffc84c50d20 sp 0x7ffc84c50d10
READ of size 1 at 0x603000000baf thread T0
#0 0x55ba6f88e0cf in format_and_pad_commit pretty.c:1712
#1 0x55ba6f88e7b4 in format_commit_item pretty.c:1801
#2 0x55ba6f9b1ae4 in strbuf_expand strbuf.c:429
#3 0x55ba6f88f020 in repo_format_commit_message pretty.c:1869
#4 0x55ba6f890ccf in pretty_print_commit pretty.c:2161
#5 0x55ba6f7884c8 in show_log log-tree.c:781
#6 0x55ba6f78b6ba in log_tree_commit log-tree.c:1117
#7 0x55ba6f40fed5 in cmd_log_walk_no_free builtin/log.c:508
#8 0x55ba6f41035b in cmd_log_walk builtin/log.c:549
#9 0x55ba6f4131a2 in cmd_log builtin/log.c:883
#10 0x55ba6f2ea993 in run_builtin git.c:466
#11 0x55ba6f2eb397 in handle_builtin git.c:721
#12 0x55ba6f2ebb07 in run_argv git.c:788
#13 0x55ba6f2ec8a7 in cmd_main git.c:923
#14 0x55ba6f581682 in main common-main.c:57
#15 0x7f2d08c3c28f (/usr/lib/libc.so.6+0x2328f)
#16 0x7f2d08c3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
#17 0x55ba6f2e60e4 in _start ../sysdeps/x86_64/start.S:115
0x603000000baf is located 1 bytes to the left of 24-byte region [0x603000000bb0,0x603000000bc8)
allocated by thread T0 here:
#0 0x7f2d08ebe7ea in __interceptor_realloc /usr/src/debug/gcc/libsanitizer/asan/asan_malloc_linux.cpp:85
#1 0x55ba6fa5b494 in xrealloc wrapper.c:136
#2 0x55ba6f9aefdc in strbuf_grow strbuf.c:99
#3 0x55ba6f9b0a06 in strbuf_add strbuf.c:298
#4 0x55ba6f9b1a25 in strbuf_expand strbuf.c:418
#5 0x55ba6f88f020 in repo_format_commit_message pretty.c:1869
#6 0x55ba6f890ccf in pretty_print_commit pretty.c:2161
#7 0x55ba6f7884c8 in show_log log-tree.c:781
#8 0x55ba6f78b6ba in log_tree_commit log-tree.c:1117
#9 0x55ba6f40fed5 in cmd_log_walk_no_free builtin/log.c:508
#10 0x55ba6f41035b in cmd_log_walk builtin/log.c:549
#11 0x55ba6f4131a2 in cmd_log builtin/log.c:883
#12 0x55ba6f2ea993 in run_builtin git.c:466
#13 0x55ba6f2eb397 in handle_builtin git.c:721
#14 0x55ba6f2ebb07 in run_argv git.c:788
#15 0x55ba6f2ec8a7 in cmd_main git.c:923
#16 0x55ba6f581682 in main common-main.c:57
#17 0x7f2d08c3c28f (/usr/lib/libc.so.6+0x2328f)
#18 0x7f2d08c3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
#19 0x55ba6f2e60e4 in _start ../sysdeps/x86_64/start.S:115
SUMMARY: AddressSanitizer: heap-buffer-overflow pretty.c:1712 in format_and_pad_commit
Shadow bytes around the buggy address:
0x0c067fff8120: fa fa fd fd fd fa fa fa fd fd fd fa fa fa fd fd
0x0c067fff8130: fd fd fa fa fd fd fd fd fa fa fd fd fd fa fa fa
0x0c067fff8140: fd fd fd fa fa fa fd fd fd fa fa fa fd fd fd fa
0x0c067fff8150: fa fa fd fd fd fd fa fa 00 00 00 fa fa fa fd fd
0x0c067fff8160: fd fa fa fa fd fd fd fa fa fa fd fd fd fa fa fa
=>0x0c067fff8170: fd fd fd fa fa[fa]00 00 00 fa fa fa 00 00 00 fa
0x0c067fff8180: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8190: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff81a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff81b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff81c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Luckily enough, this would only cause us to copy the out-of-bounds data
into the formatted commit in case we really had an ANSI sequence
preceding our buffer. So this bug likely has no security consequences.
Fix it regardless by not traversing past the buffer's start.
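Illustratively, the backwards scan for the ESC byte that starts such an
ANSI sequence has to be bounded by the buffer's start (the variable
names are made up for this sketch):
while (p > buf && *p != '\033')
        p--; /* never steps before the start of the formatted buffer */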
Reported-by: Patrick Steinhardt <ps@pks.im>
Reported-by: Eric Sesterhenn <eric.sesterhenn@x41-dsec.de>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When using a padding specifier in the pretty format passed to git-log(1)
we need to calculate the string length in several places. These string
lengths are stored in `int`s though, which means that they can easily
overflow when the input length exceeds 2GB. This can ultimately lead to
an out-of-bounds write when these are used in a call to memcpy(3P):
==8340==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7f1ec62f97fe at pc 0x7f2127e5f427 bp 0x7ffd3bd63de0 sp 0x7ffd3bd63588
WRITE of size 1 at 0x7f1ec62f97fe thread T0
#0 0x7f2127e5f426 in __interceptor_memcpy /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:827
#1 0x5628e96aa605 in format_and_pad_commit pretty.c:1762
#2 0x5628e96aa7f4 in format_commit_item pretty.c:1801
#3 0x5628e97cdb24 in strbuf_expand strbuf.c:429
#4 0x5628e96ab060 in repo_format_commit_message pretty.c:1869
#5 0x5628e96acd0f in pretty_print_commit pretty.c:2161
#6 0x5628e95a44c8 in show_log log-tree.c:781
#7 0x5628e95a76ba in log_tree_commit log-tree.c:1117
#8 0x5628e922bed5 in cmd_log_walk_no_free builtin/log.c:508
#9 0x5628e922c35b in cmd_log_walk builtin/log.c:549
#10 0x5628e922f1a2 in cmd_log builtin/log.c:883
#11 0x5628e9106993 in run_builtin git.c:466
#12 0x5628e9107397 in handle_builtin git.c:721
#13 0x5628e9107b07 in run_argv git.c:788
#14 0x5628e91088a7 in cmd_main git.c:923
#15 0x5628e939d682 in main common-main.c:57
#16 0x7f2127c3c28f (/usr/lib/libc.so.6+0x2328f)
#17 0x7f2127c3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
#18 0x5628e91020e4 in _start ../sysdeps/x86_64/start.S:115
0x7f1ec62f97fe is located 2 bytes to the left of 4831838265-byte region [0x7f1ec62f9800,0x7f1fe62f9839)
allocated by thread T0 here:
#0 0x7f2127ebe7ea in __interceptor_realloc /usr/src/debug/gcc/libsanitizer/asan/asan_malloc_linux.cpp:85
#1 0x5628e98774d4 in xrealloc wrapper.c:136
#2 0x5628e97cb01c in strbuf_grow strbuf.c:99
#3 0x5628e97ccd42 in strbuf_addchars strbuf.c:327
#4 0x5628e96aa55c in format_and_pad_commit pretty.c:1761
#5 0x5628e96aa7f4 in format_commit_item pretty.c:1801
#6 0x5628e97cdb24 in strbuf_expand strbuf.c:429
#7 0x5628e96ab060 in repo_format_commit_message pretty.c:1869
#8 0x5628e96acd0f in pretty_print_commit pretty.c:2161
#9 0x5628e95a44c8 in show_log log-tree.c:781
#10 0x5628e95a76ba in log_tree_commit log-tree.c:1117
#11 0x5628e922bed5 in cmd_log_walk_no_free builtin/log.c:508
#12 0x5628e922c35b in cmd_log_walk builtin/log.c:549
#13 0x5628e922f1a2 in cmd_log builtin/log.c:883
#14 0x5628e9106993 in run_builtin git.c:466
#15 0x5628e9107397 in handle_builtin git.c:721
#16 0x5628e9107b07 in run_argv git.c:788
#17 0x5628e91088a7 in cmd_main git.c:923
#18 0x5628e939d682 in main common-main.c:57
#19 0x7f2127c3c28f (/usr/lib/libc.so.6+0x2328f)
#20 0x7f2127c3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
#21 0x5628e91020e4 in _start ../sysdeps/x86_64/start.S:115
SUMMARY: AddressSanitizer: heap-buffer-overflow /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:827 in __interceptor_memcpy
Shadow bytes around the buggy address:
0x0fe458c572a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0fe458c572b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0fe458c572c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0fe458c572d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0fe458c572e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0fe458c572f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa[fa]
0x0fe458c57300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0fe458c57310: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0fe458c57320: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0fe458c57330: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0fe458c57340: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==8340==ABORTING
The pretty format can also be used in `git archive` operations via the
`export-subst` attribute. In our opinion this is what makes this a
critical issue in the context of Git forges which allow downloading
archives of user-supplied Git repositories.
Fix this vulnerability by using `size_t` instead of `int` to track the
string lengths. Add tests which detect this vulnerability when Git is
compiled with the address sanitizer.
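Sketch of the hardened length handling, assuming the st_add() helper
from git-compat-util.h (the variable names are illustrative):
size_t msg_len = strlen(msg);
size_t total = st_add(msg_len, padding); /* dies on overflow instead of wrapping */
strbuf_grow(sb, total);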
Reported-by: Joern Schneeweisz <jschneeweisz@gitlab.com>
Original-patch-by: Joern Schneeweisz <jschneeweisz@gitlab.com>
Modified-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Allow tests that assume a 64-bit `size_t` to be skipped on 32-bit
platforms, regardless of the size of `long`.
This imitates the `LONG_IS_64BIT` prerequisite.
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As with the preceding commit, start ignoring gitattributes files
that are overly large to protect us against out-of-bounds reads and
writes caused by integer overflows. Unfortunately, we cannot just define
"overly large" in terms of any preexisting limits in the codebase.
Instead, we choose a very conservative limit of 100MB. This is plenty of
room for specifying gitattributes, and incidentally it is also the limit
for blob sizes for GitHub. While we don't want GitHub to dictate limits
here, it is still sensible to use this fact for an informed decision
given that it is hosting a huge set of repositories. Furthermore, over
at GitLab we scanned a subset of repositories for their root-level
attribute files. We found that 80% of them have a gitattributes file
smaller than 100kB, 99.99% have one smaller than 1MB, and only a single
repository had one that was almost 3MB in size. So enforcing a limit of
100MB seems to give us ample headroom.
With this limit in place we can be reasonably sure that there is no easy
way to exploit the gitattributes file via integer overflows anymore.
Furthermore, it protects us against resource exhaustion caused by
allocating the in-memory data structures required to represent the
parsed attributes.
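Sketch of the size check; the constant name follows the limit described
above but should be treated as an assumption:
#define ATTR_MAX_FILE_SIZE (100 * 1024 * 1024)
if (size > ATTR_MAX_FILE_SIZE) {
        warning("ignoring overly large gitattributes file '%s'", path);
        return NULL;
}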
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are two different code paths to read gitattributes: once via a
file, and once via the index. These two paths used to behave differently
because when reading attributes from a file, we used fgets(3P) with a
buffer size of 2kB. Consequently, we silently truncate lines that are
longer than that and then parse the remainder of the line as a new
pattern. It goes without saying that this is entirely unexpected, but
it's even worse that the behaviour depends on how the gitattributes are
parsed.
While this is simply wrong, the silent truncation saves us from the
recently discovered vulnerabilities that can cause out-of-bounds writes
or reads with unreasonably long lines due to integer overflows. As the
common path is to read gitattributes via the worktree file instead of
via the index, we can assume that any gitattributes file that had lines
longer than that is already broken anyway. So instead of lifting the
limit here, we can double down on it to fix the vulnerabilities.
Introduce an explicit line length limit of 2kB that is shared across all
paths that read attributes and ignore any line that hits this limit
while printing a warning.
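Sketch of the shared per-line check; again, the constant name is an
assumption based on the 2kB limit described above:
#define ATTR_MAX_LINE_LENGTH 2048
if (strlen(line) >= ATTR_MAX_LINE_LENGTH) {
        warning("ignoring overly long attributes line %d", lineno);
        return;
}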
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When reading attributes from a file we use fgets(3P) with a buffer size
of 2048 bytes. This means that as soon as a line exceeds the buffer size
we split it up into multiple parts and parse each of them as a separate
pattern line. This is of course not what the user intended, and even
worse the behaviour is inconsistent with how we read attributes from the
index.
Fix this bug by converting the code to use `strbuf_getline()` instead.
This will indeed read in the whole line, which may theoretically lead to
an out-of-memory situation when the gitattributes file is huge. We're
about to reject any gitattributes files larger than 100MB in the next
commit though, which makes this less of a concern.
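Sketch of the conversion, assuming Git's strbuf API; the call to the
line handler is illustrative:
struct strbuf buf = STRBUF_INIT;
while (strbuf_getline(&buf, fp) != EOF)
        handle_attr_line(res, buf.buf, path, ++lineno, flags);
strbuf_release(&buf);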
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When parsing an attributes line, we need to allocate an array that holds
all attributes specified for the given file pattern. The calculation to
determine the number of bytes that need to be allocated was prone to an
overflow though when there was an unreasonable amount of attributes.
Harden the allocation by instead using the `st_` helper functions that
cause us to die when we hit an integer overflow.
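Sketch of the hardened allocation, assuming st_add()/st_mult() from
git-compat-util.h (the structure names mirror attr.c but are meant as an
illustration):
struct match_attr *res =
        xcalloc(1, st_add(sizeof(*res),
                          st_mult(num_attr, sizeof(struct attr_state))));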
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Attributes have a field that tracks the position in the `all_attrs`
array they're stored inside. This field gets set via `hashmap_get_size`
when adding the attribute to the global map of attributes. But while the
field is of type `int`, the value returned by `hashmap_get_size` is an
`unsigned int`. It can thus happen that the value overflows, in which
case we would dereference the `all_attrs` array at an out-of-bounds index.
We do have a sanity check for this overflow via an assert that verifies
the index matches the new hashmap's size. But asserts are not a proper
mechanism to protect against such overflows as they may not in fact
be compiled into production code.
Fix this by using an `unsigned int` to track the index and convert the
assert to a call to `die()`.
Reported-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `struct attr_stack` tracks the stack of all patterns together with
their attributes. When parsing a gitattributes file that has more than
2^31 such patterns though we may trigger multiple out-of-bounds reads on
64 bit platforms. This is because while the `num_matches` variable is an
unsigned integer, we always use a signed integer to iterate over them.
I have not been able to reproduce this issue due to memory constraints
on my systems. But despite the out-of-bounds reads, the worst thing that
can seemingly happen is to call free(3P) with a garbage pointer when
calling `attr_stack_free()`.
Fix this bug by using unsigned integers to iterate over the array. While
this makes the iteration somewhat awkward when iterating in reverse, it
is at least better than knowingly running into an out-of-bounds read.
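The reverse iteration with an unsigned index then takes the usual shape
(field names are illustrative):
for (size_t i = stack->num_matches; i > 0; i--)
        free(stack->attrs[i - 1]); /* i - 1 is always in bounds */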
While at it, convert the call to `ALLOC_GROW` to use `ALLOC_GROW_BY`
instead.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is possible to trigger an integer overflow when parsing attribute
names when there are more than 2^31 of them for a single pattern. This
can either lead to us dying due to trying to request too many bytes:
blob=$(perl -e 'print "f" . " a=" x 2147483649' | git hash-object -w --stdin)
git update-index --add --cacheinfo 100644,$blob,.gitattributes
git attr-check --all file
=================================================================
==1022==ERROR: AddressSanitizer: requested allocation size 0xfffffff800000032 (0xfffffff800001038 after adjustments for alignment, red zones etc.) exceeds maximum supported size of 0x10000000000 (thread T0)
#0 0x7fd3efabf411 in __interceptor_calloc /usr/src/debug/gcc/libsanitizer/asan/asan_malloc_linux.cpp:77
#1 0x5563a0a1e3d3 in xcalloc wrapper.c:150
#2 0x5563a058d005 in parse_attr_line attr.c:384
#3 0x5563a058e661 in handle_attr_line attr.c:660
#4 0x5563a058eddb in read_attr_from_index attr.c:769
#5 0x5563a058ef12 in read_attr attr.c:797
#6 0x5563a058f24c in bootstrap_attr_stack attr.c:867
#7 0x5563a058f4a3 in prepare_attr_stack attr.c:902
#8 0x5563a05905da in collect_some_attrs attr.c:1097
#9 0x5563a059093d in git_all_attrs attr.c:1128
#10 0x5563a02f636e in check_attr builtin/check-attr.c:67
#11 0x5563a02f6c12 in cmd_check_attr builtin/check-attr.c:183
#12 0x5563a02aa993 in run_builtin git.c:466
#13 0x5563a02ab397 in handle_builtin git.c:721
#14 0x5563a02abb2b in run_argv git.c:788
#15 0x5563a02ac991 in cmd_main git.c:926
#16 0x5563a05432bd in main common-main.c:57
#17 0x7fd3ef82228f (/usr/lib/libc.so.6+0x2328f)
==1022==HINT: if you don't care about these errors you may set allocator_may_return_null=1
SUMMARY: AddressSanitizer: allocation-size-too-big /usr/src/debug/gcc/libsanitizer/asan/asan_malloc_linux.cpp:77 in __interceptor_calloc
==1022==ABORTING
Or, much worse, it can lead to an out-of-bounds write because we
underallocate and then memcpy(3P) into an array:
perl -e '
print "A " . "\rh="x2000000000;
print "\rh="x2000000000;
print "\rh="x294967294 . "\n"
' >.gitattributes
git add .gitattributes
git commit -am "evil attributes"
$ git clone --quiet /path/to/repo
=================================================================
==15062==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000002550 at pc 0x5555559884d5 bp 0x7fffffffbc60 sp 0x7fffffffbc58
WRITE of size 8 at 0x602000002550 thread T0
#0 0x5555559884d4 in parse_attr_line attr.c:393
#1 0x5555559884d4 in handle_attr_line attr.c:660
#2 0x555555988902 in read_attr_from_index attr.c:784
#3 0x555555988902 in read_attr_from_index attr.c:747
#4 0x555555988a1d in read_attr attr.c:800
#5 0x555555989b0c in bootstrap_attr_stack attr.c:882
#6 0x555555989b0c in prepare_attr_stack attr.c:917
#7 0x555555989b0c in collect_some_attrs attr.c:1112
#8 0x55555598b141 in git_check_attr attr.c:1126
#9 0x555555a13004 in convert_attrs convert.c:1311
#10 0x555555a95e04 in checkout_entry_ca entry.c:553
#11 0x555555d58bf6 in checkout_entry entry.h:42
#12 0x555555d58bf6 in check_updates unpack-trees.c:480
#13 0x555555d5eb55 in unpack_trees unpack-trees.c:2040
#14 0x555555785ab7 in checkout builtin/clone.c:724
#15 0x555555785ab7 in cmd_clone builtin/clone.c:1384
#16 0x55555572443c in run_builtin git.c:466
#17 0x55555572443c in handle_builtin git.c:721
#18 0x555555727872 in run_argv git.c:788
#19 0x555555727872 in cmd_main git.c:926
#20 0x555555721fa0 in main common-main.c:57
#21 0x7ffff73f1d09 in __libc_start_main ../csu/libc-start.c:308
#22 0x555555723f39 in _start (git+0x1cff39)
0x602000002552 is located 0 bytes to the right of 2-byte region [0x602000002550,0x602000002552) allocated by thread T0 here:
#0 0x7ffff768c037 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x555555d7fff7 in xcalloc wrapper.c:150
#2 0x55555598815f in parse_attr_line attr.c:384
#3 0x55555598815f in handle_attr_line attr.c:660
#4 0x555555988902 in read_attr_from_index attr.c:784
#5 0x555555988902 in read_attr_from_index attr.c:747
#6 0x555555988a1d in read_attr attr.c:800
#7 0x555555989b0c in bootstrap_attr_stack attr.c:882
#8 0x555555989b0c in prepare_attr_stack attr.c:917
#9 0x555555989b0c in collect_some_attrs attr.c:1112
#10 0x55555598b141 in git_check_attr attr.c:1126
#11 0x555555a13004 in convert_attrs convert.c:1311
#12 0x555555a95e04 in checkout_entry_ca entry.c:553
#13 0x555555d58bf6 in checkout_entry entry.h:42
#14 0x555555d58bf6 in check_updates unpack-trees.c:480
#15 0x555555d5eb55 in unpack_trees unpack-trees.c:2040
#16 0x555555785ab7 in checkout builtin/clone.c:724
#17 0x555555785ab7 in cmd_clone builtin/clone.c:1384
#18 0x55555572443c in run_builtin git.c:466
#19 0x55555572443c in handle_builtin git.c:721
#20 0x555555727872 in run_argv git.c:788
#21 0x555555727872 in cmd_main git.c:926
#22 0x555555721fa0 in main common-main.c:57
#23 0x7ffff73f1d09 in __libc_start_main ../csu/libc-start.c:308
SUMMARY: AddressSanitizer: heap-buffer-overflow attr.c:393 in parse_attr_line
Shadow bytes around the buggy address:
0x0c047fff8450: fa fa 00 02 fa fa 00 07 fa fa fd fd fa fa 00 00
0x0c047fff8460: fa fa 02 fa fa fa fd fd fa fa 00 06 fa fa 05 fa
0x0c047fff8470: fa fa fd fd fa fa 00 02 fa fa 06 fa fa fa 05 fa
0x0c047fff8480: fa fa 07 fa fa fa fd fd fa fa 00 01 fa fa 00 02
0x0c047fff8490: fa fa 00 03 fa fa 00 fa fa fa 00 01 fa fa 00 03
=>0x0c047fff84a0: fa fa 00 01 fa fa 00 02 fa fa[02]fa fa fa fa fa
0x0c047fff84b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff84c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff84d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff84e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff84f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==15062==ABORTING
Fix this bug by using `size_t` instead to count the number of attributes
so that this value cannot reasonably overflow without us running out of
memory first.
Reported-by: Markus Vervier <markus.vervier@x41-dsec.de>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is possible to trigger an integer overflow when parsing attribute
names that are longer than 2^31 bytes because we assign the result of
strlen(3P) to an `int` instead of to a `size_t`. This can lead to an
abort in vsnprintf(3P) with the following reproducer:
blob=$(perl -e 'print "A " . "B"x2147483648 . "\n"' | git hash-object -w --stdin)
git update-index --add --cacheinfo 100644,$blob,.gitattributes
git check-attr --all path
BUG: strbuf.c:400: your vsnprintf is broken (returned -1)
But furthermore, assuming that the attribute name is even longer than
that, it can cause us to silently truncate the attribute and thus lead
to wrong results.
Fix this integer overflow by using a `size_t` instead. This fixes the
silent truncation of attribute names, but it only partially fixes the
BUG we hit: even though the initial BUG is fixed, we can still hit a BUG
when parsing invalid attribute lines via `report_invalid_attr()`.
This is due to an underlying design issue in vsnprintf(3P) which only
knows to return an `int`, and thus it may always overflow with large
inputs. This issue is benign though: the worst that can happen is that
the error message is misreported to be either truncated or too long, but
due to the buffer being NUL terminated we wouldn't ever do an
out-of-bounds read here.
Reported-by: Markus Vervier <markus.vervier@x41-dsec.de>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There is an out-of-bounds read possible when parsing gitattributes that
have an attribute that is 2^31+1 bytes long. This is caused by an
integer overflow when we assign the result of strlen(3P) to an `int`,
where we use the wrapped-around value in a subsequent call to
memcpy(3P). The following code reproduces the issue:
blob=$(perl -e 'print "a" x 2147483649 . " attr"' | git hash-object -w --stdin)
git update-index --add --cacheinfo 100644,$blob,.gitattributes
git check-attr --all file
AddressSanitizer:DEADLYSIGNAL
=================================================================
==8451==ERROR: AddressSanitizer: SEGV on unknown address 0x7f93efa00800 (pc 0x7f94f1f8f082 bp 0x7ffddb59b3a0 sp 0x7ffddb59ab28 T0)
==8451==The signal is caused by a READ memory access.
#0 0x7f94f1f8f082 (/usr/lib/libc.so.6+0x176082)
#1 0x7f94f2047d9c in __interceptor_strspn /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:752
#2 0x560e190f7f26 in parse_attr_line attr.c:375
#3 0x560e190f9663 in handle_attr_line attr.c:660
#4 0x560e190f9ddd in read_attr_from_index attr.c:769
#5 0x560e190f9f14 in read_attr attr.c:797
#6 0x560e190fa24e in bootstrap_attr_stack attr.c:867
#7 0x560e190fa4a5 in prepare_attr_stack attr.c:902
#8 0x560e190fb5dc in collect_some_attrs attr.c:1097
#9 0x560e190fb93f in git_all_attrs attr.c:1128
#10 0x560e18e6136e in check_attr builtin/check-attr.c:67
#11 0x560e18e61c12 in cmd_check_attr builtin/check-attr.c:183
#12 0x560e18e15993 in run_builtin git.c:466
#13 0x560e18e16397 in handle_builtin git.c:721
#14 0x560e18e16b2b in run_argv git.c:788
#15 0x560e18e17991 in cmd_main git.c:926
#16 0x560e190ae2bd in main common-main.c:57
#17 0x7f94f1e3c28f (/usr/lib/libc.so.6+0x2328f)
#18 0x7f94f1e3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
#19 0x560e18e110e4 in _start ../sysdeps/x86_64/start.S:115
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (/usr/lib/libc.so.6+0x176082)
==8451==ABORTING
Fix this bug by converting the variable to a `size_t` instead.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function `git_attr_internal()` is called to upsert attributes into
the global map. And while all callers pass a `size_t`, the function
itself accepts an `int` as the attribute name's length. This can lead to
an integer overflow in case the attribute name is longer than `INT_MAX`.
Now this overflow seems harmless as the first thing we do is to call
`attr_name_valid()`, and that function only succeeds in case all chars
in the range of `namelen` match a certain small set of chars. We thus
can't do an out-of-bounds read as NUL is not part of that set and all
strings passed to this function are NUL-terminated. And furthermore, we
wouldn't ever read past the current attribute name anyway due to the
same reason. And if validation fails we will return early.
On the other hand it feels fragile to rely on this behaviour, even more
so given that we pass `namelen` to `FLEX_ALLOC_MEM()`. So let's instead
just do the correct thing here and accept a `size_t` as the name's length.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Explicitly cloning over the "file://" protocol in t7527 in preparation
for merging a security release which will change the default value of
this configuration to be "user".
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Explicitly cloning over the "file://" protocol in t5537 in preparation
for merging a security release which will change the default value of
this configuration to be "user".
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Explicitly cloning over the "file://" protocol in t3206 in preparation
for merging a security release which will change the default value of
this configuration to be "user".
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Explicitly cloning over the "file://" protocol in t7814 in preparation
for merging a security release which will change the default value of
this configuration to be "user".
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Explicitly cloning over the "file://" protocol in t5537 in preparation
for merging a security release which will change the default value of
this configuration to be "user".
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Explicitly cloning over the "file://" protocol in t5516 in preparation
for merging a security release which will change the default value of
this configuration to be "user".
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Explicitly cloning over the "file://" protocol in t3207 in preparation
for merging a security release which will change the default value of
this configuration to be "user".
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Explicitly cloning over the "file://" protocol in t1092 in preparation
for merging a security release which will change the default value of
this configuration to be "user".
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Explicitly cloning over the "file://" protocol in t1092 in preparation
for merging a security release which will change the default value of
this configuration to be "user".
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Explicitly cloning over the "file://" protocol in t1092 in preparation
for merging a security release which will change the default value of
this configuration to be "user".
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Explicitly cloning over the "file://" protocol in t1092 in preparation
for merging a security release which will change the default value of
this configuration to be "user".
Signed-off-by: Taylor Blau <me@ttaylorr.com>
This function improperly uses an int to represent the number of entries
in the resulting argument array. This allows a malicious actor to
intentionally overflow the return value, leading to arbitrary heap
writes.
Because the resulting argv array is typically passed to execv(), it may
be possible to leverage this attack to gain remote code execution on a
victim machine. This was almost certainly the case for certain
configurations of git-shell until the previous commit limited the size
of input it would accept. Other calls to split_cmdline() are typically
limited by the size of argv the OS is willing to hand us, so are
similarly protected.
So this is not strictly fixing a known vulnerability, but is a hardening
of the function that is worth doing to protect against possible unknown
vulnerabilities.
One approach to fixing this would be modifying the signature of
`split_cmdline()` to look something like:
int split_cmdline(char *cmdline, const char ***argv, size_t *argc);
Where the return value of `split_cmdline()` is negative for errors, and
zero otherwise. If non-NULL, the `*argc` pointer is modified to contain
the size of the `**argv` array.
But this implies an absurdly large `argv` array, which is more than
likely larger than the system's argument limit. So even if split_cmdline()
allowed this, it would fail immediately afterwards when we called
execv(). So instead of converting all of `split_cmdline()`'s callers to
work with `size_t` types in this patch, instead pursue the minimal fix
here to prevent ever returning an array with more than INT_MAX entries
in it.
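A sketch of that minimal guard; the error-reporting path is simplified
for illustration:
if (count >= INT_MAX) {
        free(*argv);
        *argv = NULL;
        return -1; /* callers treat negative return values as errors */
}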
Signed-off-by: Kevin Backhouse <kevinbackhouse@github.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
When git-shell is run in interactive mode (which must be enabled by
creating $HOME/git-shell-commands), it reads commands from stdin, one
per line, and executes them.
We read the commands with git_read_line_interactively(), which uses a
strbuf under the hood. That means we'll accept an input of arbitrary
size (limited only by how much heap we can allocate). That creates two
problems:
- the rest of the code is not prepared to handle large inputs. The
most serious issue here is that split_cmdline() uses "int" for most
of its types, which can lead to integer overflow and out-of-bounds
array reads and writes. But even with that fixed, we assume that we
can feed the command name to snprintf() (via xstrfmt()), which is
stuck for historical reasons using "int", and causes it to fail (and
even trigger a BUG() call).
- since the point of git-shell is to take input from untrusted or
semi-trusted clients, it's a mild denial-of-service. We'll allocate
as many bytes as the client sends us (actually twice as many, since
we immediately duplicate the buffer).
We can fix both by just limiting the amount of per-command input we're
willing to receive.
We should also fix split_cmdline(), of course, which is an accident
waiting to happen, but that can come on top. Most calls to
split_cmdline(), including the other one in git-shell, are OK because
they are reading from an OS-provided argv, which is limited in practice.
This patch should eliminate the immediate vulnerabilities.
I picked 4MB as an arbitrary limit. It's big enough that nobody should
ever run into it in practice (since the point is to run the commands via
exec, we're subject to OS limits which are typically much lower). But
it's small enough that allocating it isn't that big a deal.
The code is mostly just swapping out fgets() for the strbuf call, but we
have to add a few niceties like flushing and trimming line endings. We
could simplify things further by putting the buffer on the stack, but
4MB is probably a bit much there. Note that we'll _always_ allocate 4MB,
which for normal, non-malicious requests is more than we would before
this patch. But on the other hand, other git programs are happy to use
96MB for a delta cache. And since we'd never touch most of those pages,
on a lazy-allocating OS like Linux they won't even get allocated to
actual RAM.
The ideal would be a version of strbuf_getline() that accepted a maximum
value. But for a minimal vulnerability fix, let's keep things localized
and simple. We can always refactor further on top.
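A minimal sketch of such a bounded read using Git's strbuf API; the
constant matches the 4MB limit discussed above, but the helper itself is
illustrative rather than the actual shell.c code:
#define MAX_INTERACTIVE_COMMAND (4 * 1024 * 1024)
static int read_command(struct strbuf *line)
{
        int c;
        strbuf_reset(line);
        while ((c = getchar()) != EOF && c != '\n') {
                if (line->len >= MAX_INTERACTIVE_COMMAND)
                        die("invalid command format: input too long");
                strbuf_addch(line, c);
        }
        return (c == EOF && !line->len) ? -1 : 0;
}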
The included test fails in an obvious way with ASan or UBSan (which
notice the integer overflow and out-of-bounds reads). Without them, it
fails in a less obvious way: we may segfault, or we may try to xstrfmt()
a long string, leading to a BUG(). Either way, it fails reliably before
this patch, and passes with it. Note that we don't need an EXPENSIVE
prereq on it. It does take 10-15s to fail before this patch, but with
the new limit, we fail almost immediately (and the perl process
generating 2GB of data exits via SIGPIPE).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
We have no tests of even basic functionality of git-shell. Let's add a
couple of obvious ones. This will serve as a framework for adding tests
for new things we fix, as well as making sure we don't screw anything up
too badly while doing so.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
An earlier patch discussed and fixed a scenario where Git could be used
as a vector to exfiltrate sensitive data through a Docker container when
a potential victim clones a suspicious repository with local submodules
that contain symlinks.
That security hole has since been plugged, but a similar one still
exists. Instead of convincing a would-be victim to clone an embedded
submodule via the "file" protocol, an attacker could convince an
individual to clone a repository that has a submodule pointing to a
valid path on the victim's filesystem.
For example, if an individual (with username "foo") has their home
directory ("/home/foo") stored as a Git repository, then an attacker
could exfiltrate data by convincing a victim to clone a malicious
repository containing a submodule pointing at "/home/foo/.git" with
`--recurse-submodules`. Doing so would expose any sensitive contents
stored in "/home/foo" that are tracked in Git.
For systems (such as Docker) that consider everything outside of the
immediate top-level working directory containing a Dockerfile as
inaccessible to the container (with the exception of volume mounts, and
so on), this is a violation of trust by exposing unexpected contents in
the working copy.
To mitigate the likelihood of this kind of attack, adjust the "file://"
protocol's default policy to be "user" to prevent commands that execute
without user input (including recursive submodule initialization) from
taking place by default.
Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.
Tests that interact with submodules a handful of times use
`test_config_global`.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.
Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead. Test scripts that rely on
submodules throughout use a `git config --global` during a setup test
towards the beginning of the script.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.
Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.
Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead. Test scripts that rely on
submodules throughout use a `git config --global` during a setup test
towards the beginning of the script.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.
Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead. Test scripts that rely on
submodules throughout use a `git config --global` during a setup test
towards the beginning of the script.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.
Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead. Test scripts that rely on
submodules throughout use a `git config --global` during a setup test
towards the beginning of the script.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.
Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead. Test scripts that rely on
submodules throughout use a `git config --global` during a setup test
towards the beginning of the script.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.
Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
To prepare for changing the default value of `protocol.file.allow` to
"user", update the `prolog()` function in lib-submodule-update to allow
submodules to be cloned over the file protocol.
This is used by a handful of submodule-related test scripts, which
themselves will have to tweak the value of `protocol.file.allow` in
certain locations. Those will be done in subsequent commits.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
When cloning a repository with `--local`, Git relies on either making a
hardlink or copy to every file in the "objects" directory of the source
repository. This is done through the callpath `cmd_clone()` ->
`clone_local()` -> `copy_or_link_directory()`.
The way this optimization works is by enumerating every file and
directory recursively in the source repository's `$GIT_DIR/objects`
directory, and then either making a copy or hardlink of each file. The
only exception to this rule is when copying the "alternates" file, in
which case paths are rewritten to be absolute before writing a new
"alternates" file in the destination repo.
One quirk of this implementation is that it dereferences symlinks when
cloning. This behavior was most recently modified in 36596fd2df (clone:
better handle symlinked files at .git/objects/, 2019-07-10), which
attempted to support `--local` clones of repositories with symlinks in
their objects directory in a platform-independent way.
Unfortunately, this behavior of dereferencing symlinks (that is,
creating a hardlink or copy of the source's link target in the
destination repository) can be used as a component in attacking a
victim by inadvertently exposing the contents of files stored outside of
the repository.
Take, for example, a repository that stores a Dockerfile and is used to
build Docker images. When building an image, Docker copies the directory
contents into the VM, and then instructs the VM to execute the
Dockerfile at the root of the copied directory. This protects against
directory traversal attacks by copying symbolic links as-is without
dereferencing them.
That is, if a user has a symlink pointing at their private key material
(where the symlink is present in the same directory as the Dockerfile,
but the key itself is present outside of that directory), the key is
unreadable to a Docker image, since the link will appear broken from the
container's point of view.
This behavior enables an attack whereby a victim is convinced to clone a
repository containing an embedded submodule (with a URL like
"file:///proc/self/cwd/path/to/submodule") which has a symlink pointing
at a path containing sensitive information on the victim's machine. If a
user is tricked into doing this, the contents at the destination of
those symbolic links are exposed to the Docker image at runtime.
One approach to preventing this behavior is to recreate symlinks in the
destination repository. But this is problematic, since symlinks in the
objects directory are not well supported. (One potential problem is that
when sharing, e.g. a "pack" directory via symlinks, different writers
performing garbage collection may consider different sets of objects to
be reachable, enabling a situation whereby garbage collecting one
repository may remove reachable objects in another repository).
Instead, prohibit the local clone optimization when any symlinks are
present in the `$GIT_DIR/objects` directory of the source repository.
Users may clone the repository again by prepending the "file://" scheme
to their clone URL, or by adding the `--no-local` option to their `git
clone` invocation.
The directory iterator used by `copy_or_link_directory()` must no longer
dereference symlinks (i.e., it *must* call `lstat()` instead of `stat()`
in order to discover whether or not there are symlinks present). This has
no bearing on the overall behavior, since we will immediately `die()`
upon encountering a symlink.
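The symlink check itself is small; a sketch of the idea (the exact
message and call site are illustrative):
struct stat st;
if (lstat(path, &st) < 0)
        die_errno("failed to stat '%s'", path);
if (S_ISLNK(st.st_mode))
        die("symlink '%s' found in the source objects directory; "
            "clone with --no-local or a 'file://' URL instead", path);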
Note that t5604.33 suggests that we do support local clones with
symbolic links in the source repository's objects directory, but this
was likely unintentional, or at least did not take into consideration
the problem with sharing parts of the objects directory with symbolic
links at the time. Update this test to reflect which options are and
aren't supported.
Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
As the 'master' front will soon tag a preview and then release
candidates for 2.38, it is unknown if we are going to issue another
maintenance release on the 2.37.x track, but as we have accumulated
enough material there, let's prepare a draft for it.
Even if we end up not tagging 2.37.4, it would help motivated distro
packagers to maintain their slightly older and "more stable" versions.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The auto-stashed local changes created by "git merge --autostash"
were mixed into a conflicted state left in the working tree, which
has been corrected.
* en/merge-unstash-only-on-clean-merge:
merge: only apply autostash when appropriate
Update the version of Ubuntu used for GitHub Actions CI from 18.04
to 22.04.
* ds/github-actions-use-newer-ubuntu:
ci: update 'static-analysis' to Ubuntu 22.04
The preload-index codepath made copies of pathspec to give to
multiple threads, which were left leaked.
* ad/preload-plug-memleak:
preload-index: fix memleak
xcalloc(), imitating calloc(), takes "number of elements of the
array", and "size of a single element", in this order. A call that
does not follow this ordering has been corrected.
* sg/xcalloc-cocci-fix:
promisor-remote: fix xcalloc() argument order
Fix deadlocks between the main Git process and a subprocess spawned
via the pipe_command() API, which could kill "git add -p", which was
recently reimplemented in C.
* jk/pipe-command-nonblock:
pipe_command(): mark stdin descriptor as non-blocking
pipe_command(): handle ENOSPC when writing to a pipe
pipe_command(): avoid xwrite() for writing to pipe
git-compat-util: make MAX_IO_SIZE define globally available
nonblock: support Windows
compat: add function to enable nonblocking pipes
An earlier optimization discarded a tree-object buffer that is
still in use, which has been corrected.
* jk/is-promisor-object-keep-tree-in-use:
is_promisor_object(): fix use-after-free of tree buffer
Documentation for "git add --renormalize" has been improved.
source: <20220810144450.470-2-philipoakley@iee.email>
* po/doc-add-renormalize:
doc add: renormalize is not idempotent for CRCRLF
Fixes to sparse index compatibility work for "reset" and "checkout"
commands.
source: <pull.1312.v3.git.1659985672.gitgitgadget@gmail.com>
* vd/sparse-reset-checkout-fixes:
unpack-trees: unpack new trees as sparse directories
cache.h: create 'index_name_pos_sparse()'
oneway_diff: handle removed sparse directories
checkout: fix nested sparse directory diff in sparse index
"git fsck" reads mode from tree objects but canonicalizes the mode
before passing it to the logic to check object sanity, which has
hidden broken tree objects from the checking logic. This has been
corrected, but to help existing projects with broken tree objects
that they cannot fix retroactively, the severity of anomalies this
code detects has been demoted to "info" for now.
source: <YvQcNpizy9uOZiAz@coredump.intra.peff.net>
* jk/fsck-tree-mode-bits-fix:
fsck: downgrade tree badFilemode to "info"
fsck: actually detect bad file modes in trees
tree-walk: add a mechanism for getting non-canonicalized modes
Platform-specific code that determines if a directory is OK to use
as a repository has been taught to report more details, especially
on Windows.
source: <pull.1286.v2.git.1659965270.gitgitgadget@gmail.com>
* js/safe-directory-plus:
mingw: handle a file owned by the Administrators group correctly
mingw: be more informative when ownership check fails on FAT32
mingw: provide details about unsafe directories' ownership
setup: prepare for more detailed "dubious ownership" messages
setup: fix some formatting
Avoid repeatedly running getconf to ask for the libc version in the
test suite, and instead just ask it once per script.
source: <pull.1311.git.1659620305757.gitgitgadget@gmail.com>
* pw/use-glibc-tunable-for-malloc-optim:
tests: cache glibc version check
Plug memory leaks in the failure code path in the "merge-ort" merge
strategy backend.
source: <pull.1307.v2.git.1659114727.gitgitgadget@gmail.com>
* js/ort-clean-up-after-failed-merge:
merge-ort: do leave trace2 region even if checkout fails
merge-ort: clean up after failed merge
Older gcc with -Wall complains about the universal zero initializer
"struct s = { 0 };" idiom, which makes developers' lives
inconvenient (as -Werror is enabled by DEVELOPER=YesPlease). The
build procedure has been tweaked to help these compilers.
source: <YuQ60ZUPBHAVETD7@coredump.intra.peff.net>
* jk/struct-zero-init-with-older-gcc:
config.mak.dev: squelch -Wno-missing-braces for older gcc
Conditionally allow building Python interpreter on Windows
source: <pull.1306.v2.git.1659109272.gitgitgadget@gmail.com>
* js/mingw-with-python:
mingw: remove unneeded `NO_CURL` directive
mingw: remove unneeded `NO_GETTEXT` directive
windows: include the Python bits when building Git for Windows
Fix build procedure for Windows that uses CMake so that it can pick
up the shell interpreter from local installation location.
source: <pull.1304.git.1658912756815.gitgitgadget@gmail.com>
* ca/unignore-local-installation-on-windows:
cmake: support local installations of git
GitHub Actions scheduled a brownout of Ubuntu 18.04, which canceled all
runs of the 'static-analysis' job in our CI runs. Update to 22.04 to
avoid this as the brownout later turns into a complete deprecation.
The use of 18.04 was set in d051ed77ee (.github/workflows/main.yml: run
static-analysis on bionic, 2021-02-08) due to the lack of Coccinelle
being available on 20.04 (which continues today).
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If a merge failed and we are leaving conflicts in the working directory
for the user to resolve, we should not attempt to apply any autostash.
Further, if we fail to apply the autostash (because either the merge
failed, or the user requested --no-commit), then we should instruct the
user how to apply it later.
Add a testcase verifying we have corrected this behavior.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Pass the number of elements first and their size second, as expected
by xcalloc().
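For illustration (the names 'remotes' and 'nr' are made up; this is not
the exact hunk in 'promisor-remote.c'):

    /* swapped: element size first, count second */
    remotes = xcalloc(sizeof(*remotes), nr);

    /* correct: count first, element size second */
    remotes = xcalloc(nr, sizeof(*remotes));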
Patch generated with:
make SPATCH_FLAGS=--recursive-includes contrib/coccinelle/xcalloc.cocci.patch
Our default SPATCH_FLAGS ('--all-includes') doesn't catch this
transformation by default, unless used in combination with a large-ish
SPATCH_BATCH_SIZE which happens to put 'promisor-remote.c' with a file
that includes 'repository.h' directly in the same batch.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix a memory leak occurring when the pathspec is copied in preload_index.
Direct leak of 8 byte(s) in 8 object(s) allocated from:
#0 0x7f0a353ead47 in __interceptor_malloc (/usr/lib/gcc/x86_64-pc-linux-gnu/11.3.0/libasan.so.6+0xb5d47)
#1 0x55750995e840 in do_xmalloc /home/anthony/src/c/git/wrapper.c:51
#2 0x55750995e840 in xmalloc /home/anthony/src/c/git/wrapper.c:72
#3 0x55750970f824 in copy_pathspec /home/anthony/src/c/git/pathspec.c:684
#4 0x557509717278 in preload_index /home/anthony/src/c/git/preload-index.c:135
#5 0x55750975f21e in refresh_index /home/anthony/src/c/git/read-cache.c:1633
#6 0x55750915b926 in cmd_status builtin/commit.c:1547
#7 0x5575090e1680 in run_builtin /home/anthony/src/c/git/git.c:466
#8 0x5575090e1680 in handle_builtin /home/anthony/src/c/git/git.c:720
#9 0x5575090e284a in run_argv /home/anthony/src/c/git/git.c:787
#10 0x5575090e284a in cmd_main /home/anthony/src/c/git/git.c:920
#11 0x5575090dbf82 in main /home/anthony/src/c/git/common-main.c:56
#12 0x7f0a348230ab (/lib64/libc.so.6+0x290ab)
Signed-off-by: Anthony Delannoy <anthony.2lannoy@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our pipe_command() helper lets you both write to and read from a child
process on its stdin/stdout. It's supposed to work without deadlocks
because we use poll() to check when descriptors are ready for reading or
writing. But there's a bug: if both the data to be written and the data
to be read back exceed the pipe buffer, we'll deadlock.
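For reference, a caller that both feeds input and collects output in a
single call looks roughly like this (a sketch; the command and the
'input'/'input_len' buffer are arbitrary):

    struct child_process cmd = CHILD_PROCESS_INIT;
    struct strbuf out = STRBUF_INIT;

    strvec_push(&cmd.args, "cat");
    if (pipe_command(&cmd, input, input_len, &out, 0, NULL, 0))
            die("pipe_command failed");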
The issue is that the code assumes that if you have, say, a 2MB buffer
to write and poll() tells you that the pipe descriptor is ready for
writing, that calling:
write(cmd->in, buf, 2*1024*1024);
will do a partial write, filling the pipe buffer and then returning what
it did write. And that is what it would do on a socket, but not for a
pipe. When writing to a pipe, at least on Linux, it will block waiting
for the child process to read() more. And now we have a potential
deadlock, because the child may be writing back to us, waiting for us to
read() ourselves.
An easy way to trigger this is:
git -c add.interactive.useBuiltin=true \
-c interactive.diffFilter=cat \
checkout -p HEAD~200
The diff against HEAD~200 will be big, and the filter wants to write all
of it back to us (obviously this is a dummy filter, but in the real
world something like diff-highlight would similarly stream back a big
output).
If you set add.interactive.useBuiltin to false, the problem goes away,
because now we're not using pipe_command() anymore (instead, that part
happens in perl). But this isn't a bug in the interactive code at all.
It's the underlying pipe_command() code which is broken, and has been
all along.
We presumably didn't notice because most calls only do input _or_
output, not both. And the few that do both, like gpg calls, may have
large inputs or outputs, but never both at the same time (e.g., consider
signing, which has a large payload but a small signature comes back).
The obvious fix is to put the descriptor into non-blocking mode, and
indeed, that makes the problem go away. Callers shouldn't need to
care, because they never see the descriptor (they hand us a buffer to
feed into it).
The included test fails reliably on Linux without this patch. Curiously,
it doesn't fail in our Windows CI environment, but has been reported to
do so for individual developers. It should pass in any environment after
this patch (courtesy of the compat/ layers added in the last few
commits).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When write() to a non-blocking pipe fails because the buffer is full,
POSIX says we should see EAGAIN. But our mingw_write() compat layer on
Windows actually returns ENOSPC for this case. This is probably
something we want to correct, but given that we don't plan to use
non-blocking descriptors in a lot of places, we can work around it by
just catching ENOSPC alongside EAGAIN. If we ever do fix mingw_write(),
then this patch can be reverted.
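In code, the write path simply treats the two errno values the same way
(a sketch; 'len' is the return value of the failed write()):

    if (len < 0 && (errno == EAGAIN || errno == ENOSPC)) {
            /* the pipe is full; wait for poll() to report it writable */
            continue;
    }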
We don't actually use a non-blocking pipe yet, so this is still just
preparation.
Helped-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If xwrite() sees an EAGAIN response, it will loop forever until the
write succeeds (or encounters a real error). This is due to ef1cf0167a
(xwrite: poll on non-blocking FDs, 2016-06-26), with the idea that we
won't be surprised by a descriptor unexpectedly set as non-blocking.
But that will make things awkward when we do want a non-blocking
descriptor, and a future patch will switch pipe_command() to using one.
In that case, looping on EAGAIN is bad, because the process on the other
end of the pipe may be waiting on us before doing another read() on the
pipe, which would mean we deadlock.
In practice we're not supposed to ever see EAGAIN here, since poll()
will have just told us the descriptor is ready for writing. But our
Windows emulation of poll() will always return "ready" for writing to a
pipe descriptor! This is due to 94f4d01932 (mingw: workaround for hangs
when sending STDIN, 2020-02-17).
Our best bet in that case is to keep handling other descriptors, as any
read() we do may allow the child command to make forward progress (i.e.,
its write() finishes, and then it read()s from its stdin, freeing up
space in the pipe buffer). This means we might busy-loop between poll()
and write() on Windows if the child command is slow to read our input,
but it's much better than the alternative of deadlocking.
In practice, this busy-looping should be rare:
- for small inputs, we'll just write the whole thing in a single
write() anyway, non-blocking or not
- for larger inputs where the child reads input and then processes it
before writing (e.g., gpg verifying a signature), we may make a few
extra write() calls that get EAGAIN during the initial write, but
once it has taken in the whole input, we'll correctly block waiting
to read back the data.
- for larger inputs where the child process is streaming output back
(like a diff filter), we'll likewise see some extra EAGAINs, but
most of them will be followed immediately by a read(), which will
let the child command make forward progress.
Of course it won't happen at all for now, since we don't yet use a
non-blocking pipe. This is just preparation for when we do.
Helped-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We define MAX_IO_SIZE within wrapper.c, but it's useful for any code
that wants to do a raw write() for whatever reason (say, because they
want different EAGAIN handling). Let's make it available everywhere.
The alternative would be adding xwrite_foo() variants to give callers
more options. But there's really no reason MAX_IO_SIZE needs to be
abstracted away, so this gives callers the most flexibility.
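A caller doing its own raw write() would then clamp each chunk to that
limit, along the lines of (illustrative fragment; 'fd', 'buf' and 'len'
are arbitrary):

    size_t chunk = len < MAX_IO_SIZE ? len : MAX_IO_SIZE;
    ssize_t ret = write(fd, buf, chunk);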
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Implement enable_pipe_nonblock() using the Windows API. This works only
for pipes, but that is sufficient for this limited interface. Despite
the API calls used, it handles both "named" and anonymous pipes from our
pipe() emulation.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We'd like to be able to make some of our pipes nonblocking so that
poll() can be used effectively, but O_NONBLOCK isn't portable. Let's
introduce a compat wrapper so this can be abstracted for each platform.
The interface is as narrow as possible to let platforms do what's
natural there (rather than having to implement fcntl() and a fake
O_NONBLOCK for example, or having to handle other types of descriptors).
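On a platform that does have O_NONBLOCK, the wrapper can be as small as
this (a sketch of the POSIX side, with error handling condensed):

    int enable_pipe_nonblock(int fd)
    {
            int flags = fcntl(fd, F_GETFL);

            if (flags < 0)
                    return -1;
            return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }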
The next commit will add Windows support, at which point we should be
covering all platforms in practice. But if we do find some other
platform without O_NONBLOCK, we'll return ENOSYS. Arguably we could just
trigger a build-time #error in this case, which would catch the problem
earlier. But since we're not planning to use this compat wrapper in many
code paths, a seldom-seen runtime error may be friendlier for such a
platform than blocking compilation completely. Our test suite would
still notice it.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since commit fcc07e980b (is_promisor_object(): free tree buffer after
parsing, 2021-04-13), we'll always free the buffers attached to a
"struct tree" after searching them for promisor links. But there's an
important case where we don't want to do so: if somebody else is already
using the tree!
This can happen during a "rev-list --missing=allow-promisor" traversal
in a partial clone that is missing one or more trees or blobs. The
backtrace for the free looks like this:
#1 free_tree_buffer tree.c:147
#2 add_promisor_object packfile.c:2250
#3 for_each_object_in_pack packfile.c:2190
#4 for_each_packed_object packfile.c:2215
#5 is_promisor_object packfile.c:2272
#6 finish_object__ma builtin/rev-list.c:245
#7 finish_object builtin/rev-list.c:261
#8 show_object builtin/rev-list.c:274
#9 process_blob list-objects.c:63
#10 process_tree_contents list-objects.c:145
#11 process_tree list-objects.c:201
#12 traverse_trees_and_blobs list-objects.c:344
[...]
We're in the middle of walking through the entries of a tree object via
process_tree_contents(). We see a blob (or it could even be another tree
entry) that we don't have, so we call is_promisor_object() to check it.
That function loops over all of the objects in the promisor packfile,
including the tree we're currently walking. When we're done with it
there, we free the tree buffer. But as we return to the walk in
process_tree_contents(), it's still holding on to a pointer to that
buffer, via its tree_desc iterator, and it accesses the freed memory.
Even a trivial use of "--missing=allow-promisor" triggers this problem,
as the included test demonstrates (it's just a vanilla --blob:none
clone).
We can detect this case by only freeing the tree buffer if it was
allocated on our behalf. This is a little tricky since that happens
inside parse_object(), and it doesn't tell us whether the object was
already parsed, or whether it allocated the buffer itself. But by
checking for an already-parsed tree beforehand, we can distinguish the
two cases.
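Condensed, the idea looks like this (a sketch, not the literal patch;
'r' and 'oid' stand for the repository and the object ID being checked):

    struct object *obj = lookup_object(r, &oid);
    int already_parsed = obj && obj->parsed;

    obj = parse_object(r, &oid);
    /* ... scan the object for promisor links ... */
    if (obj && obj->type == OBJ_TREE && !already_parsed)
            free_tree_buffer((struct tree *)obj);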
That feels a little hacky, and does incur an extra lookup in the
object-hash table. But that cost is fairly minimal compared to actually
loading objects (and since we're iterating the whole pack here, we're
likely to be loading most objects, rather than reusing cached results).
It may also be a good direction for this function in general, as there
are other possible optimizations that rely on doing some analysis before
parsing:
- we could detect blobs and avoid reading their contents; they can't
link to other objects, but parse_object() doesn't know that we don't
care about checking their hashes.
- we could avoid allocating object structs entirely for most objects
(since we really only need them in the oidset), which would save
some memory.
- promisor commits could use the commit-graph rather than loading the
object from disk
This commit doesn't do any of those optimizations, but I think it argues
that this direction is reasonable, rather than relying on parse_object()
and trying to teach it to give us more information about whether it
parsed.
The included test fails reliably under SANITIZE=address just when
running "rev-list --missing=allow-promisor". Checking the output isn't
strictly necessary to detect the bug, but it seems like a reasonable
addition given the general lack of coverage for "allow-promisor" in the
test suite.
Reported-by: Andrew Olsen <andrew.olsen@koordinates.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There was a bug in the codepath to upgrade generation information
in commit-graph from v1 to v2 format, which has been corrected.
source: <cover.1657667404.git.me@ttaylorr.com>
* tb/commit-graph-genv2-upgrade-fix:
commit-graph: fix corrupt upgrade from generation v1 to v2
commit-graph: introduce `repo_find_commit_pos_in_graph()`
t5318: demonstrate commit-graph generation v2 corruption
Fix for a bug, introduced in 2.37, that made write-tree fail to write
out a non-existent index as a tree.
source: <20220722212232.833188-1-martin.agren@gmail.com>
* tk/untracked-cache-with-uall:
read-cache: make `do_read_index()` always set up `istate->repo`
"git checkout" miscounted the paths it updated, which has been
corrected.
source: <cover.1657799213.git.matheus.bernardino@usp.br>
* mt/checkout-count-fix:
checkout: fix two bugs on the final count of updated entries
checkout: show bug about failed entries being included in final report
checkout: document bug where delayed checkout counts entries twice
"git p4" did not handle non-ASCII client name well, which has been
corrected.
source: <pull.1285.v3.git.git.1658394440.gitgitgadget@gmail.com>
* kk/p4-client-name-encoding-fix:
git-p4: refactoring of p4CmdList()
git-p4: fix bug with encoding of p4 client name
"git p4" working on UTF-16 files on Windows did not implement
CRLF-to-LF conversion correctly, which has been corrected.
source: <pull.1294.v2.git.git.1658341065221.gitgitgadget@gmail.com>
* mb/p4-utf16-crlf:
git-p4: fix CR LF handling for utf16 files
A corner case bug where lazily fetching objects from a promisor
remote resulted in infinite recursion has been corrected.
source: <cover.1656593279.git.hanxin.hx@bytedance.com>
* hx/lookup-commit-in-graph-fix:
t5330: remove run_with_limited_processses()
commit-graph.c: no lazy fetch in lookup_commit_in_graph()
The resolve-undo information in the index was not protected against
GC, which has been corrected.
source: <xmqq35f7kzad.fsf@gitster.g>
* jc/resolve-undo:
fsck: do not dereference NULL while checking resolve-undo data
revision: mark blobs needed for resolve-undo as reachable
The previous commit un-broke the "badFileMode" check; before then it was
literally testing nothing. And as far as I can tell, it has been so
since the very initial version of fsck.
The current severity of "badFileMode" is just "warning". But in the
--strict mode used by transfer.fsckObjects, that is elevated to an
error. This will potentially cause hassle for users, because historical
objects with bad modes will suddenly start causing pushes to be
rejected by many server operators.
At the same time, these bogus modes aren't actually a big risk. Because
we canonicalize them everywhere besides fsck, they can't cause too much
mischief in the real world. The worst thing you can do is end up with
two almost-identical trees that have different hashes but are
interpreted the same. That will generally cause things to be inefficient
rather than wrong, and is a bug somebody working on a Git implementation
would want to fix, but probably not worth inconveniencing users by
refusing to push or fetch.
So let's downgrade this to "info" by default, which is our setting for
"mention this when fscking, but don't ever reject, even under strict
mode". If somebody really wants to be paranoid, they can still adjust
the level using config.
Suggested-by: Xavier Morel <xavier.morel@masklinn.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We use the normal tree_desc code to iterate over trees in fsck, meaning
we only see the canonicalized modes it returns. And hence we'd never see
anything unexpected, since it will coerce literally any garbage into one
of our normal and accepted modes.
We can use the new RAW_MODES flag to see the real modes, and then use
the existing code to actually analyze them. The existing code is written
as allow-known-good, so there's not much point in testing a variety of
breakages. The one tested here is S_IFREG with nonsense
permissions.
Do note that the error-reporting here isn't great. We don't mention the
specific bad mode, but just that the tree has one or more broken modes.
But when you go to look at it with "git ls-tree", we'll report the
canonicalized mode! This isn't ideal, but given that this should come up
rarely, and that any number of other tree corruptions might force you
into looking at the binary bytes via "cat-file", it's not the end of the
world. And it's something we can improve on top later if we choose.
Reported-by: Xavier Morel <xavier.morel@masklinn.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When using init_tree_desc() and tree_entry() to iterate over a tree, we
always canonicalize the modes coming out of the tree. This is a good
thing to prevent bugs or oddities in normal code paths, but it's
counter-productive for tools like fsck that want to see the exact
contents.
We can address this by adding an option to avoid the extra
canonicalization. A few notes on the implementation:
- I've attached the new option to the tree_desc struct itself. The
actual code change is in decode_tree_entry(), which is in turn
called by the public update_tree_entry(), tree_entry(), and
init_tree_desc() functions, plus their "gently" counterparts.
By letting it ride along in the struct, we can avoid changing the
signature of those functions, which are called many times. Plus it's
conceptually simpler: you really want a particular iteration of a
tree to be "raw" or not, rather than individual calls.
- We still have to set the new option somewhere. The struct is
initialized by init_tree_desc(). I added the new flags field only to
the "gently" version. That avoids disturbing the much more numerous
non-gentle callers, and it makes sense that anybody being careful
about looking at raw modes would also be careful about bogus trees
(i.e., the caller will be something like fsck in the first place).
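With this in place, a careful caller like fsck ends up doing something
along these lines (a sketch; check_mode() is a made-up stand-in for the
actual fsck checks, and 'buffer'/'size' are the tree contents):

    struct tree_desc desc;
    struct name_entry entry;

    if (init_tree_desc_gently(&desc, buffer, size, TREE_DESC_RAW_MODES))
            return error("cannot parse tree");
    while (tree_entry_gently(&desc, &entry))
            check_mode(entry.mode); /* sees the raw, uncanonicalized mode */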
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Layouts with a single window other than "MERGED" do not work (e.g.
"LOCAL" or "MERGED+LOCAL").
This is because, as the documentation of bufdo says:
The last buffer (or where an error occurred) becomes the current
buffer.
And we always do bufdo at the end.
Additionally, we do it only once, when it should be done once per tab.
Fix this by doing it once per tab right after it's created and before
any buffer is switched.
Cc: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Reviewed-by: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When diffopt has hiddenoff set and there's only one window (as is the
case in the single window mode), the diff mode is turned off.
We don't want that, so turn that option off.
Cc: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Reviewed-by: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When using the single window mode we are greeted with the following
warning:
"./content_LOCAL_8975" 6L, 28B
"./content_BASE_8975" 6 lines, 29 bytes
"./content_REMOTE_8975" 6 lines, 29 bytes
"content" 16 lines, 115 bytes
Press ENTER or type command to continue
every time.
Silence that.
Suggested-by: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Reviewed-by: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When vimdiff3 was added in 7c147b77d3 (mergetools: add vimdiff3 mode,
2014-04-20), the description made clear the intention:
It's similar to the default, except that the other windows are
hidden. This ensures that removed/added colors are still visible on
the main merge window, but the other windows not visible.
However, in 0041797449 (vimdiff: new implementation with layout support,
2022-03-30) this was broken by generating a command that never creates
windows, and therefore vim never shows the diff.
The layout support implementation broke the whole purpose of vimdiff3,
and simply shows MERGED, which is no different from simply opening the
file with vim.
In order to show the diff, the windows need to be created first, and
then when they are hidden the diff remains (if hiddenoff isn't set), but
by setting the `hidden` option the initial buffers are marked as hidden,
thus making the feature work.
Suggested-by: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Reviewed-by: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If 'unpack_single_entry()' is unpacking a new directory tree (that is, one
not already present in the index) into a sparse index, unpack the tree as a
sparse directory rather than traversing its contents and unpacking each file
individually. This helps keep the sparse index as collapsed as possible in
cases such as 'git reset --hard' restoring an outside-of-cone directory
removed with 'git rm -r --sparse'.
Without this patch, 'unpack_single_entry()' will only unpack a directory
into the index as a sparse directory (rather than traversing into it and
unpacking its files one-by-one) if an entry with the same name already
exists in the index. This patch allows sparse directory unpacking without a
matching index entry when the following conditions are met:
1. the directory's path is outside the sparse cone, and
2. there are no children of the directory in the index
If a directory meets these requirements (as determined by
'is_new_sparse_dir()'), 'unpack_single_entry()' unpacks the sparse directory
index entry and propagates the decision back up to 'unpack_callback()' to
prevent unnecessary tree traversal into the unpacked directory.
Reported-by: Shaoxuan Yuan <shaoxuan.yuan02@gmail.com>
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add 'index_name_pos_sparse()', which behaves the same as 'index_name_pos()',
except that it does not expand a sparse index to search for an entry inside
a sparse directory.
'index_entry_exists()' was originally implemented in 20ec2d034c (reset: make
sparse-aware (except --mixed), 2021-11-29) as an alternative to
'index_name_pos()' to allow callers to search for an index entry without
expanding a sparse index. However, that particular use case only required
knowing whether the requested entry existed, so 'index_entry_exists()' does
not return the index positioning information provided by 'index_name_pos()'.
This patch implements 'index_name_pos_sparse()' to accommodate callers that
need the positioning information of 'index_name_pos()', but do not want to
expand the index.
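Callers use it just like 'index_name_pos()', including the convention
for misses (fragment; 'istate' and 'path' are arbitrary):

    int pos = index_name_pos_sparse(istate, path, strlen(path));

    if (pos < 0)
            pos = -pos - 1; /* position where the entry would be inserted */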
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Update 'do_oneway_diff()' to perform a 'diff_tree_oid()' on removed sparse
directories, as it does for added or modified sparse directories (see
9eb00af562 (diff-lib: handle index diffs with sparse dirs, 2021-07-14)).
At the moment, this update is unreachable code because 'unpack_trees()'
(currently the only way 'oneway_diff()' can be called, via 'diff_cache()')
will always traverse trees down to the individual removed files of a deleted
sparse directory. A subsequent patch will change this to better preserve a
sparse index in other uses of 'unpack_trees()', e.g. 'git reset --hard'.
However, making that change without this patch would result in (among other
issues) 'git status' printing only the name of a deleted sparse directory,
not its contents. To avoid introducing that bug, 'do_oneway_diff()' is
updated before modifying 'unpack_trees()'.
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add the 'recursive' diff flag to the local changes reporting done by 'git
checkout' in 'show_local_changes()'. Without the flag enabled, unexpanded
sparse directories will not be recursed into to report the diff of each
file's contents, resulting in the reported local changes including
"modified" sparse directories.
The same issue was found and fixed for 'git status' in 2c521b0e49 (status:
fix nested sparse directory diff in sparse index, 2022-03-01).
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When an Administrator creates a file or directory, the created
file/directory is owned not by the Administrator SID, but by the
_Administrators Group_ SID. The reason is that users with administrator
privileges usually run in unprivileged ("non-elevated") mode, and their
user SID does not change when running in elevated mode.
This is relevant e.g. when running a GitHub workflow on a build
agent, which runs in elevated mode: cloning a Git repository in a script
step will cause the worktree to be owned by the Administrators Group
SID, for example.
Let's handle this case as follows: if the current user is an
administrator, Git should consider a worktree owned by the
Administrators Group as if it were owned by said user.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The FAT file system has no concept of ACLs. Therefore, it cannot store
any ownership information anyway, and the `GetNamedSecurityInfoW()` call
pretends that everything is owned "by the world".
Let's special-case that scenario and tell the user what's going on.
This addresses https://github.com/git-for-windows/git/issues/3886
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When Git refuses to use an existing repository because it is owned by
someone other than the current user, it can be a bit tricky on Windows to
figure out what is going on.
Let's help with that by providing more detailed information.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When verifying the ownership of the Git directory, we sometimes would
like to say a bit more about it, e.g. when using a platform-dependent
code path (think: Windows, whose permission model is so different
from Unix's), but only when it is appropriate to actually say
something.
To allow for that, collect that information and hand it back to the
caller (whose responsibility it is to show it or not).
Note: We do not actually fill in any platform-dependent information yet,
this commit just adds the infrastructure to be able to do so.
Based-on-an-idea-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In preparation for touching code that was introduced in 3b0bf27049
(setup: tighten ownership checks post CVE-2022-24765, 2022-05-10) and
that was formatted differently than preferred in the Git project, fix
the indentation before actually modifying the code.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Workaround for a compiler warning against use of die() in
osx-keychain (in contrib/).
source: <pull.1293.git.1658251503775.gitgitgadget@gmail.com>
* ld/osx-keychain-usage-fix:
osx-keychain: fix compiler warning
Add missing documentation for "include" and "includeIf" features in
"git config" file format, which incidentally teaches the command
line completion to include them in its offerings.
source: <pull.1285.v2.git.1658002423864.gitgitgadget@gmail.com>
* mb/config-document-include:
config.txt: document include, includeIf
mkstemp() emulation on Windows has been improved.
source: <7265e37f-fd29-3579-b840-19a1df52a59f@web.de>
* rs/mingw-tighten-mkstemp:
mingw: avoid mktemp() in mkstemp() implementation
"git clone" from a repository with some ref whose HEAD is unborn
did not set the HEAD in the resulting repository correctly, which
has been corrected.
source: <YsdyLS4UFzj0j/wB@coredump.intra.peff.net>
* jk/clone-unborn-confusion:
clone: move unborn head creation to update_head()
clone: use remote branch if it matches default HEAD
clone: propagate empty remote HEAD even with other branches
clone: drop extra newline from warning message
Fix a logic error in a082345372 (hook API: fix v2.36.0 regression:
hooks should be connected to a TTY, 2022-06-07). When it started using
the "ungroup" API added in fd3aaf53f7 (run-command: add an "ungroup"
option to run_process_parallel(), 2022-06-07) it should have made the
same sort of change that fd3aaf53f7 itself made in
"t/helper/test-run-command.c".
The correct way to emit this "Couldn't start" output with "ungroup"
would be:
fprintf(stderr, _("Couldn't start hook '%s'\n"), hook_path);
But we should instead remove the emitting of this output. As the added
test shows, we already emit output when we can't run the child. The
"cannot run" output here is emitted by run-command.c's
child_err_spew().
So the addition of the "Couldn't start hook" output here in
96e7225b31 (hook: add 'run' subcommand, 2021-12-22) was always
redundant. For the pre-commit hook we'll now emit exactly the same
output as we did before f443246b9f (commit: convert
{pre-commit,prepare-commit-msg} hook to hook.h, 2021-12-22) (and
likewise for others).
We could at this point add this to the pick_next_hook() callbacks in
hook.c:
assert(!out);
assert(!*pp_task_cb);
And this to notify_start_failure() and notify_hook_finished() (in the
latter case the parameter is called "pp_task_cp"):
assert(!out);
assert(!pp_task_cb);
But let's leave any such instrumentation for some eventual cleanup of
the "ungroup" API.
Reported-by: Ilya K <me@0upti.me>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Reviewed-by: Emily Shaffer <emilyshaffer@google.com>
Reviewed-by: Đoàn Trần Công Danh <congdanhqx@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
131b94a10a ("test-lib.sh: Use GLIBC_TUNABLES instead of MALLOC_CHECK_
on glibc >= 2.34", 2022-03-04) introduced a check for the version of
glibc that is in use. This check is performed as part of
setup_malloc_check(), which is called at least once for each test. As
the check involves forking `getconf` and `expr`, cache the result and
use it within setup_malloc_check() to avoid forking these extra
processes for each test.
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are some inconsistencies with how different asciidoc environments
handle different combinations of "\{<>}", e.g. these results were
observed with asciidoc on two different environments:
| Input | Output (env A) | Output (env B) | same/different |
|-----------+----------------+------------------+----------------|
| \{<foo>\} | {<foo>} | \{<foo>}^M | different |
| {<foo>} | {<foo>} | {<foo>} | same |
| \{<foo>} | {<foo>} | \{<foo>}^M | different |
| \{foo\} | {foo} | {foo} | same |
| \{\} | {} | \{}^M | different |
| \{} | {} | {} | same |
| {\} | {} | {} | same |
The only instance of this biting us is "@\{<specifier>\}" in
Documentation/git-reflog.txt; all other combinations of "\{<>}" (e.g. in
Documentation/revisions.txt) seem to render consistently.
Fix this inconsistent rendering by removing the unnecessary "\" in
Documentation/git-reflog.txt.
Signed-off-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 557ac0350d (merge-ort: begin performance work; instrument with
trace2_region_* calls, 2021-01-23), we added Trace2 instrumentation, but
in the error path that returns early, we forgot to tell Trace2 that
we're leaving the region. Let's fix that.
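The shape of the fix is simply this (a condensed sketch; the
category/label strings and the checkout() call are illustrative):

    trace2_region_enter("merge", "checkout", opt->repo);
    if (checkout(opt, prev, next)) {
            /* failure: leave the region before returning early */
            trace2_region_leave("merge", "checkout", opt->repo);
            return;
    }
    /* ... rest of merge_switch_to_result() ... */
    trace2_region_leave("merge", "checkout", opt->repo);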
Pointed-out-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 9fefce68dc (merge-ort: basic outline for merge_switch_to_result(),
2020-12-13), we added functionality to lay down the result of a merge on
disk. But we forgot to release the data structures in case
`unpack_trees()` failed to run properly.
This was pointed out by the `linux-leaks` job in our CI runs.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Versions of gcc prior to 4.9 complain about an initialization like:
struct inner { int x; };
struct outer { struct inner inner; };
struct outer foo = { 0 };
and insist on:
struct outer foo = { { 0 } };
Newer compilers handle this just fine. And ignoring the warning even on
older compilers is fine; the resulting code is correct, but we just get
caught by -Werror.
Let's relax this for older compilers to make developer lives easier (we
don't care much about non-developers on old compilers; they may see a
warning, but it won't stop compilation).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Files' attributes can indicate more than just whether they are files or
directories. It was reported in Git for Windows that on certain network
shares, this led to a nasty problem trying to create tags:
$ git tag -a -m "automatic tag creation" test_dir/test_tag
fatal: cannot lock ref 'refs/tags/test_dir/test_tag': unable to resolve reference 'refs/tags/test_dir/test_tag': Not a directory
Note: This does not necessarily happen with all types of network shares.
One setup where it _did_ happen is a Windows Server 2019 VM, and as
hinted in
http://woshub.com/slow-network-shared-folder-refresh-windows-server/
in the indicated instance the following commands worked around the bug:
Set-SmbClientConfiguration -DirectoryCacheLifetime 0
Set-SmbClientConfiguration -FileInfoCacheLifetime 0
Set-SmbClientConfiguration -FileNotFoundCacheLifetime 0
This would impact performance negatively, though, as it essentially
turns off all caching, therefore we do not want to require users to do
that just to be able to use Git on Windows.
The underlying bug is in the code added in 4b0abd5c69 (mingw: let
lstat() fail with errno == ENOTDIR when appropriate, 2016-01-26) that
emulates the POSIX behavior where `lstat()` should return `ENOENT` if
the file or directory simply does not exist but could be created, and
`ENOTDIR` if there is no file or directory nor could there be because a
leading path already exists and is not a directory.
In that code, the return value of `GetFileAttributesW()` is interpreted
as an enum value, not as a bit field, so that a perfectly fine leading
directory can be misdetected as "not a directory".
As a consequence, the `read_refs_internal()` function would return
`ENOTDIR`, suggesting not only that the tag in the `git tag` invocation
above does not exist, but that it cannot even be created.
Let's fix the code so that it interprets the return value of the
`GetFileAttributesW()` call correctly.
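In essence (an illustrative fragment, not the literal patch; 'wpath' is
the wide-character path being checked):

    DWORD attrs = GetFileAttributesW(wpath);

    /* broken: compares the whole bit field against a single flag */
    int is_dir_broken = (attrs == FILE_ATTRIBUTE_DIRECTORY);

    /* fixed: check for failure, then test the directory bit */
    int is_dir_fixed = (attrs != INVALID_FILE_ATTRIBUTES &&
                        (attrs & FILE_ATTRIBUTE_DIRECTORY));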
This fixes https://github.com/git-for-windows/git/issues/3727
Reported-by: Pierre Garnier <pgarnier@mega.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In df5218b4c3 (config.mak.uname: support MSys2, 2016-01-13), we
introduced support for building Git for Windows in the then-brand new
Git for Windows v2.x build environment that was based off of MSYS2.
To do that, we split the non-msysGit part (that targeted MSys1) in two,
and instead of sharing the `NO_CURL = YesPlease` setting with MSys1, we
overrode it for MSYS2 with the empty value because we very much want to
build Git for Windows with libcurl.
But that was unnecessary: we never set that variable beforehand,
therefore there is no need to override it.
Let's just remove that unnecessary line.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In f9206ce268 (mingw: let's use gettext with MSYS2, 2016-01-26), we
flipped the switch to build Git for Windows with support for gettext.
However, the way we flipped the switch was by changing the value of the
`NO_GETTEXT` variable from a non-empty string to the empty string, as if
there was any `NO_GETTEXT` definition we needed to override.
But that was a mistake: while there _is_ a definition, it is in the
`THIS_IS_MSYSGIT` section, i.e. it does not affect the Git for Windows
part at all.
Let's just remove that unnecessary line.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While Git for Windows does not _ship_ Python (in order to save on
bandwidth), MSYS2 provides very fine Python interpreters that users can
easily take advantage of, by using Git for Windows within its SDK.
Previously, we excluded the Python bits, mostly due to historical
reasons: In the Git for Windows v1.x days, we built Git using
MSys/MinGW, without support for any Python scripts.
Therefore, let's move out the `NO_PYTHON` definition from the generic
part of the MINGW section (which includes special handling for MSYS2/Git
for Windows, for the long-superseded msysGit environment, as well as for
the setup of probably just one developer remaining with their MSys1)
into the two sections that cover different environments than Git for
Windows' SDK.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Update "git diff/log --raw" format documentation.
source: <pull.1259.git.1655123383.gitgitgadget@gmail.com>
* pb/diff-doc-raw-format:
diff-index.txt: update raw output format in examples
diff-format.txt: correct misleading wording
diff-format.txt: dst can be 0* SHA-1 when path is deleted, too
Certain diff options are currently ignored when combined-diff is
shown; mark them as incompatible with the feature.
source: <220524.86v8tuvfl1.gmgdl@evledraar.gmail.com>
* rs/combine-diff-with-incompatible-options:
combine-diff: abort if --output is given
combine-diff: abort if --ignore-matching-lines is given
Adjust technical/bitmap-format to be formatted by AsciiDoc, and
add some missing information to the documentation.
source: <pull.1246.v4.git.1655355834.gitgitgadget@gmail.com>
* ac/bitmap-format-doc:
bitmap-format.txt: add information for trailing checksum
bitmap-format.txt: fix some formatting issues
bitmap-format.txt: feed the file to asciidoc to generate html
Fixes for tests when the source directory has unusual characters in
its path, e.g. whitespaces, double-quotes, etc.
source: <cover-v2-0.3-00000000000-20220630T101646Z-avarab@gmail.com>
* ab/test-quoting-fix:
config tests: fix harmless but broken "rm -r" cleanup
test-lib.sh: fix prepend_var() quoting issue
tests: add missing double quotes to included library paths
"git mktree --missing" lazily fetched objects that are missing from
the local object store, which was totally unnecessary for the purpose
of creating the tree object(s) from its input.
source: <748f39a9-65aa-2110-cf92-7ddf81b5f507@roku.com>
* ro/mktree-allow-missing-fix:
mktree: do not check type of remote objects
Give _() markings to fatal/warning/usage: labels that are shown in
front of these messages.
source: <pull.1279.v2.git.git.1655819877758.gitgitgadget@gmail.com>
* dr/i18n-die-warn-error-usage:
i18n: mark message helpers prefix for translation
References to commands-to-be-typed-literally in "git rebase"
documentation mark-up have been corrected.
source: <pull.1270.v3.git.1656508868146.gitgitgadget@gmail.com>
* ds/git-rebase-doc-markup:
git-rebase.txt: use back-ticks consistently
In a non-bare repository, the behavior of Git when the
core.worktree configuration variable points at a directory that has
a repository as its subdirectory, regressed in Git 2.27 days.
source: <20220616234433.225-1-gg.oss@outlook.com>
source: <20220616231956.154-1-gg.oss@outlook.com>
* gg/worktree-from-the-above:
dir: minor refactoring / clean-up
dir: traverse into repository
Recent update to vimdiff layout code has been made more robust
against different end-user vim settings.
source: <20220708181024.45839-1-greenfoo@u92.eu>
* fr/vimdiff-layout-fix:
vimdiff: make layout engine more robust against user vim settings
Fixes a long-standing corner case bug around directory renames in
the merge-ort strategy.
source: <pull.1268.v4.git.1656984823.gitgitgadget@gmail.com>
* en/merge-dual-dir-renames-fix:
merge-ort: fix issue with dual rename and add/add conflict
merge-ort: shuffle the computation and cleanup of potential collisions
merge-ort: make a separate function for freeing struct collisions
merge-ort: small cleanups of check_for_directory_rename
t6423: add tests of dual directory rename plus add/add conflict
An earlier attempt to plug leaks placed a clean-up label to jump to
at a bogus place, which has been corrected.
source: <Ys0c0ePxPOqZ/5ck@coredump.intra.peff.net>
* jk/diff-files-cleanup-fix:
diff-files: move misplaced cleanup label
"git shortlog -n" relied on the underlying qsort() to be stable,
which it shouldn't have. Fixed.
source: <pull.1290.git.1657813429221.gitgitgadget@gmail.com>
* js/shortlog-sort-stably:
shortlog: use a stable sort
At least on systems where the user is local and not an administrator,
git will install in a subdirectory of %LOCALAPPDATA%, so it makes
sense to also look there for the shell needed by the cmake integration
with Visual Studio.
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If there is no index file, e.g., because the repository has just been
created, we return zero early (unless `must_exist` makes us die
instead).
This early return means we do not set up `istate->repo`. With
`core.untrackedCache=true`, the recent e6a653554b ("untracked-cache:
support '--untracked-files=all' if configured", 2022-03-31) will
eventually pass down `istate->repo` as a null pointer to
`repo_config_get_string()`, causing a segmentation fault.
If we do hit this early return, set up `istate->repo` similar to when we
actually read the index.
Reported-by: Joey Hess <id@joeyh.name>
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
ec9a37d ("pkt-line.[ch]: remove unused packet_read_line_buf()",
2021-10-14) removed the "src_buffer" and "src_len" parameters from
packet_read(), only leaving them at packet_read_with_status(). Let's
also update the function documentation by moving the comment about these
parameters from the former to the latter.
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function p4CmdList executes a Perforce command and
decodes the marshalled python dictionary. Special care has to be
taken for certain dictionary values which contain non-unicode characters.
The old handling contained separate hacks for each of the corresponding
dictionary keys. This commit refactors the code to handle the
special cases uniformly.
Signed-off-by: Kilian Kilger <kkilger@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 4a4b4cda (builtin-remote: Make "remote -v" display push urls,
2009-06-13), the string_list that was initialized with 0 in its
strdup_strings member is immediately made to strdup its key strings
by flipping the strdup_strings member to true. When 183113a5
(string_list: Add STRING_LIST_INIT macro and make use of it.,
2010-07-04) has introduced STRING_LIST_INIT macros, it mechanically
replaced the initialization to STRING_LIST_INIT_NODUP.
Instead, just use the other initialization macro to make it strdup
the key from the beginning.
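In other words (a sketch of the shape of the change, not the literal
hunk):

    /* before: start as NODUP, then immediately flip to duplicating */
    struct string_list list = STRING_LIST_INIT_NODUP;
    list.strdup_strings = 1;

    /* after: duplicate the key strings from the start */
    struct string_list list = STRING_LIST_INIT_DUP;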
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Perforce silently replaces LF with CR LF for "utf16" files if the client
is a native Windows client. Since git's autocrlf logic does not undo
this transformation for UTF-16 encoded files, git-p4 replaces CR LF with
LF during the sync if the file type "utf16" is detected and the Perforce
client platform indicates that this conversion is performed.
Windows only runs on little-endian architectures, therefore the encoding
of the byte stream received from the Perforce client is UTF-16-LE and
the relevant byte sequence is 0D 00 0A 00.
Signed-off-by: Moritz Baumann <moritz.baumann@sap.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The 'win build' job of our CI build is failing with the following error:
compat/win32/syslog.c: In function 'syslog':
compat/win32/syslog.c:53:17: error: pointer 'pos' may be used after \
'realloc' [-Werror=use-after-free]
53 | memmove(pos + 2, pos + 1, strlen(pos));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CC compat/poll/poll.o
compat/win32/syslog.c:47:23: note: call to 'realloc' here
47 | str = realloc(str, st_add(++str_len, 1));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
However, between this realloc() and the use we have a line that resets
the value of 'pos'. Thus, this error is incorrect. It is likely due to a
new version of the compiler on the CI machines.
Instead of waiting for a new compiler, create a new variable to avoid
this error.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The word "whitelist" has cultural implications that are not inclusive.
Thankfully, it is not difficult to reword and avoid its use.
The GIT_ALLOW_PROTOCOL environment variable was referred to as a
"whitelist", but the word "allow" is already part of the variable.
Replace "whitelist" with "allow_list" in these cases to demonstrate that
we are processing a list of allowed protocols.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The word "whitelist" has cultural implications that are not inclusive.
Thankfully, it is not difficult to reword and avoid its use.
Focus on changes in the test scripts, since most of the changes are in
comments and test names. The renamed test_allow_var helper is only used
once inside the widely-used test_proto helper.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The documentation for GIT_ALLOW_PROTOCOL has a sentence that adds no
value, since it repeats the meaning from the previous sentence (twice!).
The word "whitelist" has cultural implications that are not inclusive,
which brought attention to this sentence.
Helped-by: Jeff King <peff@peff.net>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The documentation and error messages for git-cvsserver include some
references to a "whitelist" that is not otherwise included in the
documentation. When different parts of the documentation do not use
common language, this can lead to confusion as to how things are meant
to operate.
Further, the word "whitelist" has cultural implications that make its
use non-inclusive. Thankfully, we can remove it while increasing
clarity.
Update Documentation/git-cvsserver.txt in a similar way to the previous
change to Documentation/git-daemon.txt. The optional '<directory>...'
list can specify a list of allowed directories. We refer to that list
directly inside of the documentation for the GIT_CVSSERVER_ROOT
environment variable.
While modifying this documentation, update the environment variables to
use a list format. We use the modern way of tabbing the description of
each variable in this section. We do _not_ update the description of
'<directory>...' to use tabs this way since the rest of the items in the
OPTIONS list do not use this modern formatting.
A single error message in the actual git-cvsserver.perl code refers to
the whitelist during argument parsing. Instead, refer to the directory
list that has been clarified in the documentation.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The undecorated arguments to the 'git-daemon' command provide a list of
directories. When at least one directory is specified, then 'git-daemon'
only serves requests that are within that directory list. The boolean
'--strict-paths' option makes the list more explicit in that
subdirectories are no longer included.
The existing documentation and error messages around this directory list
refer to it and its behavior as a "whitelist". The word "whitelist" has
cultural implications that are not inclusive. Thankfully, it is not
difficult to reword and avoid its use. In the process, we can define the
purpose of this directory list directly.
In Documentation/git-daemon.txt, rewrite the OPTIONS section around the
'<directory>' option. Add additional clarity to the other options that
refer to these directories.
Some error messages can also be improved in daemon.c. The
'--strict-paths' option requires '<directory>' arguments, so refer to
that section of the documentation directly. A logerror() call points out
that a requested directory is not in the specified directory list. We
can use "list" here without any loss of information.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Update git-credential-osxkeychain.c to remove 'format string is not a string
literal (potentially insecure)' compiler warning by treating the string as
an argument.
Signed-off-by: Lessley Dennington <lessleydennington@gmail.com>
Acked-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Users may have configured "git merge" to always require GPG
signing the resulting commits. We are not running "git merge" to
re-create merge commits, but merely to replay merge conflicts,
and we will immediately discard the resulting commits; there
is no point in signing them.
Override such configuration that forces useless signing from the
command line with the "--no-gpg-sign" option.
Signed-off-by: Celeste Liu <coelacanthus@outlook.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While setting up an unresolved merge for `git rerere`, we run `git
rev-parse` and `git fmt-merge-msg` to create a variable `$fifth` and a
commit-message file `msg`, which we then never actually use. This has
been like that since these tests were added in 672d1b789b ("rerere:
migrate to parse-options API", 2010-08-05). This does exercise `git
rev-parse` and `git fmt-merge-msg`, but doesn't contribute to testing
`git rerere`. Drop these lines.
Reported-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The sparse checkout feature can be used in "cone mode" or "non-cone
mode". In this one instance in the documentation, we refer to the latter
as "non cone mode" with whitespace rather than a hyphen. Align this with
the rest of our documentation.
A few words later in the same paragraph, there's mention of "a more
flexible patterns". Drop that leading "a" to fix the grammar.
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Acked-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The first section of 'Documentation/technical/index-format.txt'
mentions that "Git currently supports cache tree and resolve undo
extensions", but then goes on, and in the "Extensions" section
describes not only these two, but six other extensions [1].
Remove this sentence, as it's misleading about the status of all those
other extensions.
Alternatively we could keep that sentence and update the list of
extensions, but that might well lead to a recurring issue, because
apparently this list is never updated when a new index extension is
added.
[1] Split index, untracked cache, FS monitor cache, end of index
entry, index entry offset table and sparse directory entries.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git config's tab completion does not yet know about the "include"
and "includeIf" sections, nor the related "path" variable.
Add a description for these two sections in
'Documentation/config/includeif.txt', which points to git-config's
documentation, specifically the "Includes" and "Conditional Includes"
subsections.
As a side effect, tab completion can successfully complete the
'include', 'includeIf', and 'include.add' expressions.
This effect is tested by two new ad-hoc tests.
Variable completion only works for "include" for now.
Credit for the ideas behind this patch goes to
Ævar Arnfjörð Bjarmason.
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Manuel Boni <ziosombrero@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The previous commit demonstrates a bug where a commit-graph using
generation v2 could enter a state where one of the GDA2 values has its
most-significant bit set (indicating that its value should be read from
the extended offset table in the GDO2 chunk) without having a GDO2 chunk
to read from.
This results in the following error message being displayed to the
caller:
fatal: commit-graph requires overflow generation data but has none
This bug arises in the following scenario:
- We decide to write a commit-graph using generation number v2, and
decide (correctly) that no GDO2 chunk is necessary (e.g., because
all of the committer date offsets are no larger than 2^31-1).
- The v2 generation numbers are stored in the `->generation` member of
the commit slab holding `struct commit_graph_data`'s.
- Later on, `load_commit_graph_info()` is called, overwriting the
v2 generation data in the aforementioned slab with any existing v1
generation data.
Then, when the commit-graph code goes to write the GDA2 chunk via
`write_graph_chunk_generation_data()`, we use the overwritten generation
v1 data in a place where we expect to use a v2 generation number:
offset = commit_graph_data_at(c)->generation - c->date;
...because `commit_graph_data_at(c)->generation` used to hold the v2
generation data, but it was overwritten to contain the v1 generation
number via `load_commit_graph_info()`.
If the `offset` computation above overflows the v2 generation number
max, then `write_graph_chunk_generation_data()` will update its count of
large offsets and write the marker accordingly:
if (offset > GENERATION_NUMBER_V2_OFFSET_MAX) {
offset = CORRECTED_COMMIT_DATE_OFFSET_OVERFLOW | num_generation_data_overflows;
num_generation_data_overflows++;
}
and reads will look for the GDO2 chunk containing the overflowing v2
generation number, *after* the commit-graph code decided that no such
chunk was necessary.
The main problem is that the slab containing `struct commit_graph_data`
has a dual purpose. It is used to hold data that we are about to write
to disk while generating a commit-graph, as well as hold data that was
read from an existing commit-graph.
When the two mix, namely when the result of reading the commit-graph has
a side-effect that mixes poorly with an in-progress commit-graph write,
we end up with corrupt data.
A complete fix might be to introduce a new slab that is used exclusively
for writing, and gate access between the two slabs based on context
provided by the caller (e.g., whether this computation is part of a
"read" or "write" operation).
But a more minimal fix addresses the only known path which overwrites
the slab data, which is `compute_bloom_filters()` ->
`get_or_compute_bloom_filter()` -> `load_commit_graph_info()` ->
`fill_commit_graph_info()` by avoiding the last call which clobbers the
data altogether.
This path only needs to learn the graph position of a given commit so
that it can be used in `load_bloom_filter_from_graph()`. By replacing
the last steps of the above with one that records the graph position
into a temporary variable which is then used to load the existing Bloom
data, we eliminate the clobbering, removing the corruption.
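A sketch of the shape of that change follows; the position-lookup
helper and variable names here are illustrative, not necessarily the
ones used in the actual patch:

    /* in get_or_compute_bloom_filter(); illustrative sketch only */
    uint32_t graph_pos;

    if (!filter->data && find_commit_pos_in_graph(r, c, &graph_pos))
            /*
             * Load any existing Bloom data for this commit without
             * calling fill_commit_graph_info(), i.e. without touching
             * the commit slab that the pending write relies on.
             */
            load_bloom_filter_from_graph(r->objects->commit_graph,
                                         filter, graph_pos);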
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Low-level callers in systems that are adjacent to the commit-graph (like
the changed-path Bloom filter code) could benefit from being able to
call a function like `parse_commit_in_graph()` without modifying the
corresponding commit slab data.
This is useful in contexts where that slab data is being used to prepare
for an upcoming commit-graph write, where Git must be careful to avoid
clobbering any of that data during a read operation.
Introduce a low-level variant of `parse_commit_in_graph()` which returns
the graph position of a given commit only, without modifying any of the
slab data.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When upgrading a commit-graph using generation v1 to one using
generation v2, it is possible to force Git into a corrupt state where it
(incorrectly) believes that a GDO2 chunk is necessary, *after* deciding
not to write one.
This makes subsequent reads using the commit-graph produce the following
error message:
fatal: commit-graph requires overflow generation data but has none
Demonstrate this bug by increasing our test coverage to include a
minimal example of upgrading a commit-graph from generation v1 to v2.
The only notable components of this test are:
- The committer date of the commit is chosen carefully so that the
offset underflows when computed using a v1 generation number, but
would not overflow when using v2 generation numbers.
- The upgrade to generation number v2 must read in the v1 generation
numbers, which we can do by passing `--changed-paths`, which will
force the commit-graph internals to call `fill_commit_graph_info()`.
A future patch will squash this bug.
Reported-by: Jeff King <peff@peff.net>
Reproduced-by: Will Chandler <wfc@wfchandler.org>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The implementation of mkstemp() for MinGW uses mktemp() and open()
without the flag O_EXCL, which is racy. It's not a security problem
for now because all of its callers only create files within the
repository (incl. worktrees). Replace it with a call to our more
secure internal function, git_mkstemp_mode(), to prevent possible
future issues.
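The replacement is essentially a one-liner; a sketch, assuming the
wrapper keeps the mkstemp() interface:

    /* compat/mingw.c, sketch: delegate to the O_EXCL-safe helper */
    int mkstemp(char *template)
    {
            return git_mkstemp_mode(template, 0600);
    }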
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When sorting the output of `git shortlog` by count, a list of authors in
alphabetical order is then sorted by contribution count. Obviously, the
idea is to maintain the alphabetical order for items with identical
contribution count.
At the moment, this job is performed by `qsort()`. As that function is
not guaranteed to implement a stable sort algorithm, this can lead to
inconsistent and/or surprising behavior: items with identical
contribution count could lose their alphabetical sub-order.
The `qsort()` in MS Visual C's runtime does _not_ implement a stable
sort algorithm, and under certain circumstances this even causes a test
failure in t4201.21 "shortlog can match multiple groups", where two
authors both are listed with 2 contributions, and are listed in inverse
alphabetical order.
Let's instead use the stable sort provided by `git_stable_qsort()` to
avoid this inconsistency.
This is a companion to 2049b8dc65 (diffcore_rename(): use a stable sort,
2019-09-30).
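Mechanically, the change amounts to a drop-in replacement of the sort
call; a rough sketch (the list and comparison-function names are
illustrative):

    /* before: ordering of entries with equal counts is unspecified */
    qsort(log->list.items, log->list.nr,
          sizeof(log->list.items[0]), compare_by_counter);

    /* after: entries with equal counts keep their alphabetical order */
    git_stable_qsort(log->list.items, log->list.nr,
                     sizeof(log->list.items[0]), compare_by_counter);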
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 0041797449 (vimdiff: new implementation with layout support,
2022-03-30), we introduced a completely new implementation of the
`vimdiff` backend for `git mergetool`.
In this implementation, we no longer call `vim` directly but we
accumulate in the variable `FINAL_CMD` an arbitrary number of commands
for `vim` to execute, which necessitates the use of `eval` to split the
commands properly into multiple command-line arguments.
That same `eval` command also needs to pass the paths to `vim`, and
while it looks as if they are quoted correctly, that quoting only
reaches the `eval` instruction and is lost after that, therefore paths
that contain whitespace characters (or other characters that are
interpreted by the POSIX shell) are handled incorrectly.
This is a simple reproducer:
git init -b main bam-merge-fail
cd bam-merge-fail
echo a>"a file.txt"
git add "a file.txt"
git commit -m "added 'a file.txt'"
echo b>"a file.txt"
git add "a file.txt"
git commit -m "diverged b 'a file.txt'"
git checkout -b c HEAD~
echo c>"a file.txt"
git add "a file.txt"
git commit -m "diverged c 'a file.txt'"
git checkout main
git merge c
git mergetool --tool=vimdiff
With Git v2.37.0/v2.37.1, this will open seven buffers instead of four,
and will not display the correct contents at all.
To fix this, let's not expand the variables containing the path
parameters before passing them to the `eval` command, but let that
command expand the variables instead.
This fixes https://github.com/git-for-windows/git/issues/3945
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
At the end of `git checkout <pathspec>`, we get a message informing how
many entries were updated in the working tree. However, this number can
be inaccurate for two reasons:
1) Delayed entries currently get counted twice.
2) Failed entries are included in the count.
The first problem happens because the counter is first incremented
before inserting the entry in the delayed checkout queue, and once again
when finish_delayed_checkout() calls checkout_entry(). And the second
happens because the counter is incremented too early in
checkout_entry(), before the entry was in fact checked out. Fix that by
moving the count increment further down in the call stack and removing
the duplicate increment on delayed entries. Note that we have to keep
a per-entry reference for the counter (both on parallel checkout and
delayed checkout) because not all entries are always accumulated at the
same counter. See checkout_worktree(), at builtin/checkout.c for an
example.
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
After checkout, git usually reports how many entries were updated in
that operation. However, because we count the entries too soon during
the checkout process, we may actually include entries that do not get
properly checked out in the end. This can lead to an inaccurate final
report if the user expects it to show only the *successful* updates.
This will be fixed in the next commit, but for now let's document it
with a test that covers all checkout modes.
Note that `test_checkout_workers` has to be slightly adjusted in order
to use the construct `test_checkout_workers ... test_must_fail git
checkout`. The function runs the command given to it with an assignment
prefix to set the GIT_TRACE2 variable. However, this assignment has
undefined behavior when the command is a shell function (like
`test_must_fail`). As POSIX specifies [1]:
If the command name is a function that is not a standard utility
implemented as a function, variable assignments shall affect the
current execution environment during the execution of the function. It
is unspecified:
- Whether or not the variable assignments persist after the
completion of the function
- Whether or not the variables gain the export attribute during the
execution of the function
Thus, in order to make sure the GIT_TRACE2 value gets visible to the git
command executed by `test_must_fail`, export the variable and run git in
a subshell.
[1]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html
(Vol. 3: Shell and Utilities, Section 2.9.1: Simple Commands)
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
At the end of a `git checkout <pathspec>` operation, git reports how
many paths were checked out with a message like "Updated N paths from
the index". However, entries that end up on the delayed checkout queue
(as requested by a long-running process filter) get counted twice,
producing a wrong number in the final report. We will fix this bug in an
upcoming commit. For now, only document/demonstrate it with a
test_expect_failure.
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 78d5e4cfb4 (tests: refactor --write-junit-xml code, 2022-05-21),
this developer refactored the `--write-junit-xml` code a bit, including
the part where the current test case's title was used in a `set`
invocation, but failed to account for the fact that some test cases'
titles start with a long option, which the `set` misinterprets as being
intended for parsing.
Let's fix this by using the `set -- <...>` form.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
run_with_limited_processses() is used to end the loop faster when an
infinite loop happens. But "ulimit" applies to the entire development
machine, so the test can fail because too many other processes are
running, or when run with "--stress".
Without run_with_limited_processses(), an infinite loop would still be
stopped by global configurations or quotas, and the verification still
works fine. So let's remove run_with_limited_processses().
Signed-off-by: Han Xin <hanxin.hx@bytedance.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 0139c58ab9 (revisions API users: add "goto cleanup" for
release_revisions(), 2022-04-13) converted an early return in
cmd_diff_files() into a goto. But it put the cleanup label too early: if
read_cache_preload() returns an error, we'll set result to "-1", but
then jump to calling run_diff_files(), overwriting our result.
We should jump past the call to run_diff_files(). Likewise, we should go
past diff_result_code(), which is expecting to see a code from an actual
diff, not a negative error code.
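The corrected control flow looks roughly like this (a simplified sketch
of cmd_diff_files()):

    if (read_cache_preload(&rev.diffopt.pathspec) < 0) {
            perror("read_cache_preload");
            result = -1;
            goto cleanup;   /* skip run_diff_files() and diff_result_code() */
    }
    result = run_diff_files(&rev, options);
    result = diff_result_code(&rev.diffopt, result);
    cleanup:
    release_revisions(&rev);
    return result;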
In practice, I suspect this bug cannot actually be triggered, because
read_cache_preload() does not seem to ever return an error. Its return
value (eventually) comes from do_read_index(), which gives the number of
cache entries found, and calls die() on error. Still, it makes sense to
fix the inadvertent change from 0139c58ab9 first, and we can look into
the overall error handling of read_cache() separately (which is present
in many other callsites).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we find an invalid object recorded in the resolve-undo data, we
would end up dereferencing NULL while running fsck. Reporting the
problem and going on to the next object is the right thing to do
here.
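In other words, something along these lines in the resolve-undo check
(the error text and flag handling are a sketch, not the exact patch):

    obj = parse_object(the_repository, oid);
    if (!obj) {
            /* report the broken resolve-undo entry and keep going */
            error(_("%s: invalid object in resolve-undo data"),
                  oid_to_hex(oid));
            errors_found |= ERROR_REFS;
            continue;
    }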
Noticed by SZEDER Gábor.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Prior to 4f37d45706 (clone: respect remote unborn HEAD, 2021-02-05),
creation of the local HEAD was always done in update_head(). That commit
added code to handle an unborn head in an empty repository, and just did
all symref creation and config setup there.
This makes the code flow a little bit confusing, especially as new
corner cases have been covered (like the previous commit to match our
default branch name to a non-HEAD remote branch).
Let's move the creation of the unborn symref into update_head(). This
matches the other HEAD-creation cases, and now the logic is consistently
separated: the main cmd_clone() function only examines the situation and
sets variables based on what it finds, and update_head() actually
performs the update.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
'vim' has two configuration options ('splitbelow' and 'splitright') that
change the way the 'split' command behaves. When they are set, the
commands that the layout engine generates no longer work as expected.
In order to fix this we can prefix each 'split' and 'vertical split'
subcommand found inside the command string generated by the layout
engine with the special keyword 'leftabove'.
This works because whatever comes after 'leftabove' will temporarily
ignore the 'splitbelow' and 'splitright' settings.
Reported-by: Matthew Klein <mklein994@gmail.com>
Signed-off-by: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The Perforce client name can contain arbitrary characters
which do not decode to UTF-8. Use the fallback strategy
implemented in metadata_stream_to_writable_bytes() also
for the client name.
Signed-off-by: Kilian Kilger <kkilger@gmail.com>
Reviewed-by: Tao Klerks <tao@klerks.biz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Usually clone tries to use the same local HEAD as the remote (unless the
user has given --branch explicitly). Even if the remote HEAD is detached
or unborn, we can detect those situations with modern versions of Git.
If the remote is too old to support the "unborn" extension (or it has
been disabled via config), then we can't know the name of the remote's
unborn HEAD, and we fall back to whatever the local default branch name is
configured to be.
But that leads to one weird corner case. It's rare because it needs a
number of factors:
- the remote has an unborn HEAD
- the remote is too old to support "unborn", or has disabled it
- the remote has another branch "foo"
- the local default branch name is "foo"
In that case you end up with a local clone on an unborn "foo" branch,
disconnected completely from the remote's "foo". This is rare in
practice, but the result is quite confusing.
When choosing "foo", we can double check whether the remote has such a
name, and if so, start our local "foo" at the same spot, rather than
making it unborn.
Note that this causes a test failure in t5605, which is cloning from a
bundle that doesn't contain HEAD (so it behaves like a remote that
doesn't support "unborn"), but has a single "main" branch. That test
expects that we end up in the weird "unborn main" case, where we don't
actually check out the remote branch of the same name. Even though we
have to update the test, this seems like an argument in favor of this
patch: checking out main is what I'd expect from such a bundle.
So this patch updates the test for the new behavior and adds an adjacent
one that checks what the original was going for: if there's no HEAD and
the bundle _doesn't_ have a branch that matches our local default name,
then we end up with nothing checked out.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Unless "--branch" was given, clone generally tries to match the local
HEAD to the remote one. For most repositories, this is easy: the remote
tells us which branch HEAD was pointing to, and we call our local
checkout() function on that branch.
When cloning an empty repository, it's a little more tricky: we have
special code that checks the transport's "unborn" extension, or falls back
to our local idea of what the default branch should be. In either case,
we point the new HEAD to that, and set up the branch.* config.
But that leaves one case unhandled: when the remote repository _isn't_
empty, but its HEAD is unborn. The checkout() function is smart enough
to realize we didn't fetch the remote HEAD and it bails with a warning.
But we'll have ignored any information the remote gave us via the unborn
extension. This leads to nonsense outcomes:
- If the remote has its HEAD pointing to an unborn "foo" and contains
another branch "bar", cloning will get branch "bar" but leave the
local HEAD pointing at "master" (or whatever our local default is),
which is useless. The project does not use "master" as a branch.
- Worse, if the other branch "bar" is instead called "master" (but
again, the remote HEAD is not pointing to it), then we end up with a
local unborn branch "master", which is not connected to the remote
"master" (it shares no history, and there's no branch.* config).
Instead, we should try to use the remote's HEAD, even if it's unborn, to
be consistent with the other cases.
The reason this case was missed is that cmd_clone() handles empty and
non-empty repositories on two different sides of a conditional:
if (we have any refs) {
fetch refs;
check for --branch;
otherwise, try to point our head at remote head;
otherwise, our head is NULL;
} else {
check for --branch;
otherwise, try to use "unborn" extension;
otherwise, fall back to our default name;
}
So the smallest change would be to repeat the "unborn" logic at the end
of the first block. But we can note some other overlaps and
inconsistencies:
- both sides have to handle --branch (though note that it's always an
error for the empty repo case, since an empty repo by definition
does not have a matching branch)
- the fallback to the default name is much more explicit in the
empty-repo case. The non-empty case eventually ends up bailing
from checkout() with a warning, which produces a similar result, but
fails to set up the branch config we do in the empty case.
So let's pull the HEAD setup out of this conditional entirely. This
de-duplicates some of the code and the result is easy to follow, because
helper functions like find_ref_by_name() do the right thing even in the
empty-repo case (i.e., by returning NULL).
There are two subtleties:
- for a remote with a detached HEAD, it will advertise an oid for HEAD
(which we store in our "remote_head" variable), but we won't find a
matching refname (so our "remote_head_points_at" is NULL). In this
case we make a local detached HEAD to match. Right now this happens
implicitly by reaching update_head() with a non-NULL remote_head
(since we skip all of the unborn-fallback). We'll now need to
account for it explicitly before doing the fallback.
- for an empty repo, we issue a warning to the user that they've
cloned an empty repo. The text of that warning doesn't make sense
for a non-empty repo with an unborn HEAD, so we'll have to
differentiate the two cases there. We could just use different text,
but instead let's allow the code to continue down to checkout(),
which will issue an appropriate warning, like:
remote HEAD refers to nonexistent ref, unable to checkout
Continuing down to checkout() will make it easier to do more fixes
on top (see below).
Note that this patch fixes the case where the other side reports an
unborn head to us using the protocol extension. It _doesn't_ fix the
case where the other side doesn't tell us, we locally guess "master",
and the other side happens to have a "master" which its HEAD doesn't
point. But it doesn't make anything worse there, and it should actually
make it easier to fix that problem on top.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We don't need to put a "\n" in calls to warning(), since it adds one
itself (and the user sees an extra blank line). Drop it, and while we're
here, drop the full-stop from the message, which goes against our
guidelines.
This bug dates all the way back to 8434c2f1af (Build in clone,
2008-04-27), but presumably nobody noticed because it's hard to trigger:
you have to clone a repository whose HEAD is unborn, but which is not
otherwise empty.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Update the test style in t/t30[*].sh for uniformity, that is, keep the
test title on the same line as the test helper function itself, and fix
some indentation.
Add a new section "recommended style" in t/README to encourage people
to use a more modern style in tests.
Signed-off-by: Li Linchao <lilinchao@oschina.cn>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There is code in both merge-recursive and merge-ort for avoiding doubly
transitive renames (i.e. one side renames directory A/ -> B/, and the
other side renames directory B/ -> C/), because this combination would
otherwise make a mess for new files added to A/ on the first side and
wondering which directory they end up in -- especially if there were
even more renames such as the first side renaming C/ -> D/. In such
cases, it just turns "off" directory rename detection for the higher
order transitive cases.
The testcases added in t6423 a couple commits ago are slightly different
but similar in principle. They involve a similar case of paired
renaming but instead of A/ -> B/ and B/ -> C/, the second side renames
a leading directory of B/ to C/. And both sides add a new file
somewhere under the directory that the other side will rename. While
the new files added start within different directories and thus could
logically end up within different directories, it is weird for a file
on one side to end up where the other one started and not move along
with it. So, let's just turn off directory rename detection in this
case as well.
Another way to look at this is that if the source name involved in a
directory rename on one side is the target name of a directory rename
operation for a file from the other side, then we avoid the doubly
transitive rename. (More concretely, if a directory rename on side D
wants to rename a file on side E from OLD_NAME -> NEW_NAME, and side D
already had a file named NEW_NAME, and a directory rename on side E
wants to rename side D's NEW_NAME -> NEWER_NAME, then we turn off the
directory rename detection for NEW_NAME to prevent the
NEW_NAME -> NEWER_NAME rename, and instead end up with an add/add
conflict on NEW_NAME.)
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Run compute_collisions() for renames on both sides of history before
any calls to collect_renames(), and do not free the computed collisions
until after both calls to collect_renames(). This is just a code
reorganization at this point that doesn't make sense on its own, but
will permit us to use the computed collision info from both sides
within each call to collect_renames() in a subsequent commit.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This commit makes no functional changes, it's just some code movement in
preparation for later changes.
Signed-off-by: Elijah Newren <newren@palantir.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
No functional changes, just some preparatory cleanups.
Suggested-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Elijah Newren <newren@palantir.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is an attempt at minimizing a testcase reported by Glen Choo
with tensorflow where merge-ort would report an assertion failure:
Assertion failed: (ci->filemask == 2 || ci->filemask == 4), function apply_directory_rename_modifications, file merge-ort.c, line 2410
reversing the direction of the merge provides a different error:
error: cache entry has null sha1: ...
fatal: unable to write .git/index
so we add testcases for both. With these new testcases, the
recursive strategy differs in that it returns the latter error for
both merge directions.
These testcases are somehow a little different than Glen's original
tensorflow testcase in that these ones trigger a bug with the recursive
algorithm whereas his testcase didn't. I figure that means these
testcases somehow manage to be more comprehensive.
Reported-by: Glen Choo <chooglen@google.com>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Rewrite of "git add -i" in C that appeared in Git 2.25 didn't
correctly record a removed file to the index, which is an old
regression but has become widely known because the C version
has become the default in the latest release.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This test had a line reading
! test_file_is_empty actual
which was meant to be
! test_must_be_empty actual
The test worked despite the error, because even though
test_file_is_empty is a non-existent function, the '!' negated the
return value and made it pass. It'd be better to avoid the negation,
so something like
test_file_not_empty actual
would be better, but perhaps it makes even more sense to specify the
number of lines of expected output to make the test a bit tighter.
Reported-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Elijah Newren <newren@palantir.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The commit-graph is used to opportunistically optimize accesses to
certain pieces of information on commit objects, and
lookup_commit_in_graph() tries to say "no" when the requested commit
does not exist locally by returning NULL, in which case the caller
can ask for and parse the commit object itself (which may result in
on-demand fetching from a promisor remote).
However, it uses the wrong helper, repo_has_object_file(), to do so.
This helper not only checks if an object is immediately available in
the local object store, but also tries to fetch it from a promisor remote.
But the fetch machinery calls lookup_commit_in_graph(), thus causing an
infinite loop.
We should make lookup_commit_in_graph() expect that a commit given to it
can be legitimately missing from the local object store, by using the
has_object_file() helper instead.
Signed-off-by: Han Xin <hanxin.hx@bytedance.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "test_when_finished" cleanup phase added in 4179b4897f (config:
allow overriding of global and system configuration, 2021-04-19) has
never worked as intended: firstly, ".config/git" is a directory, so
we'd need the "-r" flag, but more importantly the $HOME variable
wasn't properly quoted.
We'd thus end up trying to remove the "trash" part of "trash
directory", which wouldn't fail with "-f", since "rm -f" won't fail on
non-existing files.
It's possible that this would have caused an actual failure if someone
had a $HOME with a space character in it, such that our "rm -f" would
fail to remove an existing directory, but in practice that probably
never happened.
Let's fix both the quoting issue and the other cleanup issue in
4179b4897f, which is that we were attempting to clean up
~/.config/git, but weren't cleaning up ~/.gitconfig.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix a quoting issue in the function introduced in
b9638d7286 (test-lib: make $GIT_BUILD_DIR an absolute path,
2022-02-27); without this fix, running the test suite from a git
checkout on a path with e.g. a space in it would fail.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix inclusion errors which would occur if the $TEST_DIRECTORY had $IFS
whitespace in it.
See d42bab442d (core.fsyncmethod: tests for batch mode, 2022-04-04)
and a242c150eb (vimdiff: integrate layout tests in the unit tests
framework ('t' folder), 2022-03-30) for the two relevant commits. Both
were first released with v2.37.0-rc0 (and were also part of v2.37.0).
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While inspecting the 'git rebase' documentation, I noticed that it is
inconsistent with how it uses back-ticks (or other punctuation) for
identifying Git commands, command-line arguments, or values for those
arguments.
Sometimes, an argument (like '--interactive') would appear without any
punctuation, causing the argument to not have any special formatting.
Other times, arguments or 'git rebase' itself would have single-quotes
giving a bold look (in the HTML documentation at least).
By consistently using back-ticks, these types of strings appear in a
monospace font with special highlighting to appear more clearly as text
that exists in a command-line invocation of a Git command.
This rather-large diff is the result of scanning git-rebase.txt and
adding back-ticks as appropriate. Some are adding back-ticks where there
was no punctuation. Others are replacing single quotes.
There are also a few minor cleanups in the process, including those
found by reviewers.
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The size and padding of `struct object_entry` is an important factor in
determining the memory usage of `pack-objects`. For this reason,
3b13a5f263 (pack-objects: reorder members to shrink struct object_entry,
2018-04-14) added a comment containing some information from pahole
indicating the size and padding of that struct.
Unfortunately, this comment hasn't been updated since 9ac3f0e5b3
(pack-objects: fix performance issues on packing large deltas,
2018-07-22), despite the size of this struct changing many times since
that commit.
To see just how often the size of object_entry changes, I skimmed the
first-parent history with this script:
for sha in $(git rev-list --first-parent --reverse 9ac3f0e..)
do
echo -n "$sha "
git checkout -q $sha
make -s pack-objects.o 2>/dev/null
pahole -C object_entry pack-objects.o | sed -n \
-e 's/\/\* size: \([0-9]*\).*/size \1/p' \
-e 's/\/\*.*padding: \([0-9]*\).*/padding \1/p' | xargs
done | uniq -f1
In between each merge, the size of object_entry changes too often to
record every instance here. But the important merges (along with their
corresponding sizes and bit paddings) in chronological order are:
ad635e82d6 (Merge branch 'nd/pack-objects-pack-struct', 2018-05-23) size 80 padding 4
29d9e3e2c4 (Merge branch 'nd/pack-deltify-regression-fix', 2018-08-22) size 80 padding 9
3ebdef2e1b (Merge branch 'jk/pack-delta-reuse-with-bitmap', 2018-09-17) size 80 padding 8
33e4ae9c50 (Merge branch 'bc/sha-256', 2019-01-29) size 96 padding 8
(indicating that the current size of the struct is 96 bytes, with 8
padding bits).
Even though this comment was written in a good spirit, it is updated
infrequently enough that it serves to confuse rather than to encourage
contributors to update the appropriate values when they modify the
definition of object_entry.
For that reason, eliminate the confusion by removing the comment
altogether.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The contrib/vscode/init.sh script initializes the .vscode directory with
some helpful metadata so VS Code handles Git code better.
One big issue that VS Code has is detecting the tab width based on file
type. ".txt" files were not covered by this script before, so add them
with the appropriate tab widths. This prevents inserting spaces instead
of tabs and keeps the tab width to eight instead of four or two.
While we are here, remove the "editor.wordWrap" settings. The editor's
word wrap is only cosmetic: it does not actually insert newlines when
your typing goes over the column limit. This can make it appear like you
have properly wrapped code, but it is incorrect. Further, existing code
that is over the column limit is wrapped even if your editor window is
wider than the limit. This can make reading such code more difficult.
Without these lines, VS Code renders the lines accurately, without
"ghost" newlines.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
43966ab315 (revert: optionally refer to commit in the "reference"
format, 2022-05-26) added the documentation file config/revert.txt.
Actually include it in config.txt.
"Make" is used with a bare infinitive after the object; remove the "to".
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Run msgmerge with --no-location to drop file locations and decrease the
size of future patches. Also remove old translations.
Signed-off-by: Peter Krefting <peter@softwolves.pp.se>
Commit 7281c196b1 (transfer doc: move fetch.credentialsInUrl to
"transfer" config namespace, 2022-06-15) propagates a typo from
6dcbdc0d66 (remote: create fetch.credentialsInUrl config, 2022-06-06),
where "other" is misspelled as "oher". Fix the typo accordingly.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The documentation explained the conversion from remote branch path to
local tracking ref path for @{push}, but not for @{upstream}.
Add the explanation to @{upstream}, and reference it in @{push} to avoid
undue repetition.
Signed-off-by: Tao Klerks <tao@klerks.biz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint-2.35:
Git 2.35.4
Git 2.34.4
Git 2.33.4
Git 2.32.3
Git 2.31.4
Git 2.30.5
setup: tighten ownership checks post CVE-2022-24765
git-compat-util: allow root to access both SUDO_UID and root owned
t0034: add negative tests and allow git init to mostly work under sudo
git-compat-util: avoid failing dir ownership checks if running privileged
t: regression git needs safe.directory when using sudo
* maint-2.34:
Git 2.34.4
Git 2.33.4
Git 2.32.3
Git 2.31.4
Git 2.30.5
setup: tighten ownership checks post CVE-2022-24765
git-compat-util: allow root to access both SUDO_UID and root owned
t0034: add negative tests and allow git init to mostly work under sudo
git-compat-util: avoid failing dir ownership checks if running privileged
t: regression git needs safe.directory when using sudo
* maint-2.33:
Git 2.33.4
Git 2.32.3
Git 2.31.4
Git 2.30.5
setup: tighten ownership checks post CVE-2022-24765
git-compat-util: allow root to access both SUDO_UID and root owned
t0034: add negative tests and allow git init to mostly work under sudo
git-compat-util: avoid failing dir ownership checks if running privileged
t: regression git needs safe.directory when using sudo
* maint-2.32:
Git 2.32.3
Git 2.31.4
Git 2.30.5
setup: tighten ownership checks post CVE-2022-24765
git-compat-util: allow root to access both SUDO_UID and root owned
t0034: add negative tests and allow git init to mostly work under sudo
git-compat-util: avoid failing dir ownership checks if running privileged
t: regression git needs safe.directory when using sudo
* maint-2.31:
Git 2.31.4
Git 2.30.5
setup: tighten ownership checks post CVE-2022-24765
git-compat-util: allow root to access both SUDO_UID and root owned
t0034: add negative tests and allow git init to mostly work under sudo
git-compat-util: avoid failing dir ownership checks if running privileged
t: regression git needs safe.directory when using sudo
* maint-2.30:
Git 2.30.5
setup: tighten ownership checks post CVE-2022-24765
git-compat-util: allow root to access both SUDO_UID and root owned
t0034: add negative tests and allow git init to mostly work under sudo
git-compat-util: avoid failing dir ownership checks if running privileged
t: regression git needs safe.directory when using sudo
8959555cee (setup_git_directory(): add an owner check for the top-level
directory, 2022-03-02) adds a function to check for ownership of
repositories using a directory that is representative of them, and a way
to exempt a specific repository from said check if needed, but that
check didn't account for ownership of the gitdir, or (when used) the
gitfile that points to that gitdir.
An attacker could create a git repository in a directory that they can
write into but that is owned by the victim to work around the fix that
was introduced with CVE-2022-24765 to potentially run code as the
victim.
An example that could result in privilege escalation to root in *NIX would
be to set a repository in a shared tmp directory by doing (for example):
$ git -C /tmp init
To avoid that, extend the ensure_valid_ownership function to be able to
check for all three paths.
This will have the side effect of tripling the number of stat() calls
when a repository is detected, but the effect is expected to be
minimal, as it is done only once during the directory walk in which Git
looks for a repository.
Additionally make sure to resolve the gitfile (if one was used) to find
the relevant gitdir for checking.
While at it change the message printed on failure so it is clear we are
referring to the repository by its worktree (or gitdir if it is bare) and
not to a specific directory.
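A rough sketch of the extended check (simplified; the safe.directory
fallback and the exact prototype are omitted here):

    /* sketch: every path involved must pass the ownership check */
    if ((!gitfile || is_path_owned_by_current_user(gitfile)) &&
        (!worktree || is_path_owned_by_current_user(worktree)) &&
        (!gitdir || is_path_owned_by_current_user(gitdir)))
            return 1;       /* owned by the current user */
    /* otherwise, consult the safe.directory allow-list as before */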
Helped-by: Junio C Hamano <junio@pobox.com>
Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
With a recent update to refuse access to repositories of other
people by default, "sudo make install" and "sudo git describe"
stopped working. This series intends to loosen it while keeping
the safety.
* cb/path-owner-check-with-sudo:
t0034: add negative tests and allow git init to mostly work under sudo
git-compat-util: avoid failing dir ownership checks if running privileged
t: regression git needs safe.directory when using sudo
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
This 10-year old typo was introduced at 75b182ae (Update l10n guide:
change the repository URL, etc, 2012-03-02). The word "l10" should be
"l10n".
Signed-off-by: Arthur Milchior <arthur@milchior.fr>
Signed-off-by: Jiang Xin <worldhello.net@gmail.com>
In asciidoc's HTML output of the "gitrevisions" and "git-rev-parse"
documentation, the header:
The ... (three-dot) Symmetric Difference Notation
is rendered using "&#8230;", a horizontal ellipsis. This is visually
ugly, but also hard to search for or cut-and-paste. We really mean three
ascii dots (0x2e) here, so let's make sure it renders as such.
The simplest way to do that is just escaping the leading dot, as the
instances in the rest of the section do. Arguably this should all be
converted to use backticks, which would let us drop the quoting here and
elsewhere (e.g., {carat}). But that does change the rendering slightly.
So let's fix the bug first, and we can decide on migrating the whole
section separately.
Note that this produces an empty doc-diff of the manpages. Curiously,
asciidoc produces the same ellipsis entity in the XML file, but docbook
then converts it back into three literal dots for the roff output! So
the roff manpages have been correct all along (which may be a reason
nobody noticed this until now).
Reported-by: Arthur Milchior
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* 'master' of github.com:git/git:
name-rev: prefix annotate-stdin with '--' in message
git-prompt: fix expansion of branch colour codes
git-prompt: make colourization consistent
Narrow the scope of the `nested_repo` variable and conditional return
statement to the block where the variable is set.
Signed-off-by: Goss Geppert <ggossdev@gmail.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 8d92fb2927 (dir: replace exponential algorithm with a linear one,
2020-04-01) traversing into a repository's directory tree when the
traversal began outside the repository's standard location has failed
because the encountered repository was identified as a nested foreign
repository.
Prior to this commit, the failure to traverse into a repository's
default worktree location was observable from a user's perspective under
either of the following conditions (there may be others):
1) Set the `core.worktree` location to a parent directory of the
default worktree; or
2) Use the `--git-dir` option while the working directory is outside
the repository's default worktree location
Under either of these conditions, symptoms of the failure to traverse
into the repository's default worktree location include the inability to
add files to the index or get a list of untracked files via ls-files.
This commit adds a check to determine whether a nested repository that
is encountered in recursing a path is actually `the_repository`. If so,
we simply treat the directory as if it doesn't contain a nested
repository.
The commit includes test-cases to reduce the likelihood of future
regressions.
Signed-off-by: Goss Geppert <ggossdev@gmail.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The many test_configured_prune tests in t5510-fetch.sh test many
combinations of --prune, --prune-tags, and using 'origin' or an explicit
URL. Some machinery was introduced in e1790f9245 (fetch tests: fetch
<url> <spec> as well as fetch [<remote>], 2018-02-09) to replace
'origin' with this explicit URL. This URL is a "file:///" URL for the
root of the $TRASH_DIRECTORY.
However, if the current build tree has an '@' symbol, the
replacement using perl fails. It drops the '@' as well as anything
else in that directory name. You can observe this locally by
cloning git.git into a "victim@03" directory and running the test
script.
As we are writing in Perl anyway, pass in the shell variables involved
to the script as arguments and perform necessary string transformations
inside it, instead of assuming that it is sufficient to enclose the
$remote_url variable inside a pair of single quotes.
Reported-by: Randall Becker <rsbecker@nexbridge.com>
Original-patch-by: Derrick Stolee <derrickstolee@github.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With 31c8221a (mktree: validate entry type in input, 2009-05-14), we
called the sha1_object_info() API to obtain the type information, but
allowed the call to silently fail when the object was missing locally,
so that we can sanity-check the types opportunistically when the
object did exist.
The implementation is understandable because back then there was no
lazy/on-demand downloading of individual objects from promisor
remotes, which would cause a long delay and materialize the object,
hence defeating the point of using "--missing". The design is hurting
us now.
We could bypass the opportunistic type/mode consistency check
altogether when "--missing" is given, but instead, use the
oid_object_info_extended() API and tell it that we are only interested
in objects that locally exist and are immediately available by passing
OBJECT_INFO_SKIP_FETCH_OBJECT bit to it. That way, we will still
retain the cheap and opportunistic sanity check for local objects.
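The shape of that call is roughly as follows (a sketch; the actual
patch differs in its details):

    struct object_info oi = OBJECT_INFO_INIT;
    enum object_type type = OBJ_BAD;
    unsigned flags = OBJECT_INFO_LOOKUP_REPLACE |
                     OBJECT_INFO_QUICK |
                     OBJECT_INFO_SKIP_FETCH_OBJECT;

    oi.typep = &type;
    if (oid_object_info_extended(the_repository, &oid, &oi, flags) < 0)
            type = OBJ_BAD; /* missing locally; tolerated with "--missing" */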
Signed-off-by: Richard Oliver <roliver@roku.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Bash command line prompt (in contrib/) update.
* jp/prompt-clear-before-upstream-mark:
git-prompt: fix expansion of branch colour codes
git-prompt: make colourization consistent
Some message prefixes like 'usage:'/'fatal:'/'warning:'/'error:'
were not translated.
Signed-off-by: Dimiytriy Ryazantcev <dimitriy.ryazantcev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code for combined diffs currently only writes to stdout. Abort and
report that fact instead of silently ignoring the --output option. The
(empty) output file has already been created at that point, though.
Reported-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code for combined diffs doesn't currently support ignoring changes
that match a regex. Abort and report that fact instead of running into
a segfault.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is an option rather than a command. Make the message convey this,
similar to the other messages in the file.
Signed-off-by: Alexander Shopov <ash@kambanaria.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* 'master' of github.com:git/git: (22 commits)
Git 2.37-rc1
git-compat-util: allow root to access both SUDO_UID and root owned
i18n: fix mismatched camelCase config variables
Another batch of fixes before -rc1
bug_fl(): correctly initialize trace2 va_list
relative_url(): fix incorrect condition
pack-mtimes: avoid closing a bogus file descriptor
read_index_from(): avoid memory leak
submodule--helper: avoid memory leak when fetching submodules
submodule-config: avoid memory leak
fsmonitor: avoid memory leak in `fsm_settings__get_incompatible_msg()`
cache-tree: remove cache_tree_find_path()
pack-write: drop always-NULL parameter
t5329: test 'git gc --cruft' without '--prune=now'
t2107: test 'git update-index --verbose'
perf-lib: fix missing test titles in output
transfer doc: move fetch.credentialsInUrl to "transfer" config namespace
fetch doc: note "pushurl" caveat about "credentialsInUrl", elaborate
ci(github): also mark up compile errors
ci(github): use grouping also in the `win-build` job
...
"sudo git foo" used to consider a repository owned by the original
user a safe one to access; it now also considers a repository owned
by root a safe one, too (after all, if an attacker can craft a
malicious repository owned by root, the box is 0wned already).
* cb/path-owner-check-with-sudo-plus:
git-compat-util: allow root to access both SUDO_UID and root owned
The reference given to users when the result of
setup_git_directory_gently_1() is unexpected is incorrect.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previous changes introduced a regression which prevents root from
accessing repositories owned by itself when using sudo, because
SUDO_UID takes precedence.
Loosen that restriction by allowing root to access repositories owned
by both uids by default, without having to add a safe.directory
exception.
A previous workaround that was documented in the tests is no longer
needed so it has been removed together with its specially crafted
prerequisite.
Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some config variables are combinations of multiple words, and we
typically write them in camelCase forms in manpage and translatable
strings. It's not easy to find mismatches for these camelCase config
variables during code reviews, but occasionally they are identified
during localization translations.
To check for mismatched config variables, I introduced a new feature
in the helper program for localization[^1]. The following mismatched
config variables have been identified by running the helper program,
such as "git-po-helper check-pot".
Lowercase in manpage should use camelCase:
* Documentation/config/http.txt: http.pinnedpubkey
Lowercase in translatable strings should use camelCase:
* builtin/fast-import.c: pack.indexversion
* builtin/gc.c: gc.logexpiry
* builtin/index-pack.c: pack.indexversion
* builtin/pack-objects.c: pack.indexversion
* builtin/repack.c: pack.writebitmaps
* commit.c: i18n.commitencoding
* gpg-interface.c: user.signingkey
* http.c: http.postbuffer
* submodule-config.c: submodule.fetchjobs
Mismatched camelCases, choose the former:
* Documentation/config/transfer.txt: transfer.credentialsInUrl
remote.c: transfer.credentialsInURL
[^1]: https://github.com/git-l10n/git-po-helper
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Rename fetch.credentialsInUrl to transfer.credentialsInUrl as the
single configuration variable should work both in pushing and
fetching.
* ab/credentials-in-url-more:
transfer doc: move fetch.credentialsInUrl to "transfer" config namespace
fetch doc: note "pushurl" caveat about "credentialsInUrl", elaborate
Recent CI update hides certain failures in test jobs, which has
been corrected.
* js/ci-github-workflow-markup:
ci(github): also mark up compile errors
ci(github): use grouping also in the `win-build` job
ci(github): bring back the 'print test failures' step
Improve test coverage with a handful of tests.
* ds/more-test-coverage:
cache-tree: remove cache_tree_find_path()
pack-write: drop always-NULL parameter
t5329: test 'git gc --cruft' without '--prune=now'
t2107: test 'git update-index --verbose'
The code added in 0cc05b044f (usage.c: add a non-fatal bug() function to go
with BUG(), 2022-06-02) sets up two va_list variables: one to output to
stderr, and one to trace2. But the order of initialization is wrong:
va_list ap, cp;
va_copy(cp, ap);
va_start(ap, fmt);
We copy the contents of "ap" into "cp" before it is initialized, meaning
it is full of garbage. The two should be swapped.
However, there's another bug, noticed by Johannes Schindelin: we forget
to call va_end() for the copy. So instead of just fixing the copy's
initialization, let's do two separate start/end pairs. This is allowed
by the standard, and we don't need to use copy here since we have access
to the original varargs. Matching the pairs with the calls makes it more
obvious that everything is being done correctly.
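With separate start/end pairs, the fixed setup looks roughly like this
(the stderr-reporting helper shown here is illustrative):

    va_list ap;

    va_start(ap, fmt);
    vreportf("BUG: ", fmt, ap);     /* illustrative stderr output */
    va_end(ap);

    va_start(ap, fmt);
    trace2_cmd_error_va(fmt, ap);
    va_end(ap);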
Note that we do call bug_fl() in the tests, but it didn't trigger this
problem because our format string doesn't have any placeholders. So even
though we were passing a garbage va_list through the stack, nobody ever
needed to look at it. We can easily adjust one of the trace2 tests to
trigger this, both for bug() and for BUG(). The latter isn't broken, but
it's nice to exercise both a bit more. Without the fix in this patch
(but with the test change), the bug() case causes a segfault.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 63e95beb08 (submodule: port resolve_relative_url from shell to C,
2016-04-15), we added a loop over `url` where we are looking for `../`
or `./` components.
The loop condition we used is the pointer `url` itself, which is clearly
not what we wanted.
Pointed out by Coverity.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 94cd775a6c (pack-mtimes: support reading .mtimes files,
2022-05-20), code was added to close the file descriptor corresponding
to the mtimes file.
However, it is possible that opening that file failed, in which case we
are closing a file descriptor with the value `-1`. Let's guard that
`close()` call.
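In other words, the close happens only for a valid descriptor:

    if (fd >= 0)
            close(fd);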
Reported by Coverity.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 998330ac2e (read-cache: look for shared index files next to the
index, too, 2021-08-26), we added code that allocates memory to store
the base path of a shared index, but we never released that memory.
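A sketch of the pattern (path construction simplified; the fallback
lookup next to the index is omitted):

    char *base_path = xstrfmt("%s/sharedindex.%s", gitdir, base_oid_hex);

    ret = do_read_index(split_index->base, base_path, 0);
    /* ... use the loaded base index ... */
    free(base_path);        /* previously leaked */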
Reported by Coverity.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In c51f8f94e5 (submodule--helper: run update procedures from C,
2021-08-24), we added code that first obtains the default remote, and
then adds that to a `strvec`.
However, we never released the default remote's memory.
Reported by Coverity.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 961b130d20 (branch: add --recurse-submodules option for branch
creation, 2022-01-28), a funny pattern was introduced where first some
struct is `xmalloc()`ed, then we resize an array whose element type is
the same struct, and then the first struct's contents are copied into
the last element of that array.
Crucially, the `xmalloc()`ed memory never gets released.
Let's avoid that memory leak and that memory allocation dance altogether
by first reallocating the array, then using a pointer to the last array
element to go forward.
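The resulting pattern looks roughly like this (the array and field
names are illustrative, not the ones used in the actual code):

    /* grow the array first, then fill the new last element in place */
    ALLOC_GROW(branches, branches_nr + 1, branches_alloc);
    branch = &branches[branches_nr++];
    branch->name = xstrdup(name);
    branch->remote = NULL;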
Reported by Coverity.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This reverts 080ab56a46 (cache-tree: implement cache_tree_find_path(),
2022-05-23). The cache_tree_find_path() method was never actually called
in the topic that added it. I cannot find any reference to it in any of
my forks, so this appears to not be needed at the moment.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
write_mtimes_file() takes an mtimes parameter as its first argument, but
the only caller passes a NULL constant. Drop this parameter to simplify
logic. This can be reverted if that parameter is needed in the future.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Replace a 'git repack --cruft -d' with the wrapper 'git gc --cruft' to
exercise some logic in builtin/gc.c that adds the '--cruft' option to
the underlying 'git repack' command.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The '--verbose' option reports what is being added and removed from the
index, but has not been tested up to this point. Augment the tests in
t2107 to check the '--verbose' option in some scenarios.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 5dccd9155f (t/perf: add iteration setup mechanism to perf-lib,
2022-04-04) modified the parameter parsing of test_wrapper() such that
the test title was no longer in $1, and is instead in $test_title_.
We correctly pass the new variable to the code which outputs the title
to the log, but missed the spot in test_wrapper() where the title is
written to the ".descr" file which is used to produce the final output
table. As a result, all of the titles are missing from that table (or
worse, using whatever was left in $1):
$ ./p0000-perf-lib-sanity.sh
[...]
Test this tree
------------------------------
0000.1: 0.01(0.01+0.00)
0000.2: 0.01(0.00+0.01)
0000.4: 0.00(0.00+0.00)
0000.5: true 0.00(0.00+0.00)
0000.7: 0.00(0.00+0.00)
0000.8: 0.00(0.00+0.00)
After this patch, we get the pre-5dccd9155f output:
Test this tree
--------------------------------------------------------------------------
0000.1: test_perf_default_repo works 0.00(0.00+0.00)
0000.2: test_checkout_worktree works 0.01(0.00+0.01)
0000.4: export a weird var 0.00(0.00+0.00)
0000.5: éḿíẗ ńöń-ÁŚĆÍÍ ćḧáŕáćẗéŕś 0.00(0.00+0.00)
0000.7: important variables available in subshells 0.00(0.00+0.00)
0000.8: test-lib-functions correctly loaded in subshells 0.00(0.00+0.00)
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The bitmap file has a trailing checksum at the end, but there is no
information about it in the bitmap-format documentation.
Add a trailer section to include the trailing checksum info in the
`Documentation/technical/bitmap-format.txt` file.
Signed-off-by: Abhradeep Chakraborty <chakrabortyabhradeep79@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The asciidoc-generated HTML for `Documentation/technical/bitmap-
format.txt` is broken, mainly because `-` is used for nested lists
(which is not allowed in asciidoc) instead of `*`.
Fix these issues and also reformat the file for better readability of
the HTML page.
Signed-off-by: Abhradeep Chakraborty <chakrabortyabhradeep79@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Documentation/Makefile does not include bitmap-format.txt when
generating HTML pages with asciidoc.
Teach Documentation/Makefile to also generate an HTML page for the
Documentation/technical/bitmap-format.txt file.
Signed-off-by: Abhradeep Chakraborty <chakrabortyabhradeep79@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* fix more translation mistakes
* consistently translate "amend" as "enmendar"
* consistently translate "chunk" as "fragmento"
* consistently translate "prune" as "recortar" or "recorte"
* consistently translate "push" as "empujar" or "empuje"
* consistently translate "rephrase" as "refrasear" or "refraseo"
* consistently translate "squash" as "aplastar" or "aplastamiento"
Signed-off-by: Alex Henrie <alexhenrie24@gmail.com>
Various error messages that talk about the removal of
"--preserve-merges" in "rebase" have been strengthened, and "rebase
--abort" learned to get out of a state that was left by an earlier
use of the option.
* po/rebase-preserve-merges:
rebase: translate a die(preserve-merges) message
rebase: note `preserve` merges may be a pull config option
rebase: help users when dying with `preserve-merges`
rebase.c: state preserve-merges has been removed
"git revert" learns "--reference" option to use more human-readable
reference to the commit it reverts in the message template it
prepares for the user.
* jc/revert-show-parent-info:
revert: --reference should apply only to 'revert', not 'cherry-pick'
revert: optionally refer to commit in the "reference" format
Add and use a LIBCURL prerequisite for tests added in
6dcbdc0d66 (remote: create fetch.credentialsInUrl config,
2022-06-06).
These tests would get as far as emitting a couple of the warnings we
were testing for, but would then die as we had no "git-remote-https"
program compiled.
It would be more consistent with other prerequisites (e.g. PERL for
NO_PERL) to name this "CURL", but since e9184b0789 (t5561: skip tests
if curl is not available, 2018-04-03) we've had that prerequisite
defined for checking whether we have the curl(1) program.
The existing "CURL" prerequisite is only used in one place, and we
should probably name it "CURL_PROGRAM", then rename "LIBCURL" to
"CURL" as a follow-up, but for now (pre-v2.37.0) let's aim for the
most minimal fix possible.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Rename the "fetch.credentialsInUrl" configuration variable introduced
in 6dcbdc0d66 (remote: create fetch.credentialsInUrl config,
2022-06-06) to "transfer".
There are existing exceptions, but generally speaking a
"<namespace>.<var>" configuration should only apply to the command
described in the "namespace" (and its sub-commands, so e.g. "clone.*"
or "fetch.*" might also configure "git-remote-https").
But in the case of "fetch.credentialsInUrl" we've got a configuration
variable that configures the behavior of all of "clone", "push" and
"fetch"; someone adjusting "fetch.*" configuration won't expect to
have the behavior of "git push" altered, especially as we have the
pre-existing "{transfer,fetch,receive}.fsckObjects", which configures
different parts of the transfer dialog.
So let's move this configuration variable to the "transfer" namespace
before it's exposed in a release. We could add all of
"{transfer,fetch,pull}.credentialsInUrl" at some other time, but once
we have "fetch" configure "pull" such an arrangement would would be a
confusing mess, as we'd at least need to have "fetch" configure
"push" (but not the other way around), or change existing behavior.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Acked-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Amend the documentation and release notes entry for the
"fetch.credentialsInUrl" feature added in 6dcbdc0d66 (remote: create
fetch.credentialsInUrl config, 2022-06-06): it currently doesn't
detect passwords in `remote.<name>.pushurl` configuration. We
shouldn't lull users into a false sense of security, so we need to
mention that prominently.
This also elaborates and clarifies the "exposes the password in
multiple ways" part of the documentation. As noted in [1] a user
unfamiliar with git's implementation won't know what to make of that
scary claim, e.g. git hypothetically have novel git-specific ways of
exposing configured credentials.
The reality is that this configuration is intended as an aid for users
who can't fully trust their OS's or system's security model, so let's
say that's what this is intended for, and mention the most common ways
passwords stored in configuration might inadvertently get exposed.
1. https://lore.kernel.org/git/220524.86ilpuvcqh.gmgdl@evledraar.gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Acked-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix an issue that existed before 0527ccb1b5 (add -i: default to the
built-in implementation, 2021-11-30), but which became the default
behavior with that change: we should not be marking tests that are
known to pass as "TODO" tests.
When GIT_TEST_ADD_I_USE_BUILTIN=1 was made the default we started
passing the tests added in 0f0fba2cc8 (t3701: add a test for advanced
split-hunk editing, 2019-12-06) and 1bf01040f0 (add -p: demonstrate
failure when running 'edit' after a split, 2015-04-16).
Thus we've been emitting this sort of output:
$ prove ./t3701-add-interactive.sh
./t3701-add-interactive.sh .. ok
All tests successful.
Test Summary Report
-------------------
./t3701-add-interactive.sh (Wstat: 0 Tests: 70 Failed: 0)
TODO passed: 45, 47
Files=1, Tests=70, 2 wallclock secs ( 0.03 usr 0.00 sys + 0.86 cusr 0.33 csys = 1.22 CPU)
Result: PASS
This isn't just cosmetic: due to issues with test_expect_failure
(see [1]) we could e.g. be hiding something as bad as a segfault in
the new implementation. It makes sense to catch that, especially
before we put out a release with the built-in "add -i", so let's
generalize the check we were already doing in 0527ccb1b5 with a new
"ADD_I_USE_BUILTIN" prerequisite.
1. https://lore.kernel.org/git/patch-1.7-4624abc2591-20220318T002951Z-avarab@gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We used to log an error return from wait_or_whine() as process
termination of the waited child, which was incorrect.
* js/wait-or-whine-can-fail:
run-command: don't spam trace2_child_exit()
Sample watchman interface hook sometimes failed to produce
correctly formatted JSON message, which has been corrected.
* sn/fsmonitor-missing-clock:
fsmonitor: query watchman with right valid json
Redundant copying (with index v3 and older) and possible over-reading
beyond the end of mmapped memory (with index v4) have been corrected.
* zh/read-cache-copy-name-entry-fix:
read-cache.c: reduce unnecessary cache entry name copying
"git show-ref --heads" (and "--tags") still iterated over all the
refs only to discard refs outside the specified area, which has
been corrected.
* tb/show-ref-optim:
builtin/show-ref.c: avoid over-iterating with --heads, --tags
The "fetch.credentialsInUrl" configuration variable controls what
happens when a URL with embedded login credential is used.
* ds/credentials-in-url:
remote: create fetch.credentialsInUrl config
Updating the graft information invalidates the list of parents of
in-core commit objects that used to be in the graft file.
* jt/unparse-commit-upon-graft-change:
commit,shallow: unparse commits if grafts changed
In Git 2.36 we revamped the way hooks are invoked. One end-user-visible
change is that the output of a hook is no longer directly connected to
the standard output of the "git" process that spawns the hook, which
was noticed post-release. This is getting corrected.
* ab/hooks-regression-fix:
hook API: fix v2.36.0 regression: hooks should be connected to a TTY
run-command: add an "ungroup" option to run_process_parallel()
"git -c diff.submodule=log range-diff" did not show anything for
submodules that changed in the ranges being compared, and
"git -c diff.submodule=diff range-diff" did not work correctly.
Fix this by including the "--submodule=short" output
unconditionally to be compared.
* pb/range-diff-with-submodule:
range-diff: show submodule changes irrespective of diff.submodule
The two examples in the doc for 'git diff-index' were not updated when
the raw output format was changed in 81e50eabf0 ([PATCH] The diff-raw
format updates., 2005-05-21) (first example) and in b6d8f309d9 ([PATCH]
diff-raw format update take #2., 2005-05-23) and 7cb6ac1e4b (diff:
diff_aligned_abbrev: remove ellipsis after abbreviated SHA-1 value,
2017-12-03) (second example).
Update the output, inventing some characters to complete the source
hash in the second example. Also correct the destination mode in the
second example, which was wrongly '100664' since the addition of the
example in c64b9b8860 (Reference documentation for the core git
commands., 2005-05-05).
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Near the end of the "Raw output format" section, an example shows the
output of 'git diff-files' for a tracked file modified on disk but not
yet added to the index. However the wording is:
<sha1> is shown as all 0's if a file is new on the filesystem
and it is out of sync with the index.
which is confusing since it can be understood to mean that 'file' is a
new, yet untracked file, in which case 'git diff-files' does not care
about it at all.
When this example was introduced all the way back in c64b9b8860
(Reference documentation for the core git commands., 2005-05-05), 'old'
and 'new' referred to the two entities being compared, depending on the
command being used (diff-index, diff-tree or diff-files - which at the
time were diff-cache, diff-tree and show-diff). The wording used at the
time was:
<new-sha1> is shown as all 0's if new is a file on the
filesystem and it is out of sync with the cache.
This section was reworked in 81e50eabf0 ([PATCH] The diff-raw
format updates., 2005-05-21) and the mention of the meaning of 'new' and
'old' was removed. Then in f73ae1fc5d (Some typos and light editing of
various manpages, 2005-10-05), the wording was changed to what it is
now.
In addition, in b6d8f309d9 ([PATCH] diff-raw format update take #2.,
2005-05-23), the section was further reworked and did not use '<sha1>'
anymore, making the example the sole user of this token.
Rework the introductory sentence of the example to instead refer to
'sha1 for "dst"', which is what the text description above it uses, and
fix the wording so that we do not mention a "new file".
While at it, also tweak the wording used in the description of the raw
format to explicitly state that all 0's are used for the destination
hash if the working tree is out of sync with the index, instead of the
more vague "look at worktree".
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"dst" can legitimately be "0\{40\}" for a creation patch, e.g. when
the stat information is stale, but it falls into "look at work tree"
case. The original description in b6d8f309 ([PATCH] diff-raw format
update take #2., 2005-05-23) forgot that deletion also makes the
"dst" 0* SHA-1.
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When GCC produces those helpful errors, we will want to present them in
the GitHub workflow runs in the most helpful manner. To that end, we
want to use workflow commands to render errors and warnings:
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions
In the previous commit, we ensured that grouping is used for the build
in all jobs, and this allows us to piggy-back onto the `group` function
to transmogrify the output.
Note: If `set -o pipefail` was available, we could do this in a little
more elegant way. But since some of the steps are run using `dash`, we
have to do a little `{ ...; echo $? >exit.status; } | ...` dance.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We already do the same when building Git in all the other jobs.
This will allow us to piggy-back on top of grouping to mark up compiler
errors in the next commit.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A new bug() and BUG_if_bug() API is introduced to make it easier to
uniformly implement the "detect multiple bugs and abort in the end"
pattern (see the sketch after this list).
* ab/bug-if-bug:
cache-tree.c: use bug() and BUG_if_bug()
receive-pack: use bug() and BUG_if_bug()
parse-options.c: use optbug() instead of BUG() "opts" check
parse-options.c: use new bug() API for optbug()
usage.c: add a non-fatal bug() function to go with BUG()
common-main.c: move non-trace2 exit() behavior out of trace2.c
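As a generic, self-contained illustration of that pattern (placeholder
names, not git's actual usage.c implementation): report every problem
as it is found, and abort only once at the end if anything was
reported.

    #include <stdio.h>
    #include <stdlib.h>

    static int bug_seen;

    static void report_bug(const char *msg)
    {
        fprintf(stderr, "BUG: %s\n", msg);  /* log, but keep going */
        bug_seen = 1;
    }

    static void abort_if_bug(void)
    {
        if (bug_seen)
            abort();            /* die once, after everything was reported */
    }

    static void validate(int bad_a, int bad_b)
    {
        if (bad_a)
            report_bug("first problem");
        if (bad_b)
            report_bug("second problem");
        abort_if_bug();
    }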
More fsmonitor--daemon.
* jh/builtin-fsmonitor-part3: (30 commits)
t7527: improve implicit shutdown testing in fsmonitor--daemon
fsmonitor--daemon: allow --super-prefix argument
t7527: test Unicode NFC/NFD handling on MacOS
t/lib-unicode-nfc-nfd: helper prereqs for testing unicode nfc/nfd
t/helper/hexdump: add helper to print hexdump of stdin
fsmonitor: on macOS also emit NFC spelling for NFD pathname
t7527: test FSMonitor on case insensitive+preserving file system
fsmonitor: never set CE_FSMONITOR_VALID on submodules
t/perf/p7527: add perf test for builtin FSMonitor
t7527: FSMonitor tests for directory moves
fsmonitor: optimize processing of directory events
fsm-listen-darwin: shutdown daemon if worktree root is moved/renamed
fsm-health-win32: force shutdown daemon if worktree root moves
fsm-health-win32: add polling framework to monitor daemon health
fsmonitor--daemon: stub in health thread
fsmonitor--daemon: rename listener thread related variables
fsmonitor--daemon: prepare for adding health thread
fsmonitor--daemon: cd out of worktree root
fsm-listen-darwin: ignore FSEvents caused by xattr changes on macOS
unpack-trees: initialize fsmonitor_has_run_once in o->result
...
A misconfigured 'branch..remote' led to a bug in configuration
parsing.
* gc/zero-length-branch-config-fix:
remote.c: reject 0-length branch names
remote.c: don't BUG() on 0-length branch names
Rename .env_array member to .env in the child_process structure.
* ab/env-array:
run-command API users: use "env" not "env_array" in comments & names
run-command API: rename "env_array" to "env"
With a more targeted workaround in http.c in another topic, we may
be able to lift this blanket "GCC12 dangling-pointer warning is
broken and unsalvageable" workaround.
* cb/buggy-gcc-12-workaround:
Revert -Wno-error=dangling-pointer
Because of the wrapping of the branch name variable $b, the colour codes
in the variable don't get applied, but are instead printed directly in
the output. Move the wrapping of $b to before colour codes are inserted
to correct this. Revert move of branch name colour codes in tests, as
the branch name is now coloured after the wrapping instead of before.
Signed-off-by: Joakim Petersen <joak-pet@online.no>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The resolve-undo extension was added to the index in cfc5789a
(resolve-undo: record resolved conflicts in a new index extension
section, 2009-12-25). This extension records the blob object names
and their modes of conflicted paths when the path gets resolved
(e.g. with "git add"), to allow "undoing" the resolution with
"checkout -m path". These blob objects should be guarded from
garbage-collection while we have the resolve-undo information in the
index (otherwise unresolve operation may try to use a blob object
that has already been pruned away).
But the code called from mark_reachable_objects() for the index
forgets to do so. Teach add_index_objects_to_pending() helper to
also add objects referred to by the resolve-undo extension.
Also make matching changes to "fsck", which has code that is fairly
similar to the reachability stuff, but have parallel implementations
for all these stuff, which may (or may not) someday want to be unified.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Using `ssh-add -L` for gpg.ssh.defaultKeyCommand is not a good
recommendation: it might switch keys depending on the order of known
keys, and it only supports ssh-* keys, not ecdsa or other key types.
Clarify that we expect a literal key prefixed by `key::`, give valid
example use cases and refer to `user.signingKey` as the preferred
option.
Signed-off-by: Fabian Stelzer <fs@gigacodes.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git now shows better information in the GitHub workflow runs when a test
case failed. However, when a test case was implemented incorrectly and
therefore does not even run, nothing is shown.
Let's bring back the step that prints the full logs of the failed tests,
and to improve the user experience, print out an informational message
for readers so that they do not have to know/remember where to see the
full logs.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git clone --origin X" leaked piece of memory that held value read
from the clone.defaultRemoteName configuration variable, which has
been plugged.
source: <xmqqlevl4ysk.fsf@gitster.g>
* jc/clone-remote-name-leak-fix:
clone: plug a miniscule leak
The path taken by "git multi-pack-index" command from the end user
was compared with path internally prepared by the tool withut first
normalizing, which lead to duplicated paths not being noticed,
which has been corrected.
source: <pull.1221.v2.git.1650911234.gitgitgadget@gmail.com>
* ds/midx-normalize-pathname-before-comparison:
cache: use const char * for get_object_directory()
multi-pack-index: use --object-dir real path
midx: use real paths in lookup_multi_pack_index()
"git rebase --keep-base <upstream> <branch-to-rebase>" computed the
commit to rebase onto incorrectly, which has been corrected.
source: <20220421044233.894255-1-alexhenrie24@gmail.com>
* ah/rebase-keep-base-fix:
rebase: use correct base for --keep-base when a branch is given
Avoid problems from interaction between malloc_check and address
sanitizer.
source: <pull.1210.git.1649507317350.gitgitgadget@gmail.com>
* pw/test-malloc-with-sanitize-address:
tests: make SANITIZE=address imply TEST_NO_MALLOC_CHECK
The commit summary shown after making a commit is matched to what
is given in "git status" not to use the break-rewrite heuristics.
source: <c35bd0aa-2e46-e710-2b39-89f18bad0097@web.de>
* rs/commit-summary-wo-break-rewrite:
commit, sequencer: turn off break_opt for commit summary
macOS CI jobs have been occasionally flaky due to tentative version
skew between perforce and the homebrew packager. Instead of
failing the whole CI job, just let it skip the p4 tests when this
happens.
source: <20220512223940.238367-1-gitster@pobox.com>
* cb/ci-make-p4-optional:
ci: use https, not http to download binaries from perforce.com
ci: reintroduce prevention from perforce being quarantined in macOS
ci: avoid brew for installing perforce
ci: make failure to find perforce more user friendly
A few test framework fixes, along with fixes for issues found by
valgrind.
source: <20220512223218.237544-1-gitster@pobox.com>
* ab/valgrind-fixes:
commit-graph.c: don't assume that stat() succeeds
object-file: fix a unpack_loose_header() regression in 3b6a8db3b0
log test: skip a failing mkstemp() test under valgrind
tests: using custom GIT_EXEC_PATH breaks --valgrind tests
"git archive --add-file=<path>" picked up the raw permission bits
from the path and propagated to zip output in some cases, without
normalization, which has been corrected (tar output did not have
this issue).
source: <xmqqmtfme8v6.fsf@gitster.g>
* jc/archive-add-file-normalize-mode:
archive: do not let on-disk mode leak to zip archives
The "--current" option of "git show-branch" should have been made
incompatible with the "--reflog" mode, but this was not enforced,
which has been corrected.
source: <xmqqh76mf7s4.fsf_-_@gitster.g>
* jc/show-branch-g-current:
show-branch: -g and --current are incompatible
Meant to go with js/ci-gcc-12-fixes.
source: <xmqq7d68ytj8.fsf_-_@gitster.g>
* jc/http-clear-finished-pointer:
http.c: clear the 'finished' member once we are done with it
Fixes real problems noticed by gcc 12 and works around false
positives.
source: <pull.1238.git.1653351786.gitgitgadget@gmail.com>
* js/ci-gcc-12-fixes:
dir.c: avoid "exceeds maximum object size" error with GCC v12.x
nedmalloc: avoid new compile error
compat/win32/syslog: fix use-after-realloc
Test that "git config --show-scope" shows the "worktree" scope, and add
it to the list of scopes in Documentation/git-config.txt.
"git config --help" does not need to be updated because it already
mentions "worktree".
Signed-off-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 8d4d86b0 (cache: remove null_sha1, 2019-08-18) removed the
is_null_sha1() function, the rewrite rules that correct callers of the
function to use is_null_oid() instead have become irrelevant, as any
new callers of the function will get caught by the compiler much
more quickly without spending cycles on Coccinelle.
Remove these rules.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A git subcommand like "git add -p" spawns a separate git process
while relaying its command line arguments. A pathspec with only
negative elements was mistakenly passed with an empty string, which
has been corrected.
* jc/all-negative-pathspec:
pathspec: correct an empty string used as a pathspec element
Implementation of "scalar diagnose" subcommand.
* js/scalar-diagnose:
scalar: teach `diagnose` to gather loose objects information
scalar: teach `diagnose` to gather packfile info
scalar diagnose: include disk space information
scalar: implement `scalar diagnose`
scalar: validate the optional enlistment argument
archive --add-virtual-file: allow paths containing colons
archive: optionally add "virtual" files
The documentation on the interaction between "--add-file" and
"--prefix" options of "git archive" has been improved.
* rs/document-archive-prefix:
archive: improve documentation of --prefix
Leakfix.
* fh/transport-push-leakfix:
transport: free local and remote refs in transport_push()
transport: unify return values and exit point from transport_push()
transport: remove unnecessary indenting in transport_push()
Update the GitHub workflow support to make it quicker to get to the
failing test.
* js/ci-github-workflow-markup:
ci: call `finalize_test_case_output` a little later
ci(github): mention where the full logs can be found
ci: use `--github-workflow-markup` in the GitHub workflow
ci(github): avoid printing test case preamble twice
ci(github): skip the logs of the successful test cases
ci: optionally mark up output in the GitHub workflow
ci/run-build-and-tests: add some structure to the GitHub workflow output
ci: make it easier to find failed tests' logs in the GitHub workflow
ci/run-build-and-tests: take a more high-level view
test(junit): avoid line feeds in XML attributes
tests: refactor --write-junit-xml code
ci: fix code style
Plug the memory leaks from the trickiest API of all, the revision
walker.
* ab/plug-leak-in-revisions: (27 commits)
revisions API: add a TODO for diff_free(&revs->diffopt)
revisions API: have release_revisions() release "topo_walk_info"
revisions API: have release_revisions() release "date_mode"
revisions API: call diff_free(&revs->pruning) in revisions_release()
revisions API: release "reflog_info" in release revisions()
revisions API: clear "boundary_commits" in release_revisions()
revisions API: have release_revisions() release "prune_data"
revisions API: have release_revisions() release "grep_filter"
revisions API: have release_revisions() release "filter"
revisions API: have release_revisions() release "cmdline"
revisions API: have release_revisions() release "mailmap"
revisions API: have release_revisions() release "commits"
revisions API users: use release_revisions() for "prune_data" users
revisions API users: use release_revisions() with UNLEAK()
revisions API users: use release_revisions() in builtin/log.c
revisions API users: use release_revisions() in http-push.c
revisions API users: add "goto cleanup" for release_revisions()
stash: always have the owner of "stash_info" free it
revisions API users: use release_revisions() needing REV_INFO_INIT
revision.[ch]: document and move code declared around "init"
...
CMake updates.
* yw/cmake-updates:
cmake: remove (_)UNICODE def on Windows in CMakeLists.txt
cmake: add pcre2 support
cmake: fix CMakeLists.txt on Linux
In rare cases[1], wait_or_whine() cannot determine a child process's
status (and will return -1 in this case). This can cause Git to issue
trace2 child_exit events despite the fact that the child may still be
running. In pathological cases, we've seen > 80 million exit events in
our trace logs for a single child process.
Fix this by only issuing trace2 events in finish_command_in_signal() if
we get a value other than -1 from wait_or_whine(). This can lead to
missing child_exit events in such a case, but that is preferable to
duplicating events on a scale that threatens to fill the user's
filesystem with invalid trace logs.
[1]: This can happen when:
* waitpid() returns -1 and errno != EINTR
* waitpid() returns an invalid PID
* the status set by waitpid() has neither the WIFEXITED() nor
WIFSIGNALED() flags
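In simplified form (hypothetical helper names and signatures; the real
code lives in run-command.c), the guard amounts to:

    int code = collect_child_status(pid);  /* may return -1: status unknown */
    if (code != -1)
        log_child_exit_event(pid, code);   /* otherwise, emit no exit event */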
Signed-off-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix a regression reported[1] against f443246b9f (commit: convert
{pre-commit,prepare-commit-msg} hook to hook.h, 2021-12-22): Due to
using the run_process_parallel() API in the earlier 96e7225b31 (hook:
add 'run' subcommand, 2021-12-22) we'd capture the hook's stderr and
stdout, and thus lose the connection to the TTY in the case of
e.g. the "pre-commit" hook.
As a preceding commit notes, GNU parallel's similar --ungroup option
also makes it emit output faster. While we're unlikely to have hooks
that emit truly massive amounts of output (or where the performance
thereof matters) it's still informative to measure the overhead. In a
similar "seq" test we're now ~30% faster:
$ cat .git/hooks/seq-hook; git hyperfine -L rev origin/master,HEAD~0 -s 'make CFLAGS=-O3' './git hook run seq-hook'
#!/bin/sh
seq 100000000
Benchmark 1: ./git hook run seq-hook' in 'origin/master
Time (mean ± σ): 787.1 ms ± 13.6 ms [User: 701.6 ms, System: 534.4 ms]
Range (min … max): 773.2 ms … 806.3 ms 10 runs
Benchmark 2: ./git hook run seq-hook' in 'HEAD~0
Time (mean ± σ): 603.4 ms ± 1.6 ms [User: 573.1 ms, System: 30.3 ms]
Range (min … max): 601.0 ms … 606.2 ms 10 runs
Summary
'./git hook run seq-hook' in 'HEAD~0' ran
1.30 ± 0.02 times faster than './git hook run seq-hook' in 'origin/master'
1. https://lore.kernel.org/git/CA+dzEBn108QoMA28f0nC8K21XT+Afua0V2Qv8XkR8rAeqUCCZw@mail.gmail.com/
Reported-by: Anthony Sottile <asottile@umich.edu>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
[jc: minor fix-up to tests for consistency]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix a bug in fd3cb0501e (remote: move static variables into
per-repository struct, 2021-11-17) where we'd free(remote->pushurl[i])
after having NULL'd out remote->pushurl itself. We free
"remote->pushurl" in the next "for"-loop, so doing this appears to
have been a copy/paste error.
Before this change GCC 12's -fanalyzer would correctly note that we'd
dereference NULL in this case, this change fixes that:
remote.c: In function ‘remote_clear’:
remote.c:153:17: error: dereference of NULL ‘*remote.pushurl’ [CWE-476] [-Werror=analyzer-null-dereference]
153 | free((char *)remote->pushurl[i]);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[...]
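The corrected teardown order, sketched (simplified; not the literal
remote.c hunk): release every element while the array pointer is still
valid, and only then free and clear the array itself. FREE_AND_NULL()
is git's free-then-NULL helper.

    for (i = 0; i < remote->pushurl_nr; i++)
        free((char *)remote->pushurl[i]);   /* elements first ... */
    FREE_AND_NULL(remote->pushurl);         /* ... then the array itself */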
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove braces that don't follow the CodingGuidelines from code added
in fd3cb0501e (remote: move static variables into per-repository
struct, 2021-11-17). A subsequent commit will edit code adjacent to
this.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extend the parallel execution API added in c553c72eed (run-command:
add an asynchronous parallel child processor, 2015-12-15) to support a
mode where the stdout and stderr of the processes isn't captured and
output in a deterministic order, instead we'll leave it to the kernel
and stdio to sort it out.
This gives the API same functionality as GNU parallel's --ungroup
option. As we'll see in a subsequent commit the main reason to want
this is to support stdout and stderr being connected to the TTY in the
case of jobs=1, demonstrated here with GNU parallel:
$ parallel --ungroup 'test -t {} && echo TTY || echo NTTY' ::: 1 2
TTY
TTY
$ parallel 'test -t {} && echo TTY || echo NTTY' ::: 1 2
NTTY
NTTY
Another is, as GNU parallel's documentation notes, a potential for
optimization. As demonstrated in the next commit, our results with "git
hook run" will be similar, but generally speaking this shows that if
you want to run processes in parallel where the exact order isn't
important this can be a lot faster:
$ hyperfine -r 3 -L o ,--ungroup 'parallel {o} seq ::: 10000000 >/dev/null '
Benchmark 1: parallel seq ::: 10000000 >/dev/null
Time (mean ± σ): 220.2 ms ± 9.3 ms [User: 124.9 ms, System: 96.1 ms]
Range (min … max): 212.3 ms … 230.5 ms 3 runs
Benchmark 2: parallel --ungroup seq ::: 10000000 >/dev/null
Time (mean ± σ): 154.7 ms ± 0.9 ms [User: 136.2 ms, System: 25.1 ms]
Range (min … max): 153.9 ms … 155.7 ms 3 runs
Summary
'parallel --ungroup seq ::: 10000000 >/dev/null ' ran
1.42 ± 0.06 times faster than 'parallel seq ::: 10000000 >/dev/null '
A large part of the juggling in the API is to make the API safer for
its maintenance and consumers alike.
For the maintenance of the API we e.g. avoid malloc()-ing the
"pp->pfd", ensuring that SANITIZE=address and other similar tools will
catch any unexpected misuse.
For API consumers we take pains to never pass the non-NULL "out"
buffer to an API user that provided the "ungroup" option. The
resulting code in t/helper/test-run-command.c isn't typical of such a
user, i.e. they'd typically use one mode or the other, and would know
whether they'd provided "ungroup" or not.
We could also avoid the strbuf_init() for "buffered_output" by having
"struct parallel_processes" use a static PARALLEL_PROCESSES_INIT
initializer, but let's leave that cleanup for later.
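To illustrate the underlying idea independently of git's run-command
internals (a self-contained sketch, not the API added here): with
grouped output the child's stdout/stderr are redirected into a pipe
that the parent buffers and replays in order, while "ungroup" simply
leaves file descriptors 1 and 2 as inherited, e.g. still connected to
the TTY.

    #include <sys/types.h>
    #include <unistd.h>

    static pid_t spawn(char *const argv[], int ungroup, int pipefd[2])
    {
        pid_t pid;

        if (!ungroup && pipe(pipefd) < 0)
            return -1;

        pid = fork();
        if (!pid) {
            if (!ungroup) {
                dup2(pipefd[1], 1);     /* capture stdout ... */
                dup2(pipefd[1], 2);     /* ... and stderr into the pipe */
                close(pipefd[0]);
                close(pipefd[1]);
            }
            /* ungroup: fds 1 and 2 stay as inherited (e.g. the TTY) */
            execvp(argv[0], argv);
            _exit(127);
        }
        if (!ungroup)
            close(pipefd[1]);           /* parent will read pipefd[0] */
        return pid;
    }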
Using a global "run_processes_parallel_ungroup" variable to enable
this option is rather nasty, but is being done here to produce as
minimal of a change as possible for a subsequent regression fix. This
change is extracted from a larger initial version[1] which ends up
with a better end-state for the API, but in doing so needed to modify
all existing callers of the API. Let's defer that for now, and
narrowly focus on what we need for fixing the regression in the
subsequent commit.
It's safe to do this with a global variable because:
A) hook.c is the only user of it that sets it to non-zero, and before
we'll get any other API users we'll refactor away this method of
passing in the option, i.e. re-roll [1].
B) Even if hook.c wasn't the only user we don't have callers of this
API that concurrently invoke this parallel process starting API
itself in parallel.
As noted above "A" && "B" are rather nasty, and we don't want to live
with those caveats long-term, but for now they should be an acceptable
compromise.
1. https://lore.kernel.org/git/cover-v2-0.8-00000000000-20220518T195858Z-avarab@gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In rare circumstances where the current git index does not carry the
last_update_token, the fsmonitor v2 hook will be invoked with an
empty string, which would cause the final rendered JSON to be invalid:
["query", "/path/to/my/git/repository/", {
"since": ,
"fields": ["name"],
"expression": ["not", ["dirname", ".git"]]
}]
This leaves the user with the following error message:
> git status
failed to parse command from stdin: line 2, column 13, position 67: unexpected token near ','
Watchman: command returned no output.
Falling back to scanning...
Hide the "since" field in json query when "last_update_token" is empty.
Co-authored-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Son Luong Ngoc <sluongng@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The short upstream state indicator inherits the colour of the last short
state indicator before it (if there is one), and the sparsity state
indicator inherits this colour as well. This behaviour was introduced by
0ec7c23cdc (git-prompt: make upstream state indicator location
consistent, 2022-02-27), while before this change the aforementioned
indicators were white/the default text colour. Some examples to
illustrate this behaviour (assuming all indicators are enabled and
colourization is on):
* If there is something in the stash, both the '$' and the short
upstream state indicator following it will be blue.
* If the local tree has new, untracked files and there is nothing in
the stash, both the '%' and the short upstream state indicator
will be red.
* If all local changes are added to the index and the stash is empty,
both the '+' and the short upstream state indicator following it will
be green.
* If the local tree is clean and there is nothing in the stash, the
short upstream state indicator will be white/${default text colour}.
This appears to be an unintended side-effect of the change, and makes
little sense semantically (e.g. why is it bad to be in sync with
upstream when you have uncommitted local changes?). The cause of the
change in colourization is that previously, the short upstream state
indicator appeared immediately after the rebase/revert/bisect/merge
state indicator (note the position of $p in $gitstring):
local f="$h$w$i$s$u"
local gitstring="$c$b${f:+$z$f}${sparse}$r$p"
Said indicator is prepended with the clear colour code, and the short
upstream state indicator is thus also uncoloured. Now, the short
upstream state indicator follows the sequence of colourized indicators,
without any clearing of colour (again note the position of $p, now in
$f):
local f="$h$w$i$s$u$p"
local gitstring="$c$b${f:+$z$f}${sparse}$r${upstream}"
If the user is in a sparse checkout, the sparsity state indicator
follows a similar pattern to the short upstream state indicator.
However, clearing colour of the colourized indicators changes how the
sparsity state indicator is colourized, as it currently inherits (and
before the change referenced also inherited) the colour of the last
short state indicator before it. Reading the commit message of the
change that introduced the sparsity state indicator, afda36dbf3
(git-prompt: include sparsity state as well, 2020-06-21), it appears
this colourization also was unintended, so clearing the colour for said
indicator further increases consistency.
Make the colourization of these state indicators consistent by making
all colourized indicators clear their own colour. Make colouring of $c
dependent on it not being empty, as it is no longer being used to colour
the branch name. Move clearing of $b's prefix to before colourization so
it gets cleared properly when colour codes are inserted into it. These
changes make changing the layout of the prompt less prone to unintended
colour changes in the future.
Change coloured Bash prompt tests to reflect the colourization changes:
* Move the colour codes to wrap the expected content of the expanded
$__git_ps1_branch_name in all tests.
* Insert a clear-colour code after the symbol for the first indicator
in "prompt - bash color pc mode - dirty status indicator - dirty
index and worktree", to reflect that all indicators should clear
their own colour.
Signed-off-by: Joakim Petersen <joak-pet@online.no>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
After generating diffs for each range to be compared using a 'git log'
invocation, range-diff.c::read_patches looks for the "diff --git" header
in those diffs to recognize the beginning of a new change.
In a project with submodules, and with 'diff.submodule=log' set in the
config, this header is missing for the diff of a changed submodule, so
any submodule changes are quietly ignored in the range-diff.
When 'diff.submodule=diff' is set in the config, the "diff --git" header
is also missing for the submodule itself, but is shown for submodule
content changes, which can easily confuse 'git range-diff' and lead to
errors such as:
error: git apply: bad git-diff - inconsistent old filename on line 1
error: could not parse git header 'diff --git path/to/submodule/and/some/file/within
'
error: could not parse log for '@{u}..@{1}'
Force the submodule diff format to its default ("short") when invoking
'git log' to generate the patches for each range, such that submodule
changes are always detected.
Add a test, including an invocation with '--creation-factor=100' to
force the second commit in the range not to be considered a complete
rewrite, in order to verify we do indeed get the "short" format.
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a commit is parsed, it pretends to have a different (possibly
empty) list of parents if there is graft information for that commit.
But there is a bug that could occur when a commit is parsed, the graft
information is updated (for example, when a shallow file is rewritten),
and the same commit is subsequently used: the parents of the commit do
not conform to the updated graft information, but to the information
at the time of parsing.
This is usually not an issue, as a commit is usually introduced into the
repository at the same time as its graft information. That means that
when we try to parse that commit, we already have its graft information.
But it is an issue when fetching a shallow point directly into a
repository with submodules. The function
assign_shallow_commits_to_refs() parses all sought objects (including
the shallow point, which we are directly fetching). In update_shallow()
in fetch-pack.c, assign_shallow_commits_to_refs() is called before
commit_shallow_file(), which means that the shallow point would have
been parsed before graft information is updated. Once a commit is
parsed, it is no longer sensitive to any graft information updates. This
parsed commit is subsequently used when we do a revision walk to search
for submodules to fetch, meaning that the commit is considered to have
parents even though it is a shallow point (and therefore should be
treated as having no parents).
Therefore, whenever graft information is updated, mark the commits that
were previously grafts and the commits that are newly grafts as
unparsed.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is a user-facing message for a situation seen in the wild.
Translate it.
Signed-off-by: Philip Oakley <philipoakley@iee.email>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `--preserve-merges` option was removed by v2.34.0. However, users
may not be aware that it is also available as a pull configuration
option, which is still offered by major IDE vendors such as Visual
Studio.
Extend the `--preserve-merges` die message to also direct users to
the possible use of the `preserve` option in the `pull.rebase` config.
This is an additional 'belt and braces' information statement.
Signed-off-by: Philip Oakley <philipoakley@iee.email>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git would die if a "rebase --preserve-merges" was in progress.
Users could neither --quit, --abort, nor --continue the rebase.
Make the `rebase --abort` option available to allow users to remove
traces of any preserve-merges rebase, even if they had upgraded
during a rebase.
One trigger case was an unexpectedly difficult to resolve conflict, as
reported on the `git-users` group.
(https://groups.google.com/g/git-for-windows/c/3jMWbBlXXHM)
Other potential use-cases include git-experts using the portable
'Git on a stick' to help users with an older git version.
Signed-off-by: Philip Oakley <philipoakley@iee.email>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since feebd2d256 (rebase: hide --preserve-merges option, 2019-10-18)
this option is now removed as stated in the subsequent release notes.
Fix and reflow the option tip.
Signed-off-by: Philip Oakley <philipoakley@iee.email>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
575fa8a3 (read-cache: read data in a hash-independent way,
2019-02-19) added new code to copy from the on-disk data into the
name member of the in-core cache entry, something that is already done
immediately after that in a way that takes prefix compression into
account.
Remove this code: it is not just unnecessary, it can also read beyond
the on-disk data when we are copying a very long prefix string from
the previous entry.
Signed-off-by: ZheNing Hu <adlternative@gmail.com>
[jc: rewrote the log message with René's findings]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When `show-ref` is combined with the `--heads` or `--tags` options, it
can avoid iterating parts of a repository's references that it doesn't
care about.
But it doesn't take advantage of this potential optimization. When this
command was introduced back in 358ddb62cf (Add "git show-ref" builtin
command, 2006-09-15), `for_each_ref_in()` did exist. But since most
repositories don't have many (any?) references that aren't branches or
tags already, this makes little difference in practice.
Though for repositories with a large imbalance of branches and tags (or,
more likely in the case of server operators, many hidden references),
this can make quite a difference. Take, for example, a repository with
500,000 "hidden" references (all of the form "refs/__hidden__/N"), and
a single branch:
git commit --allow-empty -m "base" &&
seq 1 500000 | sed 's,\(.*\),create refs/__hidden__/\1 HEAD,' |
git update-ref --stdin &&
git pack-refs --all
Outputting the existence of that single branch currently takes on the
order of ~50ms on my machine. The vast majority of this time is wasted
iterating through references that we know we're going to discard.
Instead, teach `show-ref` that it can iterate just "refs/heads" and/or
"refs/tags" when given `--heads` and/or `--tags`, respectively. A few
small interesting things to note:
- When given either option, we can avoid the general-purpose
for_each_ref() call altogether, since we know that it won't give us
any references that we wouldn't filter out already.
- We can make two separate calls to `for_each_fullref_in()` (and
avoid, say, the more specialized `for_each_fullref_in_prefixes()`),
since we know that the sets of references enumerated by each are
disjoint, so we'll never see the same reference appear in both
calls.
- We have to use the "fullref" variant (instead of just
`for_each_branch_ref()` and `for_each_tag_ref()`), since we expect
fully-qualified reference names to appear in `show-ref`'s output.
When either of `heads_only` or `tags_only` is set, we can eliminate the
strcmp() calls in `builtin/show-ref.c::show_ref()` altogether, since we
know that `show_ref()` will never see a non-branch or tag reference.
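Conceptually (with a hypothetical prefix-based iterator standing in
for the refs API; this is not the actual builtin/show-ref.c change),
the dispatch becomes:

    if (heads_only)
        for_each_ref_with_prefix("refs/heads/", show_one_ref, NULL);
    if (tags_only)
        for_each_ref_with_prefix("refs/tags/", show_one_ref, NULL);
    if (!heads_only && !tags_only)
        for_each_ref_with_prefix("", show_one_ref, NULL);   /* all refs */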
Unfortunately, we can't use `for_each_fullref_in_prefixes()` to enhance
`show-ref`'s pattern matching, since `show-ref` patterns match on the
_suffix_ (e.g., the pattern "foo" shows "refs/heads/foo",
"refs/tags/foo", and etc, not "foo/*").
Nonetheless, in our synthetic example above, this provides a significant
speed-up ("git" is roughly v2.36, "git.compile" is this patch):
$ hyperfine -N 'git show-ref --heads' 'git.compile show-ref --heads'
Benchmark 1: git show-ref --heads
Time (mean ± σ): 49.9 ms ± 6.2 ms [User: 45.6 ms, System: 4.1 ms]
Range (min … max): 46.1 ms … 73.6 ms 43 runs
Benchmark 2: git.compile show-ref --heads
Time (mean ± σ): 2.8 ms ± 0.4 ms [User: 1.4 ms, System: 1.2 ms]
Range (min … max): 1.3 ms … 5.6 ms 957 runs
Summary
'git.compile show-ref --heads' ran
18.03 ± 3.38 times faster than 'git show-ref --heads'
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Users sometimes provide a "username:password" combination in their
plaintext URLs. Since Git stores these URLs in plaintext in the
.git/config file, this is a very insecure way of storing these
credentials. Credential managers are a more secure way of storing this
information.
System administrators might want to prevent this kind of use by users on
their machines.
Create a new "fetch.credentialsInUrl" config option and teach Git to
warn or die when seeing a URL with this kind of information. The warning
anonymizes the sensitive information of the URL to be clear about the
issue.
This change currently defaults the behavior to "allow" which does
nothing with these URLs. We can consider changing this behavior to
"warn" by default if we wish. At that time, we may want to add some
advice about setting fetch.credentialsInUrl=ignore for users who still
want to follow this pattern (and not receive the warning).
An earlier version of this change injected the logic into
url_normalize() in urlmatch.c. While most code paths that parse URLs
eventually normalize the URL, that normalization does not happen early
enough in the stack to avoid attempting connections to the URL first. By
inserting a check into the remote validation, we identify the issue
before making a connection. In the old code path, this was revealed by
testing the new t5601-clone.sh test under --stress, resulting in an
instance where the return code was 13 (SIGPIPE) instead of 128 from the
die().
However, we can reuse the parsing information from url_normalize() in
order to benefit from its well-worn parsing logic. We can use the struct
url_info that is created in that method to replace the password with
"<redacted>" in our error messages. This comes with a slight downside
that the normalized URL might look slightly different from the input URL
(for instance, the normalized version adds a closing slash). This should
not hinder users figuring out what the problem is and being able to fix
the issue.
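As a self-contained sketch of the kind of detection described (a
hypothetical helper, not the actual remote.c/urlmatch.c code): find the
userinfo part of the URL, and if it contains a password, warn with the
password redacted.

    #include <stdio.h>
    #include <string.h>

    static void warn_if_credentials_in_url(const char *url)
    {
        const char *scheme = strstr(url, "://");
        const char *at = strchr(url, '@');
        const char *colon;

        if (!scheme || !at || at < scheme)
            return;                         /* no userinfo part */
        colon = memchr(scheme + 3, ':', at - (scheme + 3));
        if (!colon)
            return;                         /* username only is fine */
        fprintf(stderr,
                "warning: URL '%.*s<redacted>%s' contains a plaintext password\n",
                (int)(colon + 1 - url), url, at);
    }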
As an attempt to ensure the parsing logic did not catch any
unintentional cases, I modified this change locally to to use the "die"
option by default. Running the test suite succeeds except for the
explicit username:password URLs used in t5550-http-fetch-dumb.sh and
t5541-http-push-smart.sh. This means that all other tested URLs did not
trigger this logic.
The tests show that the proper error messages appear (or do not
appear), but also count the number of error messages. When only warning,
each process validates the remote URL and outputs a warning. This
happens twice for clone, three times for fetch, and once for push.
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A mechanism to pack unreachable objects into a "cruft pack",
instead of ejecting them into loose form to be reclaimed later, has
been introduced.
* tb/cruft-packs:
sha1-file.c: don't freshen cruft packs
builtin/gc.c: conditionally avoid pruning objects via loose
builtin/repack.c: add cruft packs to MIDX during geometric repack
builtin/repack.c: use named flags for existing_packs
builtin/repack.c: allow configuring cruft pack generation
builtin/repack.c: support generating a cruft pack
builtin/pack-objects.c: --cruft with expiration
reachable: report precise timestamps from objects in cruft packs
reachable: add options to add_unseen_recent_objects_to_traversal
builtin/pack-objects.c: --cruft without expiration
builtin/pack-objects.c: return from create_object_entry()
t/helper: add 'pack-mtimes' test-tool
pack-mtimes: support writing pack .mtimes files
chunk-format.h: extract oid_version()
pack-write: pass 'struct packing_data' to 'stage_tmp_packfiles'
pack-mtimes: support reading .mtimes files
Documentation/technical: add cruft-packs.txt
Disable the "do not remove the directory the user started Git in"
logic when Git cannot tell where that directory is. Earlier we
refused to run in such a case.
* kl/setup-in-unreadable-worktree:
setup: don't die if realpath(3) fails on getcwd(3)
A workflow change for translators is being proposed.
* jx/l10n-workflow-change:
l10n: Document the new l10n workflow
Makefile: add "po-init" rule to initialize po/XX.po
Makefile: add "po-update" rule to update po/XX.po
po/git.pot: don't check in result of "make pot"
po/git.pot: this is now a generated file
Makefile: remove duplicate and unwanted files in FOUND_SOURCE_FILES
i18n CI: stop allowing non-ASCII source messages in po/git.pot
Makefile: have "make pot" not "reset --hard"
Makefile: generate "po/git.pot" from stable LOCALIZED_C
Makefile: sort source files before feeding to xgettext
Teach "git repack --geometric" work better with "--keep-pack" and
avoid corrupting the repository when packsize limit is used.
* tb/geom-repack-with-keep-and-max:
builtin/repack.c: ensure that `names` is sorted
t7703: demonstrate object corruption with pack.packSizeLimit
repack: respect --keep-pack with geometric repack
"sparse-checkout" learns to work well with the sparse-index
feature.
* ds/sparse-sparse-checkout:
sparse-checkout: integrate with sparse index
p2000: add test for 'git sparse-checkout [add|set]'
sparse-index: complete partial expansion
sparse-index: partially expand directories
sparse-checkout: --no-sparse-index needs a full index
cache-tree: implement cache_tree_find_path()
sparse-index: introduce partially-sparse indexes
sparse-index: create expand_index()
t1092: stress test 'git sparse-checkout set'
t1092: refactor 'sparse-index contents' test
The multi-pack-index code did not protect the packfile it is going
to depend on from getting removed while in use, which has been
corrected.
* tb/midx-race-in-pack-objects:
builtin/pack-objects.c: ensure pack validity from MIDX bitmap objects
builtin/pack-objects.c: ensure included `--stdin-packs` exist
builtin/pack-objects.c: avoid redundant NULL check
pack-bitmap.c: check preferred pack validity when opening MIDX bitmap
Preliminary code refactoring around transport and bundle code.
* ds/bundle-uri:
bundle.h: make "fd" version of read_bundle_header() public
remote: allow relative_url() to return an absolute url
remote: move relative_url()
http: make http_get_file() external
fetch-pack: move --keep=* option filling to a function
fetch-pack: add a deref_without_lazy_fetch_extended()
dir API: add a generalized path_match_flags() function
connect.c: refactor sending of agent & object-format
Introduce a filesystem-dependent mechanism to optimize the way the
bits for many loose object files are ensured to hit the disk
platter.
* ns/batch-fsync:
core.fsyncmethod: performance tests for batch mode
t/perf: add iteration setup mechanism to perf-lib
core.fsyncmethod: tests for batch mode
test-lib-functions: add parsing helpers for ls-files and ls-tree
core.fsync: use batch mode and sync loose objects by default on Windows
unpack-objects: use the bulk-checkin infrastructure
update-index: use the bulk-checkin infrastructure
builtin/add: add ODB transaction around add_files_to_cache
cache-tree: use ODB transaction around writing a tree
core.fsyncmethod: batched disk flushes for loose-objects
bulk-checkin: rebrand plug/unplug APIs as 'odb transactions'
bulk-checkin: rename 'state' variable and separate 'plugged' boolean
Deprecate non-cone mode of the sparse-checkout feature.
* en/sparse-cone-becomes-default:
Documentation: some sparsity wording clarifications
git-sparse-checkout.txt: mark non-cone mode as deprecated
git-sparse-checkout.txt: flesh out pattern set sections a bit
git-sparse-checkout.txt: add a new EXAMPLES section
git-sparse-checkout.txt: shuffle some sections and mark as internal
git-sparse-checkout.txt: update docs for deprecation of 'init'
git-sparse-checkout.txt: wording updates for the cone mode default
sparse-checkout: make --cone the default
tests: stop assuming --no-cone is the default mode for sparse-checkout
Add a test for the regression introduced in my 9c4d58ff2c (ls-tree:
split up "fast path" callbacks, 2022-03-23) and fixed in
350296cc78 (ls-tree: `-l` should not imply recursive listing,
2022-04-04), and test for the test of ls-tree option/mode combinations
to make sure we don't have other blind spots.
The setup for these tests can be shared with those added in the
1041d58b4d (Merge branch 'tl/ls-tree-oid-only', 2022-04-04) topic, so
let's create a new t/lib-t3100.sh to help them share data.
The existing tests in "t3104-ls-tree-format.sh" didn't deal with a
submodule, which they'll now encounter with as the
setup_basic_ls_tree_data() sets one up.
This extensive testing should give us confidence that there were no
further regressions in this area. The lack of testing was noted back
in [1], but unfortunately we didn't cover that blind-spot before
9c4d58ff2c.
1. https://lore.kernel.org/git/211115.86o86lqe3c.gmgdl@evledraar.gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Follow up on a preceding commit, which changed all code references to
the "env_array" member of "struct child_process". These changes are
all unnecessary for the compiler, but help the code's human readers.
All the comments that referred to "env_array" have now been updated,
as well as function names and variables that had "env_array" in their
name, they now refer to "env".
In addition the "out" name for the submodule.h prototype was
inconsistent with the function definition's use of "env_array" in
submodule.c. Both of them use "env" now.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Start following-up on the rename mentioned in c7c4bdeccf (run-command
API: remove "env" member, always use "env_array", 2021-11-25) of
"env_array" to "env".
The "env_array" name was picked in 19a583dc39 (run-command: add
env_array, an optional argv_array for env, 2014-10-19) because "env"
was taken. Let's not forever keep the oddity of "*_array" for this
"struct strvec", but not for its "args" sibling.
This commit is almost entirely made with a coccinelle rule[1]. The
only manual change here is in run-command.h to rename the struct
member itself and to change "env_array" to "env" in the
CHILD_PROCESS_INIT initializer.
The rest of this is all a result of applying [1]:
* make contrib/coccinelle/run_command.cocci.patch
* patch -p1 <contrib/coccinelle/run_command.cocci.patch
* git add -u
1. cat contrib/coccinelle/run_command.pending.cocci
@@
struct child_process E;
@@
- E.env_array
+ E.env
@@
struct child_process *E;
@@
- E->env_array
+ E->env
I've avoided changing any comments and derived variable names here;
that will all be done in the next commit.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change "BUG" output originally added in a97e4075a1 (Keep
rename/rename conflicts of intermediate merges while doing recursive
merge, 2007-03-31), and later made to say it was a "BUG" in
19c6a4f836 (merge-recursive: do not return NULL only to cause
segfault, 2010-01-21) to use the new bug() function.
This gets the same job done with slightly less code, as we won't need
to prefix lines with "BUG: ". More importantly we'll now log the full
set of messages via trace2, before this we'd only log the one BUG()
invocation.
We don't replace the last "BUG()" invocation with "BUG_if_bug()", as
in this case we're sure that we called bug() earlier, so there's no
need to make it a conditional.
While we're at it let's replace "There" with "there" in the message,
i.e. not start a message with a capital letter, per the
CodingGuidelines.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Amend code added in a6a8431968 (receive-pack.c: shorten the
execute_commands loop over all commands, 2015-01-07) and amended to
hard die in b6a4788586 (receive-pack.c: die instead of error in case
of possible future bug, 2015-01-07) to use the new bug() function
instead.
Let's also rename the warn_if_*() function that code is in to
BUG_if_*(); its name became outdated in b6a4788586.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change the assertions added in bf3ff338a2 (parse-options: stop
abusing 'callback' for lowlevel callbacks, 2019-01-27) to use optbug()
instead of BUG(). At this point we're looping over individual options,
so if we encounter any issues we'd like to report the offending option.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we run into bugs in parse-options.c usage it's good to be able to
note all the issues we ran into before dying. This use-case is why we
have the optbug() function introduced in 1e5ce570ca (parse-options:
clearer reporting of API misuse, 2010-12-02).
Let's change this code to use the new bug() API introduced in the
preceding commit, which cuts down on the verbosity of
parse_options_check().
There are existing uses of BUG() in adjacent code that should have
been using optbug() that aren't being changed here. That'll be done in
a subsequent commit. This only changes the optbug() callers.
Since this will invoke BUG() the previous exit(128) code will be
changed, but in this case that's what we want, i.e. to have
encountering a BUG() return the specific "BUG" exit code.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a bug() function to use in cases where we'd like to indicate a
runtime BUG(), but would like to defer the BUG() call because we're
possibly accumulating more bug() callers to exhaustively indicate what
went wrong.
We already have this sort of facility in various parts of the
codebase, just in the form of ad-hoc re-inventions of the
functionality that this new API provides. E.g. this will be used to
replace optbug() in parse-options.c, and the 'error("BUG:[...]' we do
in a loop in builtin/receive-pack.c.
Unlike the code this replaces, we'll log to trace2 with this new bug()
function (as with other usage.c functions, including BUG()). We'll
also be able to avoid calls to xstrfmt() in some cases, as the bug()
function itself accepts variadic sprintf()-like arguments.
Any caller of bug() can follow up such calls with BUG_if_bug(),
which will BUG() out (i.e. abort()) if there were any preceding calls
to bug(). Callers can also decide not to call BUG_if_bug() and leave
the resulting BUG() invocation until exit() time. There are currently
no bug() API users that don't call BUG_if_bug() themselves after a
for-loop, but allowing for not calling BUG_if_bug() keeps the API
flexible. As the tests and documentation here show we'll catch missing
BUG_if_bug() invocations in our exit() wrapper.
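For illustration, a caller might end up looking something like this
(a sketch only; the option-checking loop and names are made up, and it
assumes bug() takes printf-style arguments and BUG_if_bug() a summary
message, as described above):

    static void check_options(const struct option *opts)
    {
        for (; opts->type != OPTION_END; opts++)
            if (opts->long_name && !opts->help)
                bug("option '%s' lacks help text", opts->long_name);

        /* abort()s with a "BUG" message only if bug() fired above */
        BUG_if_bug("invalid 'struct option' definitions");
    }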
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change the exit() wrapper added in ee4512ed48 (trace2: create new
combined trace facility, 2019-02-22) so that we'll split up the trace2
logging concerns from wanting to wrap the "exit()" function itself for
other purposes.
This makes more sense structurally, as we won't seem to conflate
non-trace2 behavior with the trace2 code. I'd previously added an
explanation for this in 368b584315 (common-main.c: call exit(), don't
return, 2021-12-07); that comment is being adjusted here.
Now the only thing we'll do if we're not using trace2 is to truncate
the "code" argument to the lowest 8 bits.
We only need to do that truncation on non-POSIX systems, but in
ee4512ed48 that "if defined(__MINGW32__)" code added in
47e3de0e79 (MinGW: truncate exit()'s argument to lowest 8 bits,
2009-07-05) was made to run everywhere. It might be good for clarity
to narrow that down with an "ifdef" again, but I'm not certain that in
the interim we haven't had some other non-POSIX systems rely on the
behavior. On a POSIX system taking the lowest 8 bits is implicit, see
exit(3)[1] and wait(2)[2]. Let's leave a comment about that instead.
1. https://man7.org/linux/man-pages/man3/exit.3.html
2. https://man7.org/linux/man-pages/man2/wait.2.html
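For illustration only, the split might be shaped roughly like this
(not the in-tree code; the exit() macro that routes callers here is
assumed):

    static int prepare_exit_code(const char *file, int line, int code)
    {
        /* the trace2 logging concern lives behind the trace2 API */
        trace2_cmd_exit_fl(file, line, code);

        /*
         * POSIX already reports only the low 8 bits to the waiting
         * parent (see exit(3) and wait(2)); truncate explicitly for
         * the benefit of non-POSIX systems.
         */
        return code & 0xff;
    }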
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
According to the HTML Standard FAQ:
“What is the DOCTYPE for modern HTML documents?
In text/html documents:
<!DOCTYPE html>
In documents delivered with an XML media type: no DOCTYPE is required
and its use is generally unnecessary. However, you may use one if you
want (see the following question). Note that the above is well-formed
XML.”
Source: [1]
Gitweb uses an XHTML 1.0 DOCTYPE:
<!DOCTYPE html PUBLIC
"-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
While that DOCTYPE is still valid [2], it has several disadvantages:
1. It’s misleading. If an XML parser uses the DTD at the given link,
then the entities &nbsp; and &sdot; won’t get declared. Instead, the
parser has to use a DTD from the HTML Standard that has nothing to do
with XHTML 1.0 [2].
2. It’s obsolete. XHTML 1.0 was last revised in 2002 and was superseded in
2018 [3].
3. It’s unreliable. Gitweb uses &nbsp; and &sdot; but lets an external file
define them. “[…U]sing entity references for characters in XML documents
is unsafe if they are defined in an external file (except for &lt;, &gt;,
&amp;, &quot;, and &apos;).” [4]
[1]: <https://github.com/whatwg/html/blob/main/FAQ.md#what-is-the-doctype-for-modern-html-documents>
[2]: <https://html.spec.whatwg.org/multipage/xhtml.html#parsing-xhtml-documents>
[3]: <https://www.w3.org/TR/xhtml1/#xhtml>
[4]: <https://html.spec.whatwg.org/multipage/xhtml.html#writing-xhtml-documents>
Signed-off-by: Jason Yundt <jason@jasonyundt.email>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Branch names can't be empty, so config keys with an empty branch name,
e.g. "branch..remote", are silently ignored.
Since these config keys will never be useful, make it a fatal error when
remote.c finds a key that starts with "branch." and has an empty
subsection.
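A minimal sketch of the shape of such a check, assuming
parse_config_key() from config.h; the actual code in remote.c may look
different:

    static void check_branch_key(const char *key)
    {
        const char *name, *subkey;
        size_t namelen;

        if (parse_config_key(key, "branch", &name, &namelen, &subkey) < 0 ||
            !name)
            return; /* not a "branch.<name>.<key>" entry */
        if (!namelen)
            die(_("invalid branch name: empty subsection in '%s'"), key);
        /* ... handle branch.<name>.<subkey> as before ... */
    }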
Signed-off-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
4a2dcb1a08 (remote: die if branch is not found in repository,
2021-11-17) introduced a regression where multiple config entries with
an empty branch name, e.g.
[branch ""]
remote = foo
merge = bar
could cause Git to fail when it tries to look up branch tracking
information.
We parse the config key to get (branch name, branch name length), but
when the branch name subsection is empty, we get a bogus branch name,
e.g. "branch..remote" gives (".remote", 0). We continue to use the bogus
branch name as if it were valid, and prior to 4a2dcb1a08, this wasn't an
issue because length = 0 caused the branch name to effectively be ""
everywhere.
However, that commit handles length = 0 inconsistently when we create
the branch:
- When find_branch() is called to check if the branch exists in the
branch hash map, it interprets a length of 0 to mean that it should
call strlen on the char pointer.
- But the code path that inserts into the branch hash map interprets a
length of 0 to mean that the string is 0-length.
This results in the bug described above:
- "branch..remote" looks for ".remote" in the branch hash map. Since we
do not find it, we insert the "" entry into the hash map.
- "branch..merge" looks for ".merge" in the branch hash map. Since we
do not find it, we again try to insert the "" entry into the hash map.
However, the entries in the branch hash map are supposed to be
appended to, not overwritten.
- Since overwriting an entry is a BUG(), Git fails instead of silently
ignoring the empty branch name.
Fix the bug by removing the convenience strlen functionality, so that
0 means that the string is 0-length. We still insert a bogus branch name
into the hash map, but this will be fixed in a later commit.
Reported-by: "Ing. Martin Prantl Ph.D." <perry@ntis.zcu.cz>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This reverts commit 9c539d1027 (config.mak.dev: alternative
workaround to gcc 12 warning in http.c, 2022-04-15).
Let's give GCC12's "dangling-pointer" warning a second chance, as we
have a more focused workaround for this particular compiler glitch.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fixes real problems noticed by gcc 12 and works around false
positives.
* js/ci-gcc-12-fixes:
dir.c: avoid "exceeds maximum object size" error with GCC v12.x
nedmalloc: avoid new compile error
compat/win32/syslog: fix use-after-realloc
As 'revert' and 'cherry-pick' share a lot of code, it is easy to
modify the behaviour of one command and inadvertently affect the
other. An earlier change to teach the '--reference' option and the
'revert.reference' configuration variable to the former was not
careful enough and 'cherry-pick --reference' wasn't rejected as an
error.
It is possible to think 'cherry-pick -x' might benefit from the
'--reference' option, but it is fundamentally different from
'revert' in at least two ways to make it questionable:
- 'revert' names a commit that is ancestor of the resulting commit,
so an abbreviated object name with human readable title is
sufficient to identify the named commit uniquely without using
the full object name. On the other hand, 'cherry-pick'
usually [*] picks a commit that is not an ancestor. It might be
even picking a private commit that never becomes part of the
public history.
- The whole commit message of 'cherry-pick' is a copy of the
original commit, and there is nothing gained to repeat only the
title part on 'cherry-picked from' message.
[*] well, you could revert and then you can pick the original that
was reverted to get back to where you were, but then you can
revert the revert to do the same thing.
Helped-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git add -i" was rewritten in C some time ago and has been in
testing; the reimplementation is now exposed to general public by
default.
* js/use-builtin-add-i:
add -i: default to the built-in implementation
t2016: require the PERL prereq only when necessary
The tests that ensured merges stop when interfering local changes
are present did not make sure that local changes are preserved; now
they do.
* jc/t6424-failing-merge-preserve-local-changes:
t6424: make sure a failed merge preserves local changes
With the new http.curloptResolve configuration, the CURLOPT_RESOLVE
mechanism that allows cURL based applications to use pre-resolved
IP addresses for the requests is exposed to the scripts.
* cc/http-curlopt-resolve:
http: add custom hostname to IP address resolutions
When operating at the scale that Scalar wants to support, certain data
shapes are more likely to cause undesirable performance issues, such as
large numbers of loose objects.
By including statistics about this, `scalar diagnose` now makes it
easier to identify such scenarios.
Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It's helpful to see if there are other crud files in the pack
directory. Let's teach the `scalar diagnose` command to gather
file size information about pack files.
While at it, also enumerate the pack files in the alternate
object directories, if any are registered.
Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When analyzing problems with large worktrees/repositories, it is useful
to know how close to a "full disk" situation Scalar/Git operates. Let's
include this information.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Over the course of Scalar's development, it became obvious that there is
a need for a command that can gather all kinds of useful information
that can help identify the most typical problems with large
worktrees/repositories.
The `diagnose` command is the culmination of this hard-won knowledge: it
gathers the installed hooks, the config, a couple statistics describing
the data shape, among other pieces of information, and then wraps
everything up in a tidy, neat `.zip` archive.
Note: originally, Scalar was implemented in C# using the .NET API, where
we had the luxury of a comprehensive standard library that includes
basic functionality such as writing a `.zip` file. In the C version, we
lack such a commodity. Rather than introducing a dependency on, say,
libzip, we slightly abuse Git's `archive` machinery: we write out a
`.zip` of the empty tree, augmented by a couple of files that are added
via the `--add-file*` options. We are careful not to modify the
current repository in any way lest the very circumstances that required
`scalar diagnose` to be run are changed by the `diagnose` run itself.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `scalar` command needs a Scalar enlistment for many subcommands, and
looks in the current directory for such an enlistment (traversing the
parent directories until it finds one).
These subcommands can also be called with an optional argument
specifying the enlistment. Here, too, we traverse parent directories as
needed, until we find an enlistment.
However, if the specified directory does not even exist, or is not a
directory, we should stop right there, with an error message.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
By allowing the path to be enclosed in double-quotes, we can avoid
the limitation that paths cannot contain colons.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With the `--add-virtual-file=<path>:<content>` option, `git archive` now
supports use cases where relatively trivial files need to be added that
do not exist on disk.
This will allow us to generate `.zip` files with generated content,
without having to add said content to the object database and without
having to write it out to disk.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
[jc: tweaked <path> handling]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Pathspecs with only negative elements did not work with some
commands that pass the pathspec along to a subprocess. For
instance,
$ git add -p -- ':!*.txt'
should add everything except for paths ending in ".txt", but it gets
a complaint from the underlying "diff-index" and aborts.
We used to error out when given a pathspec with only negative
elements in it, like the one in the above example. Later, 859b7f1d
(pathspec: don't error out on all-exclusionary pathspec patterns,
2017-02-07) updated the logic to add an empty string as an extra
element. The intention was to let the extra element match everything
and let the negative ones given by the user subtract from it.
At around the same time, we were migrating from "an empty string is
a valid pathspec element that matches everything" to "either a dot
or ":/" is used to match all, and an empty string is rejected",
between d426430e (pathspec: warn on empty strings as pathspec,
2016-06-22) and 9e4e8a64 (pathspec: die on empty strings as
pathspec, 2017-06-06). I think 9e4e8a64, which happened long after
859b7f1d, was not careful enough to turn the empty string 859b7f1d
added into either a dot or ":/".
Care should be taken, as the definition of "everything" depends on the
subcommand. For the purpose of "add -p", adding a "." to add
everything in the current directory is the right thing to do. But
for some other commands, ":/" (i.e. really really everything, even
things outside the current subdirectory) is the right choice.
We would break commands in a big way if we get this wrong, so add a
handful of test pieces to make sure the resulting code still
excludes the paths that are expected and includes "everything" else.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Document the interaction between --add-file and --prefix by giving an
example.
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In http.c, the run_active_slot() function allows the given "slot" to
make progress by calling step_active_slots() in a loop repeatedly,
and the loop is not left until the request held in the slot
completes.
Ages ago, we used to use the slot->in_use member to get out of the
loop, which misbehaved when the request in "slot" completes (at
which time, the result of the request is copied away from the slot,
and the in_use member is cleared, making the slot ready to be
reused), and the "slot" gets reused to service a different request
(at which time, the "slot" becomes in_use again, even though it is
for a different request). The loop terminating condition mistakenly
thought that the original request had yet to be completed.
Today's code, after baa7b67d (HTTP slot reuse fixes, 2006-03-10)
fixed this issue, uses a separate "slot->finished" member that is
set in run_active_slot() to point to an on-stack variable, and the
code that completes the request in finish_active_slot() clears the
on-stack variable via the pointer to signal that the particular
request held by the slot has completed. It also clears the in_use
member (as before that fix), so that the slot itself can safely be
reused for an unrelated request.
One thing that is not quite clean in this arrangement is that,
unless the slot gets reused, at which point the finished member is
reset to NULL, the member keeps the value of &finished, which
becomes a dangling pointer into the stack when run_active_slot()
returns. Clear the finished member before the control leaves the
function, which has a side effect of unconfusing compilers like
recent GCC 12 that is over-eager to warn against such an assignment.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix memory leaks in transport_push(), where remote_refs and local_refs
are never freed.
116 bytes in 1 blocks are definitely lost in loss record 56 of 103
at 0x484486F: malloc (vg_replace_malloc.c:381)
by 0x4938D7E: strdup (strdup.c:42)
by 0x628418: xstrdup (wrapper.c:39)
by 0x4FD454: process_capabilities (connect.c:232)
by 0x4FD454: get_remote_heads (connect.c:354)
by 0x610A38: handshake (transport.c:333)
by 0x612B02: transport_push (transport.c:1302)
by 0x4803D6: push_with_options (push.c:357)
by 0x4811D6: do_push (push.c:414)
by 0x4811D6: cmd_push (push.c:650)
by 0x405210: run_builtin (git.c:465)
by 0x405210: handle_builtin (git.c:719)
by 0x406363: run_argv (git.c:786)
by 0x406363: cmd_main (git.c:917)
by 0x404F17: main (common-main.c:56)
5,912 (388 direct, 5,524 indirect) bytes in 2 blocks are definitely lost in loss record 98 of 103
at 0x4849464: calloc (vg_replace_malloc.c:1328)
by 0x628705: xcalloc (wrapper.c:150)
by 0x5C216D: alloc_ref_with_prefix (remote.c:975)
by 0x5C232A: alloc_ref (remote.c:983)
by 0x5C232A: one_local_ref (remote.c:2299)
by 0x5C232A: one_local_ref (remote.c:2289)
by 0x5BDB03: do_for_each_repo_ref_iterator (iterator.c:418)
by 0x5B4C4F: do_for_each_ref (refs.c:1486)
by 0x5B4C4F: refs_for_each_ref (refs.c:1492)
by 0x5B4C4F: for_each_ref (refs.c:1497)
by 0x5C6ADF: get_local_heads (remote.c:2310)
by 0x612A85: transport_push (transport.c:1286)
by 0x4803D6: push_with_options (push.c:357)
by 0x4811D6: do_push (push.c:414)
by 0x4811D6: cmd_push (push.c:650)
by 0x405210: run_builtin (git.c:465)
by 0x405210: handle_builtin (git.c:719)
by 0x406363: run_argv (git.c:786)
by 0x406363: cmd_main (git.c:917)
Signed-off-by: Frantisek Hrbata <frantisek@hrbata.com>
Reviewed-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It seems there is no reason to return 1 instead of -1 when push_refs()
is not set in the transport vtable. Let's unify the error return values and
use the done label as a single exit point from transport_push().
Suggested-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Frantisek Hrbata <frantisek@hrbata.com>
Reviewed-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove the big indented block guarded by the transport vtable check in
transport_push() and just return an error immediately. Hopefully this
makes the code more readable.
Signed-off-by: Frantisek Hrbata <frantisek@hrbata.com>
Reviewed-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A typical "git revert" commit uses the full title of the original
commit in its title, and starts its body of the message with:
This reverts commit 8fa7f667cf61386257c00d6e954855cc3215ae91.
This does not encourage the best practice of describing not just
"what" (i.e. "Revert X" in the title says what we did) but also "why"
(i.e. why X was undesirable).
We can instead phrase this first line of the body to be more like
This reverts commit 8fa7f667 (do this and that, 2022-04-25)
so that the title does not have to be
Revert "do this and that"
We can instead use the title to describe "why" we are reverting the
original commit.
Introduce the "--reference" option to "git revert", and also the
revert.reference configuration variable, which defaults to false, to
tweak the title and the first line of the draft commit message for
when creating a "revert" commit.
When this option is in use, the first line of the pre-filled editor
buffer becomes a comment line that tells the user to say _why_. If
the user exits the editor without touching this line by mistake,
what we prepare to become the first line of the body, i.e. "This
reverts commit 8fa7f667 (do this and that, 2022-04-25)", ends up
being the title of the resulting commit. This behaviour is designed to
help such a user identify such a revert in "git log --oneline"
easily so that it can be further reworded with "git rebase -i" later.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Refactor the tests that exercise implicit shutdown cases
to make them more robust and less racy.
The fsmonitor--daemon will implicitly shutdown in a variety
of situations, such as when the ".git" directory is deleted
or renamed.
The existing tests would delete or rename the directory, sleep
for one second, and then check the status of the daemon. This
is racy, since the client/status command has no way to sync
with the daemon. This was noticed occasionally on very slow
CI build machines where it would cause a random test to fail.
Replace the simple sleep with a sleep-and-retry loop.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create a test in t7527 to verify that we get a stray warning from
`git fsmonitor--daemon start` when indirectly called from
`git submodule absorbgitdirs`.
Update `git fsmonitor--daemon` to take (and ignore) the `--super-prefix`
argument to suppress the warning.
When we have:
1. a submodule with a `sub/.git/` directory (rather than a `sub/.git`
file).
2. `core.fsmonitor` is turned on in the submodule, but the daemon is
not yet started in the submodule.
3. and someone does a `git submodule absorbgitdirs` in the super.
Git will recursively invoke `git submodule--helper absorb-git-dirs`
in the submodule. This will read the index and may attempt to start
the fsmonitor--daemon with the `--super-prefix` argument.
`git fsmonitor--daemon start` does not accept the `--super-prefix`
argument and causes a warning to be issued.
This does not cause a problem because the `refresh_index()` code
assumes a trivial response if the daemon does not start.
The net-net is a harmless, but stray, warning. Let's eliminate the
warning.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Confirm that the daemon reports events using the on-disk
spelling for Unicode NFC/NFD characters. On APFS we still
have Unicode aliasing, so we cannot create two files that
only differ by NFC/NFD, but the on-disk format preserves
the spelling used to create the file. On HFS+ we also
have aliasing, but the path is always stored on disk in
NFD.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create a set of prereqs to help understand how file names
are handled by the filesystem when they contain NFC and NFD
Unicode characters.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Emit NFC or NFC and NFD spellings of pathnames on macOS.
MacOS is Unicode composition insensitive, so NFC and NFD spellings are
treated as aliases and collide. While the spelling of pathnames in
filesystem events depends upon the underlying filesystem, such as
APFS, HFS+ or FAT32, the OS enforces such collisions regardless of
filesystem.
Teach the daemon to always report the NFC spelling and to report
the NFD spelling when stored in that format on the disk.
This is slightly more general than "core.precomposeUnicode".
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Test that FS events from the OS are received using the preserved,
on-disk spelling of files/directories rather than the spelling used
to make the change.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Never set CE_FSMONITOR_VALID on the cache-entry of submodule
directories.
During a client command like 'git status', we may need to recurse
into each submodule to compute a status summary for the submodule.
Since the purpose of the ce_flag is to let Git avoid scanning a
cache-entry, setting the flag causes the recursive call to be
avoided and we would incorrectly report no status for the submodule.
We created an OS watch on the root directory of our working
directory and we receive events for everything in the cone
under it. When submodules are present inside our working
directory, we receive events for both our repo (the super) and
any subs within it. Since our index doesn't have any information
for items within the submodules, we can't use those events.
We could try to truncate the paths of those events back to the
submodule boundary and mark the GITLINK as dirty, but that
feels expensive since we would have to prefix compare every FS
event that we receive against a list of submodule roots. And
it still wouldn't be sufficient to correctly report status on
the submodule, since we don't have any space in the cache-entry
to cache the submodule's status (the 'SCMU' bits in porcelain
V2 speak). That is, the CE_FSMONITOR_VALID bit just says that
we don't need to scan/inspect it because we already know the
answer -- it doesn't say that the item is clean -- and we
don't have space in the cache-entry to store those answers.
So we should always do the recursive scan.
Therefore, we should never set the flag on GITLINK cache-entries.
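For illustration, the guard boils down to something like this (a
hypothetical helper, not the exact call site in fsmonitor.c):

    static void mark_fsmonitor_valid_sketch(struct cache_entry *ce)
    {
        if (S_ISGITLINK(ce->ce_mode))
            return; /* never mark submodule GITLINKs as valid */
        ce->ce_flags |= CE_FSMONITOR_VALID;
    }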
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create unit tests to move a directory. Verify that `git status`
gives the same result with and without FSMonitor enabled.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach Git to perform binary search over the cache-entries for a directory
notification and then linearly scan forward to find the immediate children.
Previously, when the FSMonitor reported a modified directory Git would
perform a linear search on the entire cache-entry array for all
entries matching that directory prefix and invalidate them. Since the
cache-entry array is already sorted, we can use a binary search to
find the first matching entry and then only linearly walk forward and
invalidate entries until the prefix changes.
Also, the original code would invalidate anything having the same
directory prefix. Since a directory event should only be received for
items that are immediately within the directory (and not within
sub-directories of it), only invalidate those entries and not the
whole subtree.
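A minimal sketch of the approach (a hypothetical helper; it assumes the
usual index_name_pos() convention of encoding the insertion point as a
negative return, and that "dirname" includes the trailing '/'):

    static void invalidate_dir_children(struct index_state *istate,
                                        const char *dirname, size_t dirlen)
    {
        int pos = index_name_pos(istate, dirname, dirlen);
        unsigned int i;

        if (pos < 0)
            pos = -pos - 1; /* first entry at or after "dirname" */

        for (i = pos; i < istate->cache_nr; i++) {
            struct cache_entry *ce = istate->cache[i];

            if (strncmp(ce->name, dirname, dirlen))
                break;      /* past the prefix; stop */
            if (strchr(ce->name + dirlen, '/'))
                continue;   /* in a sub-directory; leave it alone */
            ce->ce_flags &= ~CE_FSMONITOR_VALID;
        }
    }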
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach the listener thread to shutdown the daemon if the spelling of the
worktree root directory changes.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Force shutdown fsmonitor daemon if the worktree root directory
is moved, renamed, or deleted.
Use Windows low-level GetFileInformationByHandle() to get and
compare the Windows system unique ID for the directory with a
cached version when we started up. This lets us detect the
case where someone renames the directory that we are watching
and then creates a new directory with the original pathname.
This is important because we are listening to a named pipe for
requests and they are stored in the Named Pipe File System (NPFS)
which is a kernel-resident pseudo filesystem not associated with
the actual NTFS directory.
For example, if the daemon was watching "~/foo/", it would have
a directory-watch handle on that directory and a named-pipe
handle for "//./pipe/...foo". Moving the directory to "~/bar/"
does not invalidate the directory handle. (So the daemon would
actually be watching "~/bar" but listening on "//./pipe/...foo".
If the user then does "git init ~/foo" and causes another daemon
to start, the first daemon will still have ownership of the pipe
and the second daemon instance will fail to start. "git status"
clients in "~/foo" will ask "//./pipe/...foo" about changes and
the first daemon instance will tell them about "~/bar".)
This commit causes the first daemon to shutdown if the system unique
ID for "~/foo" changes (changes from what it was when the daemon
started). Shutdown occurs after a periodic poll. After the
first daemon exits and releases the lock on the named pipe,
subsequent Git commands may cause another daemon to be started
on "~/foo". Similarly, a subsequent Git command may cause another
daemon to be started on "~/bar".
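For illustration, the comparison amounts to something like the
following (a hypothetical helper; "handle" is freshly opened on the
original worktree path at poll time and "cached" was captured at
startup):

    static int worktree_identity_changed(HANDLE handle,
                                         const BY_HANDLE_FILE_INFORMATION *cached)
    {
        BY_HANDLE_FILE_INFORMATION info;

        if (!GetFileInformationByHandle(handle, &info))
            return 1; /* cannot stat it anymore; treat as changed */

        return info.dwVolumeSerialNumber != cached->dwVolumeSerialNumber ||
               info.nFileIndexHigh != cached->nFileIndexHigh ||
               info.nFileIndexLow != cached->nFileIndexLow;
    }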
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extend the Windows version of the "health" thread to periodically
inspect the system and shutdown if warranted.
This commit updates the thread's wait loop to use a timeout and
defines a (currently empty) table of functions to poll the system.
A later commit will add functions to the table to actually
inspect the system.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create another thread to watch over the daemon process and
automatically shut it down if necessary.
This commit creates the basic framework for a "health" thread
to monitor the daemon and/or the file system. Later commits
will add platform-specific code to do the actual work.
The "health" thread is intended to monitor conditions that
would be difficult to track inside the IPC thread pool and/or
the file system listener threads. For example, when there are
file system events outside of the watched worktree root or if
we want to have an idle-timeout auto-shutdown feature.
This commit creates the health thread itself, defines the thread-proc
and sets up the thread's event loop. It integrates this new thread
into the existing IPC and Listener thread models.
This commit defines the API to the platform-specific code where all of
the monitoring will actually happen.
The platform-specific code for MacOS is just stubs. Meaning that the
health thread will immediately exit on MacOS, but that is OK and
expected. Future work can define MacOS-specific monitoring.
The platform-specific code for Windows sets up enough of the
WaitForMultipleObjects() machinery to watch for system and/or custom
events. Currently, the set of wait handles only includes our custom
shutdown event (sent from our other threads). Later commits in this
series will extend the set of wait handles to monitor other
conditions.
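For illustration, the wait loop is shaped roughly like this (names are
illustrative, not the daemon's actual code):

    static void health_wait_loop(HANDLE shutdown_event)
    {
        HANDLE handles[1] = { shutdown_event };

        for (;;) {
            DWORD r = WaitForMultipleObjects(1, handles, FALSE,
                                             60 * 1000 /* poll interval */);

            if (r == WAIT_TIMEOUT)
                continue; /* timeout: walk the (for now empty) poll table */

            break; /* shutdown event signalled, or an unexpected error */
        }
    }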
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Rename platform-specific listener thread related variables
and data types as we prepare to add another backend thread
type.
[] `struct fsmonitor_daemon_backend_data` becomes `struct fsm_listen_data`
[] `state->backend_data` becomes `state->listen_data`
[] `state->error_code` becomes `state->listen_error_code`
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Refactor daemon thread startup to make it easier to start
a third thread class to monitor the health of the daemon.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach the fsmonitor--daemon to CD outside of the worktree
before starting up.
The common Git startup mechanism causes the CWD of the daemon process
to be in the root of the worktree. On Windows, this causes the daemon
process to hold a locked handle on the CWD and prevents other
processes from moving or deleting the worktree while the daemon is
running.
CD to HOME before entering main event loops.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Ignore FSEvents resulting from `xattr` changes. Git does not care about
xattrs or changes to xattrs, so don't waste time collecting these
events in the daemon nor transmitting them to clients.
Various security tools add xattrs to files and/or directories, such as
to mark them as having been downloaded. We should ignore these events
since they don't affect the content of the file/directory or the normal
meta-data that Git cares about.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Initialize `o->result.fsmonitor_has_run_once` based upon the value
in `o->src_index->fsmonitor_has_run_once` to prevent a second
fsmonitor query during the tree traversal and possibly getting
a skewed view of the working directory.
The checkout code has already talked to the fsmonitor and the
traversal is updating the index as it traverses, so there is
no need to query the fsmonitor.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
On MacOS mark repos on NTFS or FAT32 volumes as incompatible.
The builtin FSMonitor uses Unix domain sockets on MacOS for IPC
with clients. These sockets are kept in the .git directory.
Unix sockets are not supported by NTFS and FAT32, so the daemon
cannot start up.
Test for this during our compatibility checking so that client
commands do not keep trying to start the daemon.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach Git to detect remote working directories on Windows and mark them as
incompatible with FSMonitor.
With this `git fsmonitor--daemon run` will error out with a message like it
does for bare repos.
Client commands, such as `git status`, will not attempt to start the daemon.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach Git to detect remote working directories on macOS and mark them as
incompatible with FSMonitor.
With this, `git fsmonitor--daemon run` will error out with a message
like it does for bare repos.
Client commands, like `git status`, will not attempt to start the daemon.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
VFS for Git virtual repositories are incompatible with FSMonitor.
VFS for Git is a downstream fork of Git. It contains its own custom
file system watcher that is aware of the virtualization. If a working
directory is being managed by VFS for Git, we should not try to watch
it because we may get incomplete results.
We do not know anything about how VFS for Git works, but we do
know that VFS for Git working directories contain a well-defined
config setting. If it is set, mark the working directory as
incompatible.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extend the generic incompatibility check with a platform-specific
mechanism. Stub in the Win32 version.
In the existing fsmonitor-settings code we have a way to mark
types of repos as incompatible with fsmonitor (whether via the
hook or the IPC API). For example, we do this for bare repos,
since there are no files to watch.
Extend this exclusion mechanism for platform-specific reasons.
This commit just creates the framework and adds a stub for Win32.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Bare repos do not have a worktree, so there is nothing for the
daemon to watch.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create a stress test to hammer on the fsmonitor daemon.
Create a client-side thread pool of n threads and have
each of them make m requests as fast as they can.
We do not currently inspect the contents of the response.
We're only interested in placing a heavy request load on
the daemon.
This test is useful for interactive testing and various
experimentation. For example, to place additional load
on the daemon while another test is running. We currently
do not have a test script that actually uses this helper.
We might add such a test in the future.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create some test repos with UTF8 characters in the pathname of the
root directory and verify that the builtin FSMonitor can watch them.
This test is mainly for Windows where we need to avoid `*A()`
routines.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach FSMonitor daemon on Windows to recognize shortname paths as
aliases of normal longname paths. FSMonitor clients, such as `git
status`, should receive the longname spelling of changed files (when
possible).
Sometimes we receive FS events using the shortname, such as when a CMD
shell runs "RENAME GIT~1 FOO" or "RMDIR GIT~1". The FS notification
arrives using whatever combination of long and shortnames were used by
the other process. (Shortnames do seem to be case normalized,
however.)
Use Windows GetLongPathNameW() to try to map the pathname spelling in
the notification event into the normalized longname spelling. (This
can fail if the file/directory is deleted, moved, or renamed, because
we are asking the FS for the mapping in response to the event and
after it has already happened, but we try.)
Special case the shortname spelling of ".git" to avoid under-reporting
these events.
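For illustration, the lookup is essentially (a hypothetical helper, not
the daemon's exact code):

    static int try_get_longname(const wchar_t *path, wchar_t *out, DWORD outlen)
    {
        DWORD len = GetLongPathNameW(path, out, outlen);

        /*
         * 0 means the lookup failed (e.g. the path was already deleted
         * or renamed); a value >= outlen means the buffer was too
         * small.  Either way, keep the original spelling.
         */
        return len > 0 && len < outlen;
    }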
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We don't bother to freshen objects stored in a cruft pack individually
by updating the `.mtimes` file. This is because we can't portably `mmap`
and write into the middle of a file (i.e., to update the mtime of just
one object). Instead, we would have to rewrite the entire `.mtimes` file
which may incur some wasted effort, especially if there are a lot of cruft
objects and they are freshened infrequently.
Instead, force the freshening code to skip the optimization and write
the object out loose, letting it pick up a current mtime.
This works because we prefer the mtime of the loose copy of an object
when both a loose and packed one exist (whether or not the packed copy
comes from a cruft pack).
This could certainly do with a test and/or be included earlier in this
series/PR, but I want to wait until after I have a chance to clean up
the overly-repetitive nature of the cruft pack tests in general.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Expose the new `git repack --cruft` mode from `git gc` via a new opt-in
flag. When invoked like `git gc --cruft`, `git gc` will avoid exploding
unreachable objects as loose ones, and instead create a cruft pack and
`.mtimes` file.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When using cruft packs, the following race can occur when a geometric
repack that writes a MIDX bitmap takes place afterwards:
- First, create an unreachable object and do an all-into-one cruft
repack which stores that object in the repository's cruft pack.
- Then make that object reachable.
- Finally, do a geometric repack and write a MIDX bitmap.
Assuming that we are sufficiently unlucky as to select a commit from the
MIDX which reaches that object for bitmapping, then the `git
multi-pack-index` process will complain that that object is missing.
The reason is because we don't include cruft packs in the MIDX when
doing a geometric repack. Since the "make that object reachable" doesn't
necessarily mean that we'll create a new copy of that object in one of
the packs that will get rolled up as part of a geometric repack, it's
possible that the MIDX won't see any copies of that now-reachable
object.
Of course, it's desirable to avoid including cruft packs in the MIDX
because it causes the MIDX to store a bunch of objects which are likely
to get thrown away. But excluding that pack does open us up to the above
race.
This patch demonstrates the bug, and resolves it by including cruft
packs in the MIDX even when doing a geometric repack.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We use the `util` pointer for items in the `existing_packs` string list
to indicate which packs are going to be deleted. Since that has so far
been the only use of that `util` pointer, we just set it to 0 or 1.
But we're going to add an additional state to this field in the next
patch, so prepare for that by adding a #define for the first bit so we
can more expressively inspect the flags state.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In servers which set the pack.window configuration to a large value, we
can wind up spending quite a lot of time finding new bases when breaking
delta chains between reachable and unreachable objects while generating
a cruft pack.
Introduce a handful of `repack.cruft*` configuration variables to
control the parameters used by pack-objects when generating a cruft
pack.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Expose a way to split the contents of a repository into a main and cruft
pack when doing an all-into-one repack with `git repack --cruft -d`, and
a complementary configuration variable.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a previous patch, pack-objects learned how to generate a cruft pack
so long as no objects are dropped.
This patch teaches pack-objects to handle the case where a non-never
`--cruft-expiration` value is passed. This case is slightly more
complicated than before, because we want pack-objects to save
unreachable objects which would have been pruned when there is another
recent (i.e., non-prunable) unreachable object which reaches them.
We'll call these objects "unreachable but reachable-from-recent".
Here is how pack-objects handles `--cruft-expiration`:
- Instead of adding all objects outside of the kept pack(s) into the
packing list, only handle the ones whose mtime is within the grace
period.
- Construct a reachability traversal whose tips are the
unreachable-but-recent objects.
- Then, walk along that traversal, stopping if we reach an object in
the kept pack. At each step along the traversal, we add the object
we are visiting to the packing list.
In the majority of these cases, any object we visit in this traversal
will already be in our packing list. But we will sometimes encounter
reachable-from-recent cruft objects, which we want to retain even if
they aged out of the grace period.
The most subtle point of this process is that we actually don't need to
bother to update the rescued object's mtime. Even though we will write
an .mtimes file with a value that is older than the expiration window,
it will continue to survive cruft repacks so long as any objects which
reach it haven't aged out.
That is, a future repack will also exclude that object from the initial
packing list, only to discover it later on when doing the reachability
traversal.
Finally, stopping early once an object is found in a kept pack is safe
to do because the kept packs ordinarily represent which packs will
survive after repacking. Assuming that it _isn't_ safe to halt a
traversal early would mean that there is some ancestor object which is
missing, which implies repository corruption (i.e., the complete set of
reachable objects isn't present).
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When generating a cruft pack, the caller within pack-objects will want
to know the precise timestamps of cruft objects (i.e., their
corresponding values in the .mtimes table) rather than the mtime of the
cruft pack itself.
Teach add_recent_packed() to lookup each object's precise mtime from the
.mtimes file if one exists (indicated by the is_cruft bit on the
packed_git structure).
A couple of small things worth noting here:
- load_pack_mtimes() needs to be called before asking for
nth_packed_mtime(), and that call is done lazily here. That function
exits early if the .mtimes file has already been opened and parsed,
so only the first call is slow.
- Checking the is_cruft bit can be done without any extra work on the
caller's behalf, since it is set up for us automatically as a
side-effect of calling add_packed_git() (just like the 'pack_keep'
and 'pack_promisor' bits).
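For illustration, the per-object lookup is roughly (a sketch only,
assuming load_pack_mtimes() reports failure with a negative return):

    static timestamp_t packed_object_mtime(struct packed_git *p, uint32_t pos)
    {
        if (!p->is_cruft)
            return p->mtime;    /* the pack's own mtime */

        /* lazy; returns quickly once the .mtimes file is parsed */
        if (load_pack_mtimes(p) < 0)
            return p->mtime;

        return nth_packed_mtime(p, pos);
    }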
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This function behaves very similarly to what we will need in
pack-objects in order to implement cruft packs with expiration. But it
is lacking a couple of things. Namely, it needs:
- a mechanism to communicate the timestamps of individual recent
objects to some external caller
- and, in the case of packed objects, our future caller will also want
to know the originating pack, as well as the offset within that pack
at which the object can be found
- finally, it needs a way to skip over packs which are marked as kept
in-core.
To address the first two, add a callback interface in this patch which
reports the time of each recent object, as well as a (packed_git,
off_t) pair for packed objects.
Likewise, add a new option to the packed object iterators to skip over
packs which are marked as kept in core. This option will become
implicitly tested in a future patch.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach `pack-objects` how to generate a cruft pack when no objects are
dropped (i.e., `--cruft-expiration=never`). Later patches will teach
`pack-objects` how to generate a cruft pack that prunes objects.
When generating a cruft pack which does not prune objects, we want to
collect all unreachable objects into a single pack (noting and updating
their mtimes as we accumulate them). Ordinary use will pass the result
of a `git repack -A` as a kept pack, so when this patch says "kept
pack", readers should think "reachable objects".
Generating a non-expiring cruft pack works as follows:
- Callers provide a list of every pack they know about, and indicate
which packs are about to be removed.
- All packs which are going to be removed (we'll call these the
redundant ones) are marked as kept in-core.
Any packs the caller did not mention (but are known to the
`pack-objects` process) are also marked as kept in-core. Packs not
mentioned by the caller are assumed to be unknown to them, i.e.,
they entered the repository after the caller decided which packs
should be kept and which should be discarded.
Since we do not want to include objects in these "unknown" packs
(because we don't know which of their objects are or aren't
reachable), these are also marked as kept in-core.
- Then, we enumerate all objects in the repository, and add them to
our packing list if they do not appear in an in-core kept pack.
This results in a new cruft pack which contains all known objects that
aren't included in the kept packs. When the kept pack is the result of
`git repack -A`, the resulting pack contains all unreachable objects.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A new caller in the next commit will want to immediately modify the
object_entry structure created by create_object_entry(). Instead of
forcing that caller to wastefully look up the entry we just created,
return it from create_object_entry().
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the next patch, we will implement and test support for writing a
cruft pack via a special mode of `git pack-objects`. To make sure that
objects are written with the correct timestamps, add a new test-tool
that can dump the object names and corresponding timestamps from a given
`.mtimes` file.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now that the `.mtimes` format is defined, supplement the pack-write API
to be able to conditionally write an `.mtimes` file along with a pack by
setting an additional flag and passing an oidmap that contains the
timestamps corresponding to each object in the pack.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are three definitions of an identical function which converts
`the_hash_algo` into either 1 (for SHA-1) or 2 (for SHA-256). There is a
copy of this function for writing both the commit-graph and
multi-pack-index file, and another inline definition used to write the
.rev header.
Consolidate these into a single definition in chunk-format.h. It's not
clear that this is the best header to define this function in, but it
should do for now.
(Worth noting, the .rev caller expects a 4-byte unsigned, but the other
two callers work with a single unsigned byte. The consolidated version
uses the latter type, and lets the compiler widen it when required).
Another caller will be added in a subsequent patch.
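The consolidated helper is essentially the following (a sketch; the
in-tree spelling may differ slightly):

    static inline unsigned char oid_version(const struct git_hash_algo *algop)
    {
        switch (hash_algo_by_ptr(algop)) {
        case GIT_HASH_SHA1:
            return 1;
        case GIT_HASH_SHA256:
            return 2;
        default:
            die(_("invalid hash algorithm"));
        }
    }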
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This structure will be used to communicate the per-object mtimes when
writing a cruft pack. Here, we need the full packing_data structure
because the mtime information is stored in an array there, not on the
individual object_entry structures themselves (to avoid paying the overhead in
structure width for operations which do not generate a cruft pack).
We haven't passed this information down before because one of the two
callers (in bulk-checkin.c) does not have a packing_data structure at
all. In that case (where no cruft pack will be generated), NULL is
passed instead.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
To store the individual mtimes of objects in a cruft pack, introduce a
new `.mtimes` format that can optionally accompany a single pack in the
repository.
The format is defined in Documentation/technical/pack-format.txt, and
stores a 4-byte network order timestamp for each object in name (index)
order.
This patch prepares for cruft packs by defining the `.mtimes` format,
and introducing a basic API that callers can use to read out individual
mtimes.
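For illustration, decoding one entry of such a table amounts to the
following (a sketch only; the real accessor is part of the API
introduced here, and "mtimes_map" is assumed to point at the mapped
table):

    static uint32_t mtime_at(const unsigned char *mtimes_map, uint32_t pos)
    {
        /* each entry is a 4-byte network-order timestamp, in index order */
        return get_be32(mtimes_map + st_mult(pos, sizeof(uint32_t)));
    }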
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git remote -v" now shows the list-objects-filter used during
fetching from the remote, if available.
* ac/remote-v-with-object-list-filters:
builtin/remote.c: teach `-v` to list filters for promisor remotes
With a recent update to refuse access to repositories of other
people by default, "sudo make install" and "sudo git describe"
stopped working. This series intends to loosen it while keeping
the safety.
* cb/path-owner-check-with-sudo:
t0034: add negative tests and allow git init to mostly work under sudo
git-compat-util: avoid failing dir ownership checks if running privileged
t: regression git needs safe.directory when using sudo
"git -c branch.autosetupmerge=simple branch $A $B" will set the $B
as $A's upstream only when $A and $B shares the same name, and "git
-c push.default=simple" on branch $A would push to update the
branch $A at the remote $B came from. Also more places use the
sole remote, if exists, before defaulting to 'origin'.
* tk/simple-autosetupmerge:
push: new config option "push.autoSetupRemote" supports "simple" push
push: default to single remote even when not named origin
branch: new autosetupmerge option 'simple' for matching branches
Change the "flow" of how translators interact with the l10n repository
at [1] to adjust it for a new workflow of not having a po/git.pot file
in-tree at all, and to not commit line numbers to the po/*.po files
that we do track in tree.
The current workflow was added in a combination of dce37b66fb (l10n:
initial git.pot for 1.7.10 upcoming release, 2012-02-13) and
271ce198cd (Update l10n guide, 2012-02-29).
As noted in preceding commits I think that it came about due to
technical debt I'd left behind in how the "po/git.pot" file was
created, and a mis-impression that the file:line comments were needed
as anything more than a transitory translation aid.
As the updated po/README.md shows the new workflow is substantially
the same, the difference is that translators no longer need to
initially pull from the l10n coordinator for a new po/git.pot, they
can simply use git.git's canonical source repository.
The l10n coordinator is still expected to announce a release to
translate, which presumably would always be Junio's latest release
tag. I'm not certain if this part of the process is actually
important. I.e. the delta translation-wise between that tag and
"master" is usually pretty small, so perhaps translators can just work
on "master" instead.
1. https://github.com/git-l10n/git-po/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The core translation is the minimum set of work that must be done for a
new language translation.
There are over 5000 messages in the template message file "po/git.pot"
that need to be translated, and such a huge workload is not a piece
of cake for a newcomer. So we used to define a small set of messages
called "core
translation" that a new l10n contributor must complete before sending
pull request to the l10n coordinator.
By pulling in some parts of the git-po-helper[^1] logic, we add a new
rule to create this core translation message template "po/git-core.pot":
make po/git-core.pot
To help new l10n contributors initialize their "po/XX.po" from
"po/git-core.pot", we also add a new rule "po-init":
make po-init PO_FILE=po/XX.po
[^1]: https://github.com/git-l10n/git-po-helper/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since there is no longer a "po/git.pot" file in tree, an l10n team leader
has to run several commands to update their "po/XX.po" file:
$ make pot
$ msgmerge --add-location --backup=off -U po/XX.po po/git.pot
To make this process easier, add a new rule so that l10n team leaders
can update their "po/XX.po" with one command. E.g.:
$ make po-update PO_FILE=po/zh_CN.po
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove the "po/git.pot" file from being tracked, which started with
dce37b66fb (l10n: initial git.pot for 1.7.10 upcoming release,
2012-02-13).
The reason po/git.pot started being checked in was that the po/*.po
files were switched to a scheme where we'd generate them from a
known-good snapshot of po/git.pot, instead of each translator running
"make pot" themselves.
This makes sense, but we don't need to carry this file in-tree just to
achieve that aim, and doing so has resulted in a significant amount of
"diff churn" since this method of doing it was introduced:
$ git log -p --oneline -- po/git.pot|wc -l
553743
We can instead let l10n contributors generate "po/git.pot" at runtime
to update their own "po/XX.po", and the l10n coordinator can check
pull requests using a CI pipeline.
This reverts to the scheme introduced initially in cd5513a716 (i18n:
Makefile: "pot" target to extract messages marked for translation,
2011-02-22).
The actual "git rm" of po/git.pot was done in the preceding commit to
make this change easier to review, and to avoid having this change
blocked on the mailing list for being too large.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We get the source files saved in "$(FOUND_SOURCE_FILES)" by running
either "git ls-files" or "find". We tried to have both commands return
the same list of files, but apparently the "find" command returns more
files, such as the generated headers. We can filter out these generated
headers to bring the two results closer.
In addition to this, "$(FOUND_SOURCE_FILES)" may contain duplicate
files. E.g. "git ls-files" may have duplicate entries for the same file
in different stages if there are unresolved conflicts in the working
tree. For this case, we can remove the duplicate entries by passing the
option "--deduplicate" to "git ls-files".
Junio reported that when running "make" in a working tree with
unresolved conflicts, "make" may report warnings like below:
Makefile:xxxx: target '.build/pot/po/FOO.c.po' given more than once
in the same rule
The duplicate targets are introduced by the following pattern rule we
added in the preceding commit for incremental build of "po/git.pot".
$(LOCALIZED_C_GEN_PO): .build/pot/po/%.po: %
Although we have resolved this issue by sorting to create a unique
$(LOCALIZED_C), other targets, such as "tags" and "cscope.out", may
benefit from the same treatment.
Reported-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the preceding commit we moved away from using xgettext(1) to both
generate the po/git.pot, and to merge the incrementally generated
po/git.pot+ file as we sourced translations from C, shell and Perl.
Doing it this way, which dates back to my initial
implementation[1][2][3], was conflating two things: with xgettext(1)
the --from-code option both controls what encoding is specified in the
po/git.pot's header, and what encoding we allow in source messages.
We don't ever want to allow non-ASCII in *source messages*, and
allowing it has hidden e.g. a buggy message introduced in
a6226fd772 (submodule--helper: convert the bulk of cmd_add() to C,
2021-08-10) from us: we'd warn about it before, but only when running
"make pot", and the operation would still succeed. Now we'll error out
on it when running "make pot".
Since the preceding Makefile changes made this easy, let's add a "make
check-pot" target with the same prerequisites as the "po/git.pot"
target, but without changing the file "po/git.pot". Running it as part
of the "static-analysis" CI target will ensure that we catch any such
issues in the future. E.g.:
$ make check-pot
XGETTEXT .build/pot/po/builtin/submodule--helper.c.po
xgettext: Non-ASCII string at builtin/submodule--helper.c:3381.
Please specify the source encoding through --from-code.
make: *** [.build/pot/po/builtin/submodule--helper.c.po] Error 1
1. cd5513a716 (i18n: Makefile: "pot" target to extract messages
marked for translation, 2011-02-22)
2. adc3b2b276 (Makefile: add xgettext target for *.sh files,
2011-05-14)
3. 5e9637c629 (i18n: add infrastructure for translating Git with
gettext, 2011-11-18)
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before commit fc0fd5b23b (Makefile: help gettext tools to cope with our
custom PRItime format, 2017-07-20), we'd feed source files to gettext
as-is; but because we need gettext to understand PRItime in the same
way that it understands PRIuMAX, since that commit we'd first check
that we had a clean checkout, then munge all of the processed files
in-place with "sed", generate "po/git.pot", and finally "reset --hard"
to undo our changes.
By generating "pot" snippets in ".build/pot/po" for each source file
and rewriting source files that contain PRItime macros to temporary
files in ".build/pot/po", we no longer need "make pot" to alter files
in place and do a "reset --hard" afterwards.
This speed of "make pot" is slower than before on an initial run,
because we run "xgettext" many times (once per source file), but it
can be boosted by parallelization. It is *much* faster for incremental
runs, and will allow us to implement related targets in subsequent
commits.
When the "pot" target was originally added in cd5513a716 (i18n:
Makefile: "pot" target to extract messages marked for translation,
2011-02-22) it behaved like a "normal" target. I.e. we'd skip the
re-generation of the po/git.pot if nothing had to be done.
Then after po/git.pot was checked in in dce37b66fb (l10n: initial
git.pot for 1.7.10 upcoming release, 2012-02-13) the target was broken
until 1f31963e92 (i18n: treat "make pot" as an explicitly-invoked
target, 2014-08-22) when it was made to depend on "FORCE". I.e. the
Makefile's dependency resolution inherently can't handle incremental
building when the target file may be updated by git (or something else
external to "make"). But this case no longer applies, so FORCE is no
longer needed.
That out of the way, the main logic change here is getting rid of the
"reset --hard":
We'll generate intermediate ".build/pot/po/%.po" files from "%", which
is handy to see at a glance what strings (if any) in a given file are
marked for translation:
$ make .build/pot/po/pretty.c.po
[...]
$ cat .build/pot/po/pretty.c.po
#: pretty.c:1051
msgid "unable to parse --pretty format"
msgstr ""
$
For these C source files which contain the PRItime macros, we will
create temporary munged "*.c" files in a tree in ".build/pot/po"
corresponding to our source tree, and have "xgettext" consider those.
The rule needs to be careful to "(cd .build/pot/po && ...)", because
otherwise the comments in the po/git.pot file wouldn't refer to the
correct source locations (they'd be prefixed with ".build/pot/po").
These temporary munged "*.c" files will be removed immediately after
the corresponding po files are generated, because some development
tools do not honor the ".gitignore" file and would otherwise be
confused by the duplicate source files in the ".build" directory.
The output of the generated po/git.pot file is changed in one minor
way: Because we're using msgcat(1) instead of xgettext(1) to
concatenate the output we'll now disambiguate where "TRANSLATORS"
comments come from, in cases where a message is the same in N files,
and either only one has a "TRANSLATORS" comment, or they're
different. E.g. for the "Your edited hunk[...]" message we'll now
apply this change (comment content elided):
+#. #-#-#-#-# add-patch.c.po #-#-#-#-#
#. TRANSLATORS: do not translate [y/n]
[...]
+#. #-#-#-#-# git-add--interactive.perl.po #-#-#-#-#
#. TRANSLATORS: do not translate [y/n]
[...]
#: add-patch.c:1253 git-add--interactive.perl:1244
msgid ""
"Your edited hunk does not apply. Edit again (saying \"no\" discards!) [y/n]? "
msgstr ""
There are six such changes, and they all make the context more
understandable, as msgcat(1) is better at handling these edge cases
than xgettext(1)'s previously used "--join-existing" flag.
But the filenames in the above disambiguation lines of the extracted
comments have an extra ".po" extension compared to the filenames at the
file locations. We could drop the ".po" extension from the intermediate
".build/pot/po/%.po" files to make the filenames in these
disambiguation lines more intuitive, but that would confuse developer
tools with lots of invalid C (and other) source files in the
".build/pot/po" directory.
The addition of "--omit-header" option for xgettext makes the "pot"
snippets in ".build/pot/po/*.po" smaller. But as we'll see in a
subsequent commit this header behavior has been hiding an
encoding-related bug from us, so let's carry it forward instead of
re-generating it with xgettext(1).
The "po/git.pot" file should have a header entry, because a proper
header entry will increase the speed of creating a new po file using
msginit and set a proper "POT-Creation-Date:" field in the header
entry of a "po/XX.po" file. We use xgettext to generate a separate
header file at ".build/pot/git.header" from "/dev/null", and use this
header to assemble "po/git.pot".
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Different users may generate a different message template file
"po/git.pot". This is because the POT file is generated from
"$(LOCALIZED_C)", which is supposed to list all the sources that we
extract the strings to be translated from. But "$(LOCALIZED_C)"
includes "$(C_OBJ)", which only lists the source files used in the
current build for a specific platform and specific compiler
conditions.
Instead of using "$(C_OBJ)", we use "$(FOUND_C_SOURCES)", which lists
all source files we keep track of (or ship in a tarball extract), to
form a stable "LOCALIZED_C". We also add "$(SCALAR_SOURCES)", which
is part of "$(C_OBJ)" but not included in "$(FOUND_C_SOURCES)".
With this update, the newly generated "po/git.pot" will have 30 new
entries coming from the following C source files:
* compat/fsmonitor/fsm-listen-win32.c
* compat/mingw.c
* compat/regex/regcomp.c
* compat/simple-ipc/ipc-win32.c
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We will feed xgettext more C source files, and in a different order, in
a subsequent commit. To generate a stable "po/git.pot" regardless of
the number and order of input source files, we sort the C, Perl, and
shell source files in groups before feeding them to xgettext.
Ævar suggested that we should not pass the option "--sort-by-file" to
xgettext to sort the translatable strings, as it would mix the three
groups of source files (C, Perl and shell) in the file "po/git.pot",
and change the order of translatable strings that appear on the same
line of a file.
With this update, the newly generated "po/git.pot" will have the same
entries, only in a different order.
With the help of a custom diff driver as shown below,
git config --global diff.gettext-fmt.textconv \
"msgcat --no-location --sort-by-file"
and appending a new entry "*.pot diff=gettext-fmt" to git attributes,
we can see that there are no substantial changes in "po/git.pot".
We won't check in the newly generated "po/git.pot", because we will
remove it from the tree in a later commit.
Suggested-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git fetch --recurse-submodules" from multiple remotes (either from
a remote group, or "--all") used to make one extra "git fetch" in
the submodules, which has been corrected.
* jc/avoid-redundant-submodule-fetch:
fetch: do not run a redundant fetch from submodule
The way "git fetch" without "--update-head-ok" ensures that HEAD in
no worktree points at any ref being updated was too wasteful, which
has been optimized a bit.
* os/fetch-check-not-current-branch:
fetch: limit shared symref check only for local branches
Documentation update.
* pb/ggg-in-mfc-doc:
MyFirstContribution: drop PR description for GGG single-patch contributions
MyFirstContribution: reference "The cover letter" in GitGitGadget section
MyFirstContribution: reference "The cover letter" in "Preparing Email"
MyFirstContribution: add standalone section on cover letter
MyFirstContribution: add "Anatomy of a Patch Series" section
"git fetch" unnecessarily failed when an unexpected optional
section appeared in the output, which has been corrected.
* jt/fetch-peek-optional-section:
fetch-pack: make unexpected peek result non-fatal
The "--current" option of "git show-branch" should have been made
incompatible with the "--reflog" mode, but this was not enforced,
which has been corrected.
* jc/show-branch-g-current:
show-branch: -g and --current are incompatible
"make coverage-report" without first running "make coverage" did
not produce any meaningful result, which has been corrected.
* ep/coverage-report-wants-test-to-have-run:
Makefile: add a prerequisite to the coverage-report target
The FreeBSD CI build (on Cirrus-CI) has been failing in
't9001-send-email.sh' for quite some time, with an error from the
runtime linker relating to the Perl installation:
$ GIT_SEND_EMAIL_NOTTY=1 git send-email \
'--from=Example <from@example.com>' '--to=nobody@example.com' \
'--smtp-server=/tmp/cirrus-ci-build/t/trash directory.t9001-send-email/fake.sendmail' \
--compose '--subject=foo' 0001-Second.patch
ld-elf.so.1: /usr/local/lib/perl5/5.32/mach/CORE/libperl.so.5.32: Undefined symbol "strerror_l@FBSD_1.6"
The first instance is in t9001.6, but several other tests in this file
fail similarly.
The FreeBSD image we use is FreeBSD 12.2, which has been unsupported
since March 31st, 2022 [1]. Switching to a supported version, 12.3,
makes this error disappear [2].
Change the image we use to FreeBSD 12.3.
[1] https://www.freebsd.org/security/unsupported/
[2] https://lore.kernel.org/git/9cc31276-ab78-fa8a-9fb4-b19266911211@gmail.com/
Reviewed-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Prior to Git 2.35.0, git could be run from an inaccessible working
directory so long as the git repository specified by options and/or
environment variables was accessible. For example:
git init repo
mkdir -p a/b
cd a/b
chmod u-x ..
git -C "${PWD%/a/b}/repo" status
If this example seems a bit contrived, consider running with the
repository owner as a substitute UID (e.g. with runuser(1) or sudo(8))
without ensuring the working directory is accessible by that user.
The code added by e6f8861bd4 ("setup: introduce
startup_info->original_cwd") to preserve the working directory attempts
to normalize the path using strbuf_realpath(). If that fails, as in the
case above, it is treated as a fatal error.
This commit treats strbuf_realpath() errors as non-fatal. If an error
occurs, setup_original_cwd() will continue without applying removal
prevention for cwd, resulting in the pre-2.35.0 behavior. The risk
should be minimal: since git will not operate on a repository with
inaccessible ancestors, this behavior is only known to occur when cwd
is a descendant of the repository, an ancestor of cwd is inaccessible,
and no ancestors of the repository are inaccessible.
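As a standalone illustration of the "treat normalization failure as
non-fatal" idea, here is a sketch that uses POSIX realpath(3) as a
stand-in for strbuf_realpath(); it is not the actual setup.c code, and
the function name is made up for the example.

#include <stdio.h>
#include <stdlib.h>

/*
 * Illustrative: try to normalize a directory path, but fall back to
 * "no removal prevention" instead of dying when the path cannot be
 * resolved (e.g. because an ancestor directory is inaccessible).
 */
char *normalize_cwd_or_null(const char *cwd)
{
    char *resolved = realpath(cwd, NULL);   /* NULL => malloc'd result */

    if (!resolved) {
        /* pre-2.35.0 behavior: skip the protection and keep going */
        fprintf(stderr,
                "note: could not resolve '%s'; not protecting it\n", cwd);
        return NULL;
    }
    return resolved;    /* caller free()s */
}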
Signed-off-by: Kevin Locke <kevin@kevinlocke.name>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
`UNICODE` and `_UNICODE` are not required when building git on Windows.
Actually, they should not be predefined at all.
There are two pieces of evidence that `(_)UNICODE` is not supposed to
be defined:
compat/win32/trace2_win32_process_info.c:83: It uses jw_array_string,
which accepts pe32.szExeFile as const char*.
t/helper/test-drop-caches.c:16: It calls GetCurrentDirectory with
Buffer as char*.
The autotools build system never defines `UNICODE` and `_UNICODE`, and
it builds fine on Windows.
Signed-off-by: Yuyi Wang <Strawberry_Str@hotmail.com>
Acked-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix one of the TODOs listed in the CMakeLists.txt by adding support
for building with pcre2.
As pcre2 doesn't provide a CMake find module, we find it with pkgconf.
This patch also works with vcpkg on Windows, with pkgconf and pcre2
installed.
Pkgconf and pcre2 are detected automatically, just like curl, expat
and iconv. The output of CMake indicates whether pcre2 was found.
Signed-off-by: Yuyi Wang <Strawberry_Str@hotmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
CMakeLists.txt didn't follow the grammar of `set`, and it would fail
when setting `USE_VCPKG` off on non-Windows platforms.
When the platform is Linux, the Makefile adds `compat/linux/procinfo.o`
to `COMPAT_OBJS`, but CMakeLists.txt didn't add
`compat/linux/procinfo.c` to `compat_SOURCES`, which would cause a
linkage error.
Signed-off-by: Yuyi Wang <Strawberry_Str@hotmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Technically, the pointer difference `end - start` _could_ be negative,
and when cast to an (unsigned) `size_t` that would cause problems. In
this instance, the symptom is:
dir.c: In function 'git_url_basename':
dir.c:3087:13: error: 'memchr' specified bound [9223372036854775808, 0]
exceeds maximum object size 9223372036854775807
[-Werror=stringop-overread]
CC ewah/bitmap.o
3087 | if (memchr(start, '/', end - start) == NULL
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
While it is a bit far-fetched to think that `end` (which is defined as
`repo + strlen(repo)`) and `start` (which starts at `repo` and never
steps beyond the NUL terminator) could result in such a negative
difference, GCC has no way of knowing that.
See also https://gcc.gnu.org/bugzilla//show_bug.cgi?id=85783.
Let's just add a safety check, primarily for GCC's benefit.
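A tiny self-contained example of the kind of guard being added (the
names are illustrative; this is not the actual dir.c code):

#include <stdio.h>
#include <string.h>

/*
 * Illustrative: guard the pointer difference so that a (theoretically)
 * negative "end - start" can never be converted to a huge size_t and
 * handed to memchr().
 */
const char *first_slash(const char *start, const char *end)
{
    if (end <= start)
        return NULL;    /* nothing to search */
    return memchr(start, '/', end - start);
}

int main(void)
{
    const char *repo = "host.example.com/path/to/repo.git";

    printf("%s\n", first_slash(repo, repo + strlen(repo)));
    return 0;
}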
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
GCC v12.x complains thusly:
compat/nedmalloc/nedmalloc.c: In function 'DestroyCaches':
compat/nedmalloc/nedmalloc.c:326:12: error: the comparison will always
evaluate as 'true' for the address of 'caches'
will never be NULL [-Werror=address]
326 | if(p->caches)
| ^
compat/nedmalloc/nedmalloc.c:196:22: note: 'caches' declared here
196 | threadcache *caches[THREADCACHEMAXCACHES];
| ^~~~~~
... and it is correct, of course.
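A minimal standalone reproduction of what GCC is pointing out
(illustrative; not the nedmalloc code): the address of an array member
of a valid struct can never be NULL, so the check is meaningless.

struct pool {
    int *caches[4];     /* an array member, not a pointer member */
};

int has_caches(struct pool *p)
{
    /*
     * "p->caches" decays to "&p->caches[0]", an address inside *p,
     * which is never NULL for a valid "p" -- exactly what GCC 12
     * warns about.  A meaningful check would inspect the entries,
     * e.g. "p->caches[0] != NULL".
     */
    return p->caches != 0;
}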
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git for Windows' SDK recently upgraded to GCC v12.x which points out
that the `pos` variable might be used even after the corresponding
memory was `realloc()`ed and therefore potentially no longer valid.
Since a subset of this SDK is used in Git's CI/PR builds, we need to fix
this to continue to be able to benefit from the CI/PR runs.
Note: This bug has been with us since 2a6b149c64 (mingw: avoid using
strbuf in syslog, 2011-10-06), and while it looks tempting to replace
the hand-rolled string manipulation with a `strbuf`-based one, that
commit's message explains why we cannot do that: The `syslog()` function
is called as part of the function in `daemon.c` which is set as the
`die()` routine, and since `strbuf_grow()` can call that function if it
runs out of memory, this would cause a nasty infinite loop that we do
not want to re-introduce.
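The general pattern for avoiding such use-after-realloc bugs is to
remember an offset rather than a pointer across the realloc(); here is
a standalone sketch of that pattern (not the actual
compat/win32/syslog.c fix, and the helper name is made up):

#include <stdlib.h>

/*
 * Illustrative: when growing a buffer that "*pos" points into, remember
 * the offset, realloc(), and recompute the pointer from the new base,
 * so we never use memory that realloc() may have freed.
 */
int grow_buffer(char **buf, char **pos, size_t *alloc, size_t extra)
{
    size_t offset = *pos - *buf;        /* survives the realloc() */
    char *newbuf = realloc(*buf, *alloc + extra);

    if (!newbuf)
        return -1;                      /* caller handles failure */
    *buf = newbuf;
    *alloc += extra;
    *pos = newbuf + offset;             /* recompute, do not reuse */
    return 0;
}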
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When using a multi-pack bitmap, pack-objects will try to perform its
traversal using a call to `traverse_bitmap_commit_list()`, which calls
`add_object_entry_from_bitmap()` to add each object it finds to its
packing list.
This path can cause pack-objects to add objects from packs that don't
have open pack_fds on them, by avoiding a call to `is_pack_valid()`.
This is because we only call `is_pack_valid()` on the preferred pack (in
order to do verbatim reuse via `reuse_partial_packfile_from_bitmap()`)
and not others when loading a MIDX bitmap.
In this case, `add_object_entry_from_bitmap()` will check whether it
wants each object entry by calling `want_object_in_pack()`, which will
call `want_found_object` (since its caller already supplied a
`found_pack`). In most cases (particularly without `--local`, and when
`ignored_packed_keep_on_disk` and `ignored_packed_keep_in_core` are
both "0"), we'll take the entry from the pack contained in the MIDX
bitmap, all without an open pack_fd.
When we then try to use that entry later to assemble the actual pack,
we'll be susceptible to any simultaneous writers moving that pack out of
the way (e.g., due to a concurrent repack) without having an open file
descriptor, causing races that result in errors like:
remote: Enumerating objects: 1498802, done.
remote: fatal: packfile ./objects/pack/pack-e57d433b5a588daa37fbe946e2b28dfaec03a93e.pack cannot be accessed
remote: aborting due to possible repository corruption on the remote side.
This race can happen even with multi-pack bitmaps, since we may open a
MIDX bitmap that is being rewritten long before its packs are actually
unlinked.
Work around this by calling `is_pack_valid()` from within
`want_found_object()`, matching the behavior in
`want_object_in_pack_one()` (which has an analogous call). Most calls to
`is_pack_valid()` should be basically no-ops, since only the first call
requires us to open a file (subsequent calls realize the file is already
open, and return immediately).
Importantly, when `want_object_in_pack()` is given a non-NULL
`*found_pack`, but `want_found_object()` rejects the copy of the object
in that pack, we must reset `*found_pack` and `*found_offset` to NULL
and 0, respectively. Failing to do so could lead to other checks in
`want_object_in_pack()` (such as `want_object_in_pack_one()`) using the
same (invalid) pack as `*found_pack`, meaning that we don't call
`is_pack_valid()` because `p == *found_pack`. This can lead the caller
to believe it can use a copy of an object from an invalid pack.
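The reset described above follows a common out-parameter pattern; here
is a simplified sketch with stand-in types (not the actual pack-objects
code, whose checks are far more involved):

struct pack;    /* stand-in for Git's "struct packed_git" */

/*
 * Illustrative: if the copy of the object in "p" is rejected, clear the
 * out-parameters so that later checks cannot skip their own validity
 * test just because "p == *found_pack".
 */
int want_copy_from(struct pack *p, int pack_is_valid,
                   struct pack **found_pack, long *found_offset)
{
    if (!pack_is_valid) {
        if (*found_pack == p) {
            *found_pack = NULL;
            *found_offset = 0;
        }
        return 0;       /* reject this copy */
    }
    return 1;           /* accept this copy */
}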
An alternative approach to closing this race would have been to call
`is_pack_valid()` on _all_ packs in a multi-pack bitmap on load. This
has a couple of problems:
- it is unnecessarily expensive in the cases where we don't actually
need to open any packs (e.g., in `git rev-list --use-bitmap-index
--count`)
- more importantly, it means any time we would have hit this race,
we'll avoid using bitmaps altogether, leading to significant
slowdowns by forcing a full object traversal
Co-authored-by: Victoria Dye <vdye@github.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A subsequent patch will teach `want_object_in_pack()` to set its
`*found_pack` and `*found_offset` pointers to NULL when the provided
pack does not pass the `is_pack_valid()` check.
The `--stdin-packs` mode of `pack-objects` is not quite prepared to
handle this. To prepare it for this change, do the following two things:
- Ensure provided packs pass the `is_pack_valid()` check when
collecting the caller-provided packs into the "included" and
"excluded" lists.
- Gracefully handle any _invalid_ packs being passed to
`want_object_in_pack()`.
Calling `is_pack_valid()` early on makes it substantially less likely
that we will have to deal with a pack going away, since we'll have an
open file descriptor on its contents much earlier.
But even packs with open descriptors can become invalid in the future if
we (a) hit our open descriptor limit, forcing us to close some open
packs, and (b) one of those just-closed packs has gone away in the
meantime.
`add_object_entry_from_pack()` depends on having a non-NULL
`*found_pack`, since it passes that pointer to `packed_object_info()`,
meaning that we would SEGV if the pointer became NULL (like we propose
to do in `want_object_in_pack()` in the following patch).
But avoiding calling `packed_object_info()` entirely is OK, too, since
its only purpose is to identify which objects in the included packs are
commits, so that they can form the tips of the advisory traversal used
to discover the object namehashes.
Failing to do this means that at worst we will produce lower-quality
deltas, but it does not prevent us from generating the pack as long as
we can find a copy of each object from the disappearing pack in some
other part of the repository.
Co-authored-by: Victoria Dye <vdye@github.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before calling `for_each_object_in_pack()`, the caller
`read_packs_list_from_stdin()` loops through each of the `include_packs`
and checks that its `->util` pointer (which is used to store the `struct
packed_git *` itself) is non-NULL.
This check is redundant, because `read_packs_list_from_stdin()` already
checks that the included packs are non-NULL earlier on in the same
function (and it does not add any new entries in between).
Remove this check, since it is not doing anything in the meantime.
Co-authored-by: Victoria Dye <vdye@github.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When pack-objects adds an entry to its packing list, it marks the
packfile and offset containing the object, which we may later use during
verbatim reuse (c.f., `write_reused_pack_verbatim()`).
If the packfile in question is deleted in the background (e.g., due to a
concurrent `git repack`), we'll die() as a result of calling use_pack(),
unless we have an open file descriptor on the pack itself. 4c08018204
(pack-objects: protect against disappearing packs, 2011-10-14) worked
around this by opening the pack ahead of time before recording it as a
valid source for reuse.
4c08018204's treatment meant that we could tolerate disappearing packs,
since it ensures we always have an open file descriptor on any pack that
we mark as a valid source for reuse. This tightens the race to only
happen when we need to close an open pack's file descriptor (c.f., the
caller of `packfile.c::get_max_fd_limit()`) _and_ that pack was deleted,
in which case we'll complain that a pack could not be accessed and
die().
The pack bitmap code does this, too, since prior to dc1daacdcc
(pack-bitmap: check pack validity when opening bitmap, 2021-07-23) it
was vulnerable to the same race.
The MIDX bitmap code does not do this, and is vulnerable to the same
race. Apply the same treatment as dc1daacdcc to the routine responsible
for opening the multi-pack bitmap's preferred pack to close this race.
This patch handles the "preferred" pack (c.f., the section
"multi-pack-index reverse indexes" in
Documentation/technical/pack-format.txt) specially, since pack-objects
depends on reusing exact chunks of that pack verbatim in
reuse_partial_packfile_from_bitmap(). So if that pack cannot be loaded,
the utility of a bitmap is significantly diminished.
Similar to dc1daacdcc, we could technically just add this check in
reuse_partial_packfile_from_bitmap(), since it's possible to use a MIDX
.bitmap without needing to open any of its packs. But it's simpler to do
the check as early as possible, covering all direct uses of the
preferred pack. Note that doing this check early requires us to call
prepare_midx_pack() early, too, so move the relevant part of that loop
from load_reverse_index() into open_midx_bitmap_1().
Subsequent patches handle the non-preferred packs in a slightly
different fashion.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git archive --add-file=<path>" picked up the raw permission bits
from the path and propagated to zip output in some cases, without
normalization, which has been corrected (tar output did not have
this issue).
* jc/archive-add-file-normalize-mode:
archive: do not let on-disk mode leak to zip archives
A few test framework fixes, along with fixes to issues found by
valgrind.
* ab/valgrind-fixes:
commit-graph.c: don't assume that stat() succeeds
object-file: fix a unpack_loose_header() regression in 3b6a8db3b0
log test: skip a failing mkstemp() test under valgrind
tests: using custom GIT_EXEC_PATH breaks --valgrind tests
When modifying the sparse-checkout definition, the sparse-checkout
builtin calls update_sparsity() to modify the SKIP_WORKTREE bits of all
cache entries in the index. Before, we needed the index to be fully
expanded in order to ensure we had the full list of files that match
the new patterns.
Insert a call to reset_sparse_directories() that expands sparse
directories that are within the new pattern list, but only far enough
that every necessary file path now exists as a cache entry. The
remaining logic within update_sparsity() will modify the SKIP_WORKTREE
bits appropriately.
This allows us to disable command_requires_full_index within the
sparse-checkout builtin. Add tests that demonstrate that we are not
expanding to a full index unnecessarily.
We can see the improved performance in the p2000 test script:
Test HEAD~1 HEAD
------------------------------------------------------------------------
2000.24: git ... (sparse-v3) 2.14(1.55+0.58) 1.57(1.03+0.53) -26.6%
2000.25: git ... (sparse-v4) 2.20(1.62+0.57) 1.58(0.98+0.59) -28.2%
These reductions of 26-28% are small compared to most examples, but the
time is dominated by writing a new copy of the base repository to the
worktree and then deleting it again. The fact that the previous index
expansion was such a large portion of the time shows how important it
is to complete this sparse index integration.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The sparse-checkout builtin is almost completely integrated with the
sparse index, allowing the sparse-checkout boundary to be modified
without expanding a sparse index to a full one. Add a test to
p2000-sparse-operations.sh that adds a directory to the sparse-checkout
definition, then removes it. Using both operations is important to
ensure that the operation is doing the same work in each repetition as
well as leaving the test repo in a good state for later tests.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
To complete the implementation of expand_to_pattern_list(), we need to
detect when a sparse directory entry should remain sparse. This avoids a
full expansion, so we now need to use the PARTIALLY_SPARSE mode to
indicate this state.
There still are no callers to this method, but we will add one in the
next change.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The expand_to_pattern_list() method expands sparse directory entries
to their list of contained files when either the pattern list is NULL or
the directory is contained in the new pattern list's cone mode patterns.
It is possible that the pattern list has a recursive match with a
directory 'A/B/C/' and so an existing sparse directory 'A/B/' would need
to be expanded. If there exists a directory 'A/B/D/', then that
directory should not be expanded and instead we can create a sparse
directory.
To implement this, we plug into the add_path_to_index() callback for the
call to read_tree_at(). Since we now need access to both the index we
are writing and the pattern list we are comparing, create a 'struct
modify_index_context' to use as a data transfer object. It is important
that we use the given pattern list since we will use this pattern list
to change the sparse-checkout patterns and cannot use
istate->sparse_checkout_patterns.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the --no-sparse-index option is supplied, the sparse-checkout
builtin should explicitly ask to expand a sparse index to a full one.
This is currently done implicitly due to the command_requires_full_index
protection, but that will be removed in an upcoming change.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Given a 'struct cache_tree', it may be beneficial to navigate directly
to the node within it that corresponds to a given path name. Create
cache_tree_find_path() for this purpose. It returns NULL when no such
path exists.
The implementation is adapted from do_invalidate_path() which does a
similar search but also modifies the nodes it finds along the way. The
method could be implemented simply using tail-recursion, but this while
loop does the same thing.
This new method is not currently used, but will be in an upcoming
change.
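A rough standalone sketch of an iterative path walk of that kind, with
stand-in types (this is not the actual cache-tree code):

#include <string.h>

struct node {
    const char *name;       /* one path component */
    struct node **children;
    int nr;
};

/* Illustrative: walk "a/b/c" one component at a time, no recursion. */
struct node *find_path(struct node *root, const char *path)
{
    struct node *it = root;

    while (it && *path) {
        const char *slash = strchr(path, '/');
        size_t len = slash ? (size_t)(slash - path) : strlen(path);
        struct node *next = NULL;
        int i;

        for (i = 0; i < it->nr; i++) {
            if (!strncmp(it->children[i]->name, path, len) &&
                !it->children[i]->name[len]) {
                next = it->children[i];
                break;
            }
        }
        it = next;                          /* NULL when not found */
        path = slash ? slash + 1 : path + len;
    }
    return it;
}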
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A future change will present a temporary, in-memory mode where the index
can both contain sparse directory entries but also not be completely
collapsed to the smallest possible sparse directories. This will be
necessary for modifying the sparse-checkout definition while using a
sparse index.
For now, convert the single-bit member 'sparse_index' in 'struct
index_state' to be an 'enum sparse_index_mode' with three modes:
* INDEX_EXPANDED (0): No sparse directories exist. This is always the
case for repositories that do not use cone-mode sparse-checkout.
* INDEX_COLLAPSED: Sparse directories may exist. Files outside the
sparse-checkout cone are reduced to sparse directory entries whenever
possible.
* INDEX_PARTIALLY_SPARSE: Sparse directories may exist. Some file
entries outside the sparse-checkout cone may exist. Running
convert_to_sparse() may further reduce those files to sparse directory
entries.
The main reason to store this extra information is to allow
convert_to_sparse() to short-circuit when the index is already in
INDEX_EXPANDED mode but to actually do the necessary work when in
INDEX_PARTIALLY_SPARSE mode.
The INDEX_PARTIALLY_SPARSE mode will be used in an upcoming change.
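A hedged sketch of the shape of such an enum, using the names and
meanings listed above (illustrative only; the real definition lives in
Git's sparse-index code and may differ in detail):

enum sparse_index_mode {
    INDEX_EXPANDED = 0,         /* no sparse directories exist */
    INDEX_COLLAPSED,            /* collapsed as far as possible */
    INDEX_PARTIALLY_SPARSE      /* some entries could still be collapsed */
};

/* Illustrative: sparse directories can only appear in the last two modes. */
int may_contain_sparse_directories(enum sparse_index_mode mode)
{
    return mode != INDEX_EXPANDED;
}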
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is the first change in a series to allow modifying the
sparse-checkout pattern set without expanding a sparse index to a full
one in the process. Here, we focus on the problem of expanding the
pattern set through a command like 'git sparse-checkout add <path>'
which needs to create new index entries for the paths now being written
to the worktree.
To achieve this, we need to be able to replace sparse directory entries
with their contained files and subdirectories. Once this is complete,
other code paths can discover those cache entries and write the
corresponding files to disk before committing the index.
We already have logic in ensure_full_index() that expands the index
entries, so we will use that as our base. Create a new method,
expand_index(), which takes a pattern list, but for now mostly ignores
it. The current implementation is only correct when the pattern list is
NULL as that does the same as ensure_full_index(). In fact,
ensure_full_index() is converted to a shim over expand_index().
A future update will actually implement expand_index() to its full
capabilities. For now, it is created and documented.
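Based on the description above, the shim relationship can be pictured
roughly like this (stand-in types and a stubbed body; the real
prototypes and behavior live in Git's sparse-index code):

struct index_state { int nr; };     /* stand-in for this sketch */
struct pattern_list;                /* opaque for this sketch */

/* Illustrative stub: expand entries matching "pl"; NULL means expand all. */
static void expand_index(struct index_state *istate, struct pattern_list *pl)
{
    (void)istate;
    (void)pl;
}

/* The old entry point becomes a thin wrapper over the new one. */
void ensure_full_index(struct index_state *istate)
{
    expand_index(istate, NULL);
}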
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The 'sparse-index contents' test checks that the sparse index has the
correct set of sparse directories in the index after modifying the cone
mode patterns using 'git sparse-checkout set'. Add to the coverage here
by adding more complicated scenarios that were not previously tested.
In order to check paths that do not exist at HEAD, we need to modify the
test_sparse_checkout_set helper slightly:
1. Add the --skip-checks argument to the 'set' command to avoid failures
when passing paths that do not exist at HEAD.
2. When looking for the non-existence of sparse directories for the
paths in $CONE_DIRS, allow the rev-list command to fail because the
path does not exist at HEAD.
This allows us to add some interesting test cases.
Helped-by: Victoria Dye <vdye@github.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before expanding this test with more involved cases, first extract the
repeated logic into a new test_sparse_checkout_set helper. This helper
checks that 'git sparse-checkout set ...' succeeds and then verifies
that certain directories have sparse directory entries in the sparse
index. It also verifies that the in-cone directories are _not_ sparse
directory entries in the sparse index.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We used to call that function already before printing the final verdict.
However, now that we added grouping to the GitHub workflow output, we
will want to include even that part in the collapsible group for that
test case.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The full logs are contained in the `failed-tests-*.zip` artifacts that
are attached to the failed CI run. Since this is not immediately
obvious to the well-disposed reader, let's mention it explicitly.
Suggested-by: Victoria Dye <vdye@github.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes the output easier to digest.
Note: since workflow output currently cannot contain any nested groups
(see https://github.com/actions/runner/issues/802 for details), we need
to remove the explicit grouping that would span the entirety of each
failed test script.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We want to mark up the test case preamble when presenting test output in
Git's GitHub workflow. Let's suppress the non-marked-up version in that
case. Any information it would contain is included in the marked-up
variant already.
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In most instances, looking at the log of failed test cases is enough to
identify the problem.
In some (rare?) instances, a previous test case that was marked as
successful actually has information pertaining to a later test case that
fails.
To allow the page to load relatively quickly, let's only show the logs
of the failed test cases. The full logs are available for download as
artifacts, should a deeper investigation become necessary.
Co-authored-by: Victoria Dye <vdye@github.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A couple of commands exist to spruce up the output in GitHub workflows:
https://docs.github.com/en/actions/learn-github-actions/workflow-commands-for-github-actions
In addition to the `::group::<label>`/`::endgroup::` commands (which we
already use to structure the output of the build step better), we also
use `::error::`/`::notice::` to draw the attention to test failures and
to test cases that were expected to fail but didn't.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The current output of Git's GitHub workflow can be quite confusing,
especially for contributors new to the project.
To make it more helpful, let's introduce some collapsible grouping.
Initially, readers will see the high-level view of what actually
happened (did the build fail, or the test suite?). To drill down, the
respective group can be expanded.
Note: sadly, workflow output currently cannot contain any nested groups
(see https://github.com/actions/runner/issues/802 for details),
therefore we take pains to ensure to end any previous group before
starting a new one.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When investigating a test failure, the time that matters most is the
time it takes from getting aware of the failure to displaying the output
of the failing test case.
You currently have to know a lot of implementation details when
investigating test failures in the CI runs. The first step is easy: the
failed job is marked quite clearly, but when opening it, the failed step
is expanded, which in our case is the one running
`ci/run-build-and-tests.sh`. This step, most notably, only offers a
high-level view of what went wrong: it prints the output of `prove`
which merely tells the reader which test script failed.
The actually interesting part is in the detailed log of said failed
test script. But that log is shown in the CI run's step that runs
`ci/print-test-failures.sh`. And that step is _not_ expanded in the web
UI by default. It is even marked as "successful", which makes it very
easy to miss that there is useful information hidden in there.
Let's help the reader by showing the failed tests' detailed logs in the
step that is expanded automatically, i.e. directly after the test suite
failed.
This also helps the situation where the _build_ failed and the
`print-test-failures` step was executed under the assumption that the
_test suite_ failed, and consequently failed to find any failed tests.
An alternative way to implement this patch would be to source
`ci/print-test-failures.sh` in the `handle_test_failures` function to
show these logs. However, over the course of the next few commits, we
want to introduce some grouping which would be harder to achieve that
way (for example, we do want a leaner, and colored, preamble for each
failed test script, and it would be trickier to accommodate the lack of
nested groupings in GitHub workflows' output).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the web UI of GitHub workflows, failed runs are presented with the
job step that failed auto-expanded. In the current setup, this is not
helpful at all because that shows only the output of `prove`, which says
which test failed, but not in what way.
What would help the reader understand what went wrong is the verbose
test output of the failed test.
The logs of the failed runs do contain that verbose test output, but it
is shown in the _next_ step (which is marked as succeeding, and is
therefore _not_ auto-expanded). Anyone not intimately familiar with this
would completely miss the verbose test output, being left mostly
puzzled with the test failures.
We are about to show the failed test cases' output in the _same_ step,
so that the user has a much easier time figuring out what went wrong.
But first, we must partially revert the change that tried to improve the
CI runs by combining the `Makefile` targets to build into a single
`make` invocation. That might have sounded like a good idea at the time,
but it does make it rather impossible for the CI script to determine
whether the _build_ failed, or the _tests_. If the tests were run at
all, that is.
So let's go back to calling `make` for the build, and call `make test`
separately so that we can easily detect that _that_ invocation failed,
and react appropriately.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the test case's output, we do want newline characters, but in the
XML attributes we do not want them.
However, the `xml_attr_encode` function always adds a Line Feed at the
end (which is then encoded as `&#x0a;`), even for XML attributes.
This seems not to faze Azure Pipelines' XML parser, but it is still
incorrect, so let's fix it.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code writing JUnit XML is interspersed directly with all the code in
`t/test-lib.sh`, and it is therefore not only ill-separated, but
introducing yet another output format would make the situation even
worse.
Let's introduce an abstraction layer by hiding the JUnit XML code behind
four new functions that are supposed to be called before and after each
test and test case.
This is not just an academic exercise, refactoring for refactoring's
sake. We _actually_ want to introduce such a new output format, to
make it substantially easier to diagnose test failures in our GitHub
workflow, therefore we do need this refactoring.
This commit is best viewed with `git show --color-moved
--color-moved-ws=allow-indentation-change <commit>`.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In b92cb86ea1 (travis-ci: check that all build artifacts are
.gitignore-d, 2017-12-31), a function was introduced with a code style
that is different from the surrounding code: it added the opening curly
brace on its own line, when all the existing functions in the same file
cuddle that brace on the same line as the function name.
Let's make the code style consistent again.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create a technical document to explain cruft packs. It contains a brief
overview of the problem, some background, details on the implementation,
and a couple of alternative approaches not considered here.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
macOS CI jobs have been occasionally flaky due to tentative version
skew between perforce and the homebrew packager. Instead of
failing the whole CI job, just let it skip the p4 tests when this
happens.
* cb/ci-make-p4-optional:
ci: use https, not http to download binaries from perforce.com
ci: reintroduce prevention from perforce being quarantined in macOS
ci: avoid brew for installing perforce
ci: make failure to find perforce more user friendly
Introduce and apply a coccinelle rule to discourage an explicit
comparison between a pointer and NULL, and apply the clean-up to
the maintenance track.
* ep/maint-equals-null-cocci:
tree-wide: apply equals-null.cocci
tree-wide: apply equals-null.cocci
contrib/coccinnelle: add equals-null.cocci
"git show :<path>" learned to work better with the sparse-index
feature.
* ds/sparse-colon-path:
rev-parse: integrate with sparse index
object-name: diagnose trees in index properly
object-name: reject trees found in the index
show: integrate with the sparse index
t1092: add compatibility tests for 'git show'
Teach "git stash" to work better with sparse index entries.
* vd/sparse-stash:
unpack-trees: preserve index sparsity
stash: apply stash using 'merge_ort_nonrecursive()'
read-cache: set sparsity when index is new
sparse-index: expose 'is_sparse_index_allowed()'
stash: integrate with sparse index
stash: expand sparse-checkout compatibility testing
"git bisect" was too silent before it is ready to start computing
the actual bisection, which has been corrected.
* cd/bisect-messages-from-pre-flight-states:
bisect: output bisect setup status in bisect log
bisect: output state before we are ready to compute bisection
"git pull" without "--recurse-submodules=<arg>" made
submodule.recurse take precedence over fetch.recurseSubmodules by
mistake, which has been corrected.
* gc/pull-recurse-submodules:
pull: do not let submodule.recurse override fetch.recurseSubmodules
Trace2 documentation updates.
* js/trace2-doc-fixes:
trace2 docs: add missing full stop
trace2 docs: clarify what `varargs` is all about
trace2 docs: fix a JSON formatted example
trace2 docs: surround more terms in backticks
trace2 docs: "printf" is not an English word
trace2 docs: a couple of grammar fixes
"git log --since=X" will stop traversal upon seeing a commit that
is older than X, but there may be commits behind it that is younger
than X when the commit was created with a faulty clock. A new
option is added to keep digging without stopping, and instead
filter out commits with timestamp older than X.
* mv/log-since-as-filter:
log: "--since-as-filter" option is a non-terminating "--since" variant
The temporary files fed to the external diff command are now generated
inside a new temporary directory under the same basename.
* rs/external-diff-tempfile:
diff: use mks_tempfile_dt()
tempfile: add mks_tempfile_dt()
New tests for the safe.directory mechanism.
* sg/safe-directory-tests-and-docs:
safe.directory: document and check that it's ignored in the environment
t0033-safe-directory: check when 'safe.directory' is ignored
t0033-safe-directory: check the error message without matching the trash dir
The previous patch demonstrates a scenario where the list of packs
written by `pack-objects` (and stored in the `names` string_list) is
out-of-order, and can thus cause us to delete packs we shouldn't.
This patch resolves that bug by ensuring that `names` is sorted in all
cases, not just when
delete_redundant && pack_everything & ALL_INTO_ONE
is true.
Because we did sort `names` in that case (which, prior to `--geometric`
repacks, was the only time we would actually delete packs), this is
only a bug for `--geometric` repacks.
It would be sufficient to only sort `names` when `delete_redundant` is
set to a non-zero value. But sorting a small list of strings is cheap,
and it is defensive against future calls to `string_list_has_string()`
on this list.
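A self-contained illustration of why an unsorted list breaks a lookup
that assumes binary search, using plain libc qsort()/bsearch() rather
than Git's string_list API:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int cmp_str(const void *a, const void *b)
{
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

int main(void)
{
    /* deliberately out of order, like an unsorted "names" list */
    const char *names[] = { "pack-b", "pack-c", "pack-a" };
    size_t nr = sizeof(names) / sizeof(names[0]);
    const char *key = "pack-a";

    /* a binary search over unsorted input may miss an existing entry */
    printf("unsorted: %s\n",
           bsearch(&key, names, nr, sizeof(*names), cmp_str) ?
           "found" : "NOT found");

    /* sorting first makes the membership test reliable */
    qsort(names, nr, sizeof(*names), cmp_str);
    printf("sorted:   %s\n",
           bsearch(&key, names, nr, sizeof(*names), cmp_str) ?
           "found" : "NOT found");
    return 0;
}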
Co-discovered-by: Victoria Dye <vdye@github.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When doing a `--geometric=<d>` repack, `git repack` determines a
splitting point among packs ordered by their object count such that:
- each pack above the split has at least `<d>` times as many objects
as the next-largest pack by object count, and
- the first pack above the split has at least `<d>` times as many
objects as the sum of all packs below the split line combined
`git repack` then creates a pack containing all of the objects contained
in packs below the split line by running `git pack-objects
--stdin-packs` underneath. Once packs are moved into place, then any
packs below the split line are removed, since their objects were just
combined into a new pack.
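To make the splitting rule concrete, here is a small standalone sketch
that picks such a split over packs sorted by ascending object count; it
is illustrative only and differs in detail from the real implementation
in builtin/repack.c.

#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative: given object counts sorted in ascending order, return
 * the index of the first pack kept above the split; packs [0, split)
 * are the ones rolled up into a single new pack.  Every promotion
 * preserves "kept pack >= factor * (sum of everything below it)".
 */
size_t geometric_split(const uint64_t *counts, size_t nr, unsigned factor)
{
    size_t split = nr;
    uint64_t below = 0;
    size_t i;

    for (i = 0; i < nr; i++)
        below += counts[i];

    while (split > 0 &&
           counts[split - 1] >= factor * (below - counts[split - 1])) {
        below -= counts[split - 1];
        split--;
    }
    return split;
}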
But `git repack` tries to be careful to avoid removing a pack that it
just wrote, by checking:
struct packed_git *p = geometry->pack[i];
if (string_list_has_string(&names, hash_to_hex(p->hash)))
continue;
in the `delete_redundant` and `geometric` conditional towards the end of
`cmd_repack`.
But it's possible to trick `git repack` into not recognizing a pack that
it just wrote when `names` is out-of-order (which violates
`string_list_has_string()`'s assumption that the list is sorted and thus
binary search-able).
When this happens in just the right circumstances, it is possible to
remove a pack that we just wrote, leading to object corruption.
Luckily, this is quite difficult to provoke in practice (for a couple of
reasons):
- we ordinarily write just one pack, so `names` usually contains just
one entry, and is thus sorted
- when we do write more than one pack (e.g., due to `--max-pack-size`)
we have to: (a) write a pack identical to one that already
exists, (b) have that pack be below the split line, and (c) have
the set of packs written by `pack-objects` occur in an order which
tricks `string_list_has_string()`.
Demonstrate the above scenario in a failing test, which causes `git
repack --geometric` to write a pack which occurs below the split line,
_and_ fail to recognize that it wrote that pack.
The following patch will fix this bug.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Update 'repack' to ignore packs named on the command line with the
'--keep-pack' option. Specifically, modify 'init_pack_geometry()' to treat
command line-kept packs the same way it treats packs with an on-disk '.keep'
file (that is, skip the pack and do not include it in the 'geometry'
structure).
Without this handling, a '--keep-pack' pack would be included in the
'geometry' structure. If the pack is *before* the geometry split line (with
at least one other pack and/or loose objects present), 'repack' assumes the
pack's contents are "rolled up" into another pack via 'pack-objects'.
However, because the internally-invoked 'pack-objects' properly excludes
'--keep-pack' objects, any new pack it creates will not contain the kept
objects. Finally, 'repack' deletes the '--keep-pack' as "redundant" (since
it assumes 'pack-objects' created a new pack with its contents), resulting
in possible object loss and repository corruption.
Add a test ensuring that '--keep-pack' packs are now appropriately handled.
Co-authored-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We do make sure that an attempt to merge with various forms of local
changes will "fail", but the point of stopping the merge is so that
we refrain from discarding uncommitted local changes that could be
precious. Add a few more checks for each case to make sure the
local changes are left intact.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In c7c4bdeccf (run-command API: remove "env" member, always use
"env_array", 2021-11-25), there was a push to replace
cld.env = env->v;
with
strvec_pushv(&cld.env_array, env->v);
The conversion in c7c4bdeccf was mostly plug-and-play, with the snag
that some instances of strvec_pushv() became guarded with a NULL check
to ensure that the second argument was non-NULL.
This conversion was slightly over-eager to add a conditional in
builtin/receive-pack.c::unpack(), since we know at the point that we
add the result of `tmp_objdir_env()` into the child process's
environment, that `tmp_objdir` is non-NULL.
This follows from the conditional just before our strvec_pushv() call
(which returns from the function if `tmp_objdir` was NULL), as well as
the call to tmp_objdir_add_as_alternate() just below, which relies on
its argument (`tmp_objdir`) being non-NULL.
In the meantime, this extra conditional isn't hurting anything. But it
is redundant and thus unnecessarily confusing. So let's remove it.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When 7dce19d3 (fetch/pull: Add the --recurse-submodules option,
2010-11-12) introduced the "--recurse-submodules" option, the
approach taken was to perform fetches in submodules only once, after
all the main fetching (it may usually be a fetch from a single
remote, but it could be fetching from a group of remotes using
fetch_multiple()) succeeded. Later we added "--all" to fetch from
all defined remotes, which complicated things even more.
If your project has a submodule, and you try to run "git fetch
--recurse-submodule --all", you'd see a fetch for the top-level,
which invokes another fetch for the submodule, followed by another
fetch for the same submodule. All but the last fetch for the
submodule come from a "git fetch --recurse-submodules" subprocess
that is spawned via the fetch_multiple() interface for the remotes,
and the last fetch comes from the code at the end.
Because recursive fetching from submodules is done in each fetch for
the top-level in fetch_multiple(), the last fetch in the submodule
is redundant. It only matters when fetch_one() interacts with a
single remote at the top-level.
While we are at it, there is one optimization that exists in dealing
with a group of remote, but is missing when "--all" is used. In the
former, when the group turns out to be a group of one, instead of
spawning "git fetch" as a subprocess via the fetch_multiple()
interface, we use the normal fetch_one() code path. Do the same
when handing "--all", if it turns out that we have only one remote
defined.
Reviewed-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
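As a hedged illustration (the trace format is not stable and may differ
between versions), one way to observe how many submodule fetches get
spawned before and after this change:

    # in a superproject with an initialized submodule and several remotes
    GIT_TRACE=1 git fetch --recurse-submodules --all 2>&1 |
    grep 'run_command:.*fetch'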
This switch statement was recently added to make it clear that
unpack_loose_header() returns an enum value, not an int. This adds
complications for future developers if that enum gains new values, since
that developer would need to add a case statement to this switch for
little real value.
Instead, we can revert back to an 'if' statement, but make the enum
explicit by using "!= ULHR_OK" instead of assuming it has the numerical
value zero.
Co-authored-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change the parse_bundle_header() function to be non-static, and rename
it to parse_bundle_header_fd(). The parse_bundle_header() function is
already public, and it's a thin wrapper around this function. This
will be used by code that wants to pass a fd to the bundle API.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the 'url' parameter was absolute, the previous implementation would
concatenate 'remote_url' with 'url'. Instead, we want to return 'url' in
this case.
The documentation now discusses what happens when supplying two
absolute URLs.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This method was initially written in 63e95beb0 (submodule: port
resolve_relative_url from shell to C, 2016-05-15). As we will need
similar functionality in the bundle URI feature, extract this to be
available in remote.h.
The code is almost exactly the same, except for the following trivial
differences:
* Fix whitespace and wrapping issues with the prototype and argument
lists.
* Let's call starts_with_dot_{,dot_}slash_native() instead of the
functionally identical "starts_with_dot_{,dot_}slash()" wrappers in
"builtin/submodule--helper.c".
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This method will be used in an upcoming extension of git-remote-curl to
download a single file over HTTP(S) by request.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Move the populating of the --keep=* option argument to "index-pack" into
a static function; a subsequent commit will make use of it in another
function.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a version of the deref_without_lazy_fetch function which can be
called with custom oi_flags and which can grab information about the
"object_type". This will be used for the bundle-uri client in a
subsequent commit.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a path_match_flags() function and have the two sets of
starts_with_dot_{,dot_}slash() functions added in
63e95beb08 (submodule: port resolve_relative_url from shell to C,
2016-04-15) and a2b26ffb1a (fsck: convert gitmodules url to URL
passed to curl, 2020-04-18) be thin wrappers for it.
As the latter of those notes, the fsck version was copied from the
initial builtin/submodule--helper.c version.
Since the code added in a2b26ffb1a was really doing the same as
win32_is_dir_sep() added in 1cadad6f65 (git clone <url>
C:\cygwin\home\USER\repo' is working (again), 2018-12-15), let's move
the latter to git-compat-util.h as is_xplatform_dir_sep(). We can
then call either it or the platform-specific is_dir_sep() from this
new function.
Let's likewise change code in various other places that was hardcoding
checks for "'/' || '\\'" to use the new is_xplatform_dir_sep(). As can
be seen in those callers some of them still concern themselves with
':' (Mac OS classic?), but let's leave the question of whether that
should be consolidated for some other time.
As we expect to make wider use of the "native" case in the future,
define and use two starts_with_dot_{,dot_}slash_native() convenience
wrappers. This makes the diff in builtin/submodule--helper.c much
smaller.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Refactor the sending of the "agent" and "object-format" capabilities
into a function.
This was added in its current form in ab67235bc4 (connect: parse v2
refs with correct hash algorithm, 2020-05-25). When we connect to a v2
server we need to know about its object-format, and it needs to know
about ours. Since most things in connect.c and transport.c piggy-back
on the eager fetching of remote refs via handshake(), those commands
can make use of the object-format just sent over by ls-refs.
But I'm about to add a command that may or may not come after ls-refs,
and we still need the server to know about our user-agent and
object-format. So let's split this into a function.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This check was introduced in 8ee5d73137 (Fix fetch/pull when run without
--update-head-ok, 2008-10-13) in order to protect against replacing the ref
of the active branch by mistake, for example by running git fetch origin
master:master.
It was later extended in 8bc1f39f41 (fetch: protect branches checked out
in all worktrees, 2021-12-01) to scan all worktrees.
This operation is very expensive (takes about 30s in my repository) when
there are many tags or branches, and it is executed on every fetch, even if
no local heads are updated at all.
Limit it to protect only refs/heads/* to improve fetch performance.
Signed-off-by: Orgad Shaneh <orgads@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Libcurl has a CURLOPT_RESOLVE easy option that allows
the result of hostname resolution in the following
format to be passed:
[+]HOST:PORT:ADDRESS[,ADDRESS]
This way, redirects and everything operating against the
HOST+PORT will use the provided ADDRESS(es).
The following format is also allowed to stop using
hostname resolutions that have already been passed:
-HOST:PORT
See https://curl.se/libcurl/c/CURLOPT_RESOLVE.html for
more details.
Let's add a corresponding "http.curloptResolve" config
option that takes advantage of CURLOPT_RESOLVE.
Each value configured for the "http.curloptResolve" key
is passed "as is" to libcurl through CURLOPT_RESOLVE, so
it should be in one of the above 2 formats. This keeps
the implementation simple and makes us consistent with
libcurl's CURLOPT_RESOLVE, and with curl's corresponding
`--resolve` command line option.
The implementation uses CURLOPT_RESOLVE only in
get_active_slot() which is called by all the HTTP
request sending functions.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
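For illustration, a minimal usage sketch (hostname and address are
placeholders, not taken from any real setup):

    # pre-resolve the host for all HTTP requests made by git
    git config http.curloptResolve "example.com:443:192.0.2.1"
    git ls-remote https://example.com/project.git

    # later, ask libcurl to forget that entry again
    git config --add http.curloptResolve "-example.com:443"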
When a Git server responds to a fetch request, it may send optional
sections before the packfile section. To handle this, the Git client
calls packet_reader_peek() (see process_section_header()) in order to
see what's next without consuming the line.
However, as implemented, Git errors out whenever what's peeked is not an
ordinary line. This is not only unexpected (here, we only need to know
whether the upcoming line is the section header we want) but causes
errors to include the name of a section header that is irrelevant to the
cause of the error. For example, at $DAYJOB, we have seen "fatal: error
reading section header 'shallow-info'" error messages when none of the
repositories involved are shallow.
Therefore, fix this so that the peek returns 1 if, and only if, the
upcoming line is the wanted section header. Because of this change,
reader->line may now be NULL later in the function, so update the error
message printing code accordingly.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a support library that provides one function that can be used
to run a "scriptlet" of commands through sudo, and that helps invoke
sudo in the slightly awkward way that is required to ensure it doesn't
block the call (if a shell is allowed, as tested in the prerequisite)
and doesn't run the command through a different shell than the one
we intended.
Add additional negative tests, as suggested by Junio, that use a
new workspace owned by root.
Document a regression that was introduced by previous commits where
root will no longer be able to access directories they own unless
SUDO_UID is removed from the environment.
The tests document additional ways that this new restriction could
be worked around, and the documentation explains why it might instead
be considered a feature, but a "fix" is planned for a future change.
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
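A minimal sketch of the idea (the helper name and details here are
illustrative and not the actual library code): run a scriptlet through
sudo non-interactively while forcing a known shell:

    run_with_sudo () {
        sudo -n /bin/sh -c "$1"
    }
    run_with_sudo 'id -u && git status'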
bdc77d1d68 (Add a function to determine whether a path is owned by the
current user, 2022-03-02) checks the effective uid of the running
process using geteuid(), but doesn't account for cases where that user is
root (because git was invoked through sudo or a compatible tool) and the
original uid that the repository trusted for its config is no longer known,
therefore failing the following otherwise safe call:
guy@renard ~/Software/uncrustify $ sudo git describe --always --dirty
[sudo] password for guy:
fatal: unsafe repository ('/home/guy/Software/uncrustify' is owned by someone else)
Attempt to detect those cases by using the environment variables that
those tools create to keep track of the original user id, and do the
ownership check using that instead.
This assumes the environment the user is running in after gaining
privileges can't be tampered with, and also adds code so that the new
behavior applies only when running as root, keeping the most common,
unprivileged case unchanged. Because of that, it will miss cases where
sudo (or an equivalent) was used to change to another unprivileged user,
or where the tool used to raise privileges didn't track the original id
in a sudo-compatible way.
For compatibility with sudo, the code assumes that uid_t is an unsigned
integer type (which is not required by the standard), since that is how
sudo's codebase uses it to generate SUDO_UID. On systems where uid_t is
signed, sudo might also be patched to keep it signed, which could
trigger an edge case and a bug (as described in the code), but that is
considered unlikely to happen, and even if it does, the code would
mostly fail safely. Therefore no attempt was made to detect or prevent
it in the code, which is something that might change in the future
based on user feedback.
Reported-by: Guy Maurel <guy.j@maurel.de>
Helped-by: SZEDER Gábor <szeder.dev@gmail.com>
Helped-by: Randall Becker <rsbecker@nexbridge.com>
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Suggested-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
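For illustration (output depends entirely on the local sudo setup), the
environment variables that a sudo-compatible tool records and that the
new check consults when the effective uid is 0:

    sudo env | grep -E '^SUDO_(UID|GID|USER)='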
The issue was originally reported after the release of v2.35.2 (and
other maint branches) for CVE-2022-24765: otherwise harmless commands
run through sudo in a repository owned by the user were being blocked.
Add a new test script with very basic support for running git
commands through sudo, so a reproduction could be implemented; it
uses only `git status` as a proxy for the reported issue.
Note that because of the way sudo interacts with the system, a much
more complete integration with the test framework will require a lot
more work, and that was therefore intentionally punted for now.
The current implementation requires the execution of a special cleanup
function which should always be kept as the last "test"; otherwise
the standard cleanup functions will fail because they can't remove
the root-owned directories that are used. This also means that if
failures are found while running, the specifics of the failure might
not be kept for further debugging, and if the test was interrupted, it
will be necessary to clean the working directory manually before
restarting by running:
$ sudo rm -rf trash\ directory.t0034-root-safe-directory/
The test file also uses at least one initial "setup" test that creates
a parallel execution directory under the "root" subdirectory, which
should be used as the top-level directory for all repositories that are
used in this test file. Unlike in all other tests, the repository provided
by the test framework should go unused.
Special care should be taken when invoking commands through sudo, since
the environment is otherwise independent from what the test framework
set up, and might have changed the values of HOME and SHELL and dropped
several environment variables relevant to your test. Indeed `git status`
was used as a proxy because it doesn't even require commits in the
repository to work and usually doesn't require much from the environment
to run, but a future patch will add calls to `git init` and that will
fail to honor the default branch name, unless that setting is NOT
provided through an environment variable (which means even a CI run
could fail that test if enabled incorrectly).
A new SUDO prerequisite is provided that does some sanity checking
to make sure the sudo command that will be used allows for passwordless
execution as root without restrictions and doesn't mess with git's
execution path. This matches what is provided by the macOS agents that
are used as part of GitHub actions and probably nowhere else.
Most of those characteristics make this test suitable mostly for
CI, but it might be executed locally if special care is taken to provide
for all of them in the local configuration, maybe by making use of the
sudo credential cache: first invoke sudo, entering your password if
needed, and then invoke the test with:
$ GIT_TEST_ALLOW_SUDO=YES ./t0034-root-safe-directory.sh
If it fails to run, then it means your local setup wouldn't work for the
test because of sudo's configuration or other system settings. Things
that might help are to comment out sudo's secure_path config, and to
make sure that the account you are using has no restrictions on the
commands it can run through sudo, just as is provided for the user in
the CI.
For example (assuming your username is marta), something similar to
the following entry in your /etc/sudoers (or equivalent) file:
marta ALL=(ALL:ALL) NOPASSWD: ALL
Reported-by: SZEDER Gábor <szeder.dev@gmail.com>
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
By default, GitHub prefills the PR description using the commit message
for single-commit PRs. This results in a duplicate commit message below
the three-dash line if the contributor does not empty out the PR
description before submitting, which adds noise for reviewers.
Add a note to that effect in MyFirstContribution.txt.
This partly addresses:
https://github.com/gitgitgadget/gitgitgadget/issues/340
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "Sending Patches via GitGitGadget" section mentions that the PR
title and description will be used as the cover letter, but does not
explain what a cover letter is or what should be included in it.
Refer readers to the new "The cover letter" section added in a previous
commit.
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The previous commit added a standalone section on the purpose of the
cover letter, drawing inspiration from the existing content of the
"Preparing Email" section.
Adjust "Preparing Email" to reference "The cover letter", to avoid
content duplication.
Also, use the imperative mood for the cover letter subject, as is done
in "The cover letter".
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
An explanation of the purpose of the cover letter is included in the
"Sending Patches with git send-email" / "Preparing Email" section but is
missing from the "Sending Patches via GitGitGadget" section.
Add a standalone section "The cover letter" under the "Getting Started:
Anatomy of a Patch Series" header to explain what the cover letter is
used for and to draft the cover letter of the 'psuh' topic used in the
tutorial.
For now we mostly copy content from the "Sending Patches with git
send-email" section but do not adjust that section, nor the GGG section,
to reference the new section. This is done in following commits.
Also, adjust the "Preparing Email" Asciidoc anchor to avoid conflicts.
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before describing how to send patches to the mailing list either with
GitGitGadget or 'git send-email', the MyFirstContribution tutorial
includes a small "Getting Ready to Share" section where the two
different methods are briefly introduced.
Use this section to also describe what a patch series looks like once
submitted, so that readers get an understanding of the end result before
diving into how to accomplish that end result.
Start by copying the "thread overview" section of a recent contribution
from the public-inbox web UI and explaining how each commit is a
separate mail, and point out the cover letter.
Subsequent commits will move the existing description of the purpose of
the cover letter from the 'git send-email' section to this "anatomy"
section.
Also, change the wording in the introductory paragraph to use
"contributions" instead of "patches", since this makes more sense when
talking about GitHub pull requests.
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 4c28e4ada0 (commit: die before asking to edit the log
message, 2010-12-20), we have been "leaking" the "author_ident" when
prepare_to_commit() fails. Instead of returning from right there,
introduce an exit status variable and jump to the clean-up label
at the end.
Instead of explicitly releasing the resource with strbuf_release(),
mark the variable with UNLEAK() at the end, together with two other
variables that are already marked as such. If this were in a
utility function that is called a number of times, we would want to
explicitly release resources that grow proportionally to the size of
the problem being solved, but cmd_commit() is like main() and there is
no point in spending extra cycles to release individual pieces of
resource at the end, just before process exit cleans everything up for
us for free anyway.
This fixes a leak demonstrated by e.g. "t3505-cherry-pick-empty.sh",
but unfortunately we cannot mark it or other affected tests as passing
now with "TEST_PASSES_SANITIZE_LEAK=true" as we'll need to fix many
other memory leaks before doing so.
Incidentally there are two tests that always pass the leak checker
with or without this change. Mark them as such.
This is based on an earlier patch by Ævar, but takes a different
approach that is more maintainable.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 522354d70f (Add Travis CI support, 2015-11-27) the CI has used
http://filehost.perforce.com/perforce/ to download binaries from
filehost.perforce.com; that code was then moved to this script in
657343a602 (travis-ci: move Travis CI code into dedicated scripts,
2017-09-10).
Let's use https instead for good measure. I don't think we need to
worry about the DNS or network between the GitHub CI and perforce.com
being MitM'd, but using https gives us extra validation of the payload
at least, and is one less thing to worry about when checking where
else we rely on non-TLS'd http connections.
Also, use the same download site at perforce.com for Linux and macOS
tarballs for consistency.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
5ed9fc3fc8 (ci: prevent `perforce` from being quarantined, 2020-02-27)
introduced this prevention for brew, but brew has been removed in a
previous commit, so reintroduce an equivalent option to avoid a possible
regression.
This doesn't affect GitHub Actions (as configured now) and is therefore
done silently to avoid any possible scary, irrelevant messages.
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Perforce's cask in brew is meant[1] to be used only by humans, so replace
its use in the CI with a scripted binary download, which is less likely
to fail, as is done for Linux.
Keep the logic together so it will be less likely to break when moved
around as in-flight code changes in this area settle, at which
point it will also be feasible to amend it to avoid some of the hardcoded
values by using variables similar to the ones Linux uses.
Along the same lines, POSIX sh syntax is used instead of the similar one
used for Linux, in preparation for an unrelated future change that might
change the shell currently configured for it.
This change reintroduces the risk, fixed with 5ed9fc3fc8 (ci:
prevent `perforce` from being quarantined, 2020-02-27), that the installed
binaries might not work because of being quarantined; fixing that
now was also punted for simplicity, and because the affected cloud provider
is scheduled to be retired by an in-flight change, but it should be
addressed if that other change is not integrated after all.
The discussion on the need to keep two radically different versions of
the binaries to be tested with Linux vs macOS, or how to upgrade to
newer versions now that brew won't do that automatically for us, has
been punted for now as well. On that note, the now-obsolete comment
about it in lib.sh was originally updated by this change, but that
created conflicts as the comment is moved around by other in-flight
changes, so it will be addressed independently as well.
[1] https://github.com/Homebrew/homebrew-cask/pull/122347#discussion_r856026584
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In preparation for a future change that will make perforce installation
optional on macOS, make sure that the check for it is done without
triggering scary-looking errors, and add a user-friendly message instead.
All other existing uses of 'type <cmd>' in our shell scripts that
check the availability of a command <cmd> send both standard output
and error stream to /dev/null to squelch "<cmd> not found" diagnostic
output, but this script left the standard error stream shown.
Redirect it just like everybody else to squelch this error message that
we fully expect to see.
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
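A minimal sketch of the pattern described above (the probed command is
illustrative):

    # probe for a command without leaking "not found" noise to the user
    if type p4d >/dev/null 2>&1
    then
        echo "p4d is available"
    else
        echo "p4d was not found; skipping Perforce setup"
    fi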
Fix code added in 8d84097f96 (commit-graph: expire commit-graph
files, 2019-06-18) to check the return value of the stat() system
call. Not doing so caused us to use uninitialized memory in the "Bloom
generation is limited by --max-new-filters" test in
t4216-log-bloom.sh:
+ rm -f trace.event
+ pwd
+ GIT_TRACE2_EVENT=[...]/t/trash directory.t4216-log-bloom/limits/trace.event git commit-graph write --reachable --split=replace --changed-paths --max-new-filters=2
==24835== Syscall param utimensat(times[0].tv_sec) points to uninitialised byte(s)
==24835== at 0x499E65A: __utimensat64_helper (utimensat.c:34)
==24835== by 0x4999142: utime (utime.c:36)
==24835== by 0x552BE0: mark_commit_graphs (commit-graph.c:2213)
==24835== by 0x550822: write_commit_graph (commit-graph.c:2424)
==24835== by 0x54E3A0: write_commit_graph_reachable (commit-graph.c:1681)
==24835== by 0x4374BB: graph_write (commit-graph.c:269)
==24835== by 0x436F7D: cmd_commit_graph (commit-graph.c:326)
==24835== by 0x407B9A: run_builtin (git.c:465)
==24835== by 0x406651: handle_builtin (git.c:719)
==24835== by 0x407575: run_argv (git.c:786)
==24835== by 0x406410: cmd_main (git.c:917)
==24835== by 0x511F09: main (common-main.c:56)
==24835== Address 0x1ffeffde70 is on thread 1's stack
==24835== in frame #1, created by utime (utime.c:25)
==24835== Uninitialised value was created by a stack allocation
==24835== at 0x552B50: mark_commit_graphs (commit-graph.c:2201)
==24835==
[...]
error: last command exited with $?=126
not ok 137 - Bloom generation is limited by --max-new-filters
This would happen as we stat'd the non-existing
".git/objects/info/commit-graph" file. Let's fix mark_commit_graphs()
to check the stat()'s return value, and while we're at it fix another
case added in the same commit to do the same.
The caller in expire_commit_graphs() would have been less likely to
run into this, as it's operating on files it just got from readdir(),
but it could still happen due to a race with e.g. a concurrent "rm
-rf" of the commit-graph files.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix a regression in my 3b6a8db3b0 (object-file.c: use "enum" return
type for unpack_loose_header(), 2021-10-01) revealed both by running
the test suite with --valgrind, and with the amended "git fsck" test.
In practice this regression in v2.34.0 caused us to claim that we
couldn't parse the header, as opposed to not being able to unpack
it. Before the change in the C code the test_cmp added here would emit:
-error: unable to unpack header of ./objects/e6/9de29bb2d1d6434b8b29ae775ad8c2e48c5391
+error: unable to parse header of ./objects/e6/9de29bb2d1d6434b8b29ae775ad8c2e48c5391
I.e. we'd proceed to call parse_loose_header() on the uninitialized
"hdr" value, and it would have been very unlikely for that
uninitialized memory to be a valid git object.
The other callers of unpack_loose_header() were already checking the
enum values exhaustively. See 3b6a8db3b0 and
5848fb11ac (object-file.c: return ULHR_TOO_LONG on "header too long",
2021-10-01).
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
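As a rough sketch only (the exact corruption and wording used by the
amended test may differ), one way to provoke the message on a corrupted
loose object:

    blob=$(git hash-object -w --stdin </dev/null)   # the empty blob
    obj=.git/objects/$(echo "$blob" | sed 's|^..|&/|')
    chmod +w "$obj" &&
    printf 'garbage' >"$obj"        # not a zlib stream any more
    git fsck 2>&1 | grep 'unable to unpack header'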
Skip a test added in f1e3df3169 (t: increase test coverage of
signature verification output, 2020-03-04) when running under
valgrind. Due to valgrind's interception of mkstemp() this test will
fail with:
+ pwd
+ TMPDIR=[...]/t/trash directory.t4202-log/bogus git log --show-signature -n1 plain-fail
==7696== VG_(mkstemp): failed to create temp file: [...]/t/trash directory.t4202-log/bogus/valgrind_proc_7696_cmdline_d545ddcf
[... 10 more similar lines omitted ..]
valgrind: Startup or configuration error:
valgrind: Can't create client cmdline file in [...]/t/trash directory.t4202-log/bogus/valgrind_proc_7696_cmdline_6e542d1d
valgrind: Unable to start up properly. Giving up.
error: last command exited with $?=1
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix a regression in b7d11a0f5d (tests: exercise the RUNTIME_PREFIX
feature, 2021-07-24) where tests that want to set up and test a "git"
wrapper in $PATH conflicted with the t/bin/valgrind wrapper(s) doing
the same.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the "--add-file" option is used to add the contents from an
untracked file to the archive, the permission mode bits for these
files are sent to the archive-backend specific "write_entry()"
method as-is. We normalize the mode bits for tracked files way
before we pass them to the write_entry() method; we should do the
same here.
This is not strictly needed for "tar" archive-backend, as it has its
own code to further clean them up, but "zip" archive-backend is not
so well prepared.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
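For illustration (file names are made up), adding an untracked file
whose mode bits now get normalized the same way as tracked files:

    printf 'extra\n' >annotations.txt
    chmod 600 annotations.txt
    git archive --format=zip --add-file=annotations.txt -o snapshot.zip HEAD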
Fix a bug in "git pull" where `submodule.recurse` is preferred over
`fetch.recurseSubmodules` when performing a fetch
(Documentation/config/fetch.txt says that `fetch.recurseSubmodules`
should be preferred). Do this by passing the value of the
"--recurse-submodules" CLI option to the underlying fetch, instead of
passing a value that combines the CLI option and config variables.
In other words, this bug occurred because builtin/pull.c is conflating
two similar-sounding, but different concepts:
- Whether "git pull" itself should care about submodules e.g. whether it
should update the submodule worktrees after performing a merge.
- The value of "--recurse-submodules" to pass to the underlying "git
fetch".
Thus, when `submodule.recurse` is set, the underlying "git fetch" gets
invoked with "--recurse-submodules[=value]", overriding the value of
`fetch.recurseSubmodules`.
An alternative (and more obvious) approach to fix the bug would be to
teach "git pull" to understand `fetch.recurseSubmodules`, but the
proposed solution works better because:
- We don't maintain two identical config-parsing implementations in "git
pull" and "git fetch".
- It works better with other commands invoked by "git pull" e.g. "git
merge" won't accidentally respect `fetch.recurseSubmodules`.
Reported-by: Huang Zou <huang.zou@schrodinger.com>
Helped-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
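A minimal sketch of the configuration interplay described above:

    git config submodule.recurse true
    git config fetch.recurseSubmodules false
    # with the fix, the fetch spawned by "git pull" honors
    # fetch.recurseSubmodules and does not recurse into submodules
    git pull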
The commit summary shown after making a commit is matched to what
is given in "git status" not to use the break-rewrite heuristics.
* rs/commit-summary-wo-break-rewrite:
commit, sequencer: turn off break_opt for commit summary
Avoid problems from interaction between malloc_check and address
sanitizer.
* pw/test-malloc-with-sanitize-address:
tests: make SANITIZE=address imply TEST_NO_MALLOC_CHECK
"git rebase --keep-base <upstream> <branch-to-rebase>" computed the
commit to rebase onto incorrectly, which has been corrected.
* ah/rebase-keep-base-fix:
rebase: use correct base for --keep-base when a branch is given
This allows seeing the current intermediate status without adding a new
good or bad commit:
$ git bisect log | tail -1
# status: waiting for bad commit, 1 good commit known
Signed-off-by: Chris Down <chris@chrisdown.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 73c6de06af ("bisect: don't use invalid oid as rev when
starting") changes the behaviour of `git bisect` to consider invalid
oids as pathspecs again, as in the old shell implementation.
While that behaviour may be desirable, it can also cause confusion. For
example, while bisecting in a particular repo I encountered this:
$ git bisect start d93ff48803f0 v6.3
$
...which led to me sitting for a few moments, wondering why there's no
printout stating the first rev to check.
It turns out that the tag was actually "6.3", not "v6.3", and thus the
bisect was still silently started with only a bad rev, because
d93ff48803f0 was a valid oid and "v6.3" was silently considered to be a
pathspec.
While this behaviour may be desirable, it can be confusing, especially
with different repo conventions either using or not using "v" before
release names, or when a branch name or tag is simply misspelled on the
command line.
In order to avoid situations like this, make it more clear what we're
waiting for:
$ git bisect start d93ff48803f0 v6.3
status: waiting for good commit(s), bad commit known
We already have good output once the bisect process has begun in
earnest, so we don't need to do anything more there.
Signed-off-by: Chris Down <chris@chrisdown.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
6cd80496e9 ("gitk: Resize panes correctly when reducing window size",
2020-10-03) introduces a mechanism to record previously-set sash
positions to make sure that correct values are used while computing
resize proportions. However, if we are not using ttk, then sash
represents only the x coordinate and the recorded sash (`oldsash`) only
includes the x coordinate. When we need to access the y coordinate via
the recorded sash position, we generate the following Application Error
popup:
Error: expected integer but got ""
expected integer but got ""
expected integer but got ""
while executing
"$win sash place 0 $sash0 [lindex $s0 1]"
(procedure "resizeclistpanes" line 38)
invoked from within
"resizeclistpanes .tf.histframe.pwclist 2818"
(command bound to event)
To fix this, if we are not using ttk, we append the sash positions with
the y coordinates before recording them to match the use_ttk case.
Signed-off-by: Halil Sen <halil.sen@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
The progress meter of "git blame" was showing incorrect numbers
when processing only parts of the file.
* ea/progress-partial-blame:
blame: report correct number of lines in progress when using ranges
Reimplement "vimdiff[123]" mergetool drivers with a more generic
layout mechanism.
* fr/vimdiff-layout:
mergetools: add description to all diff/merge tools
vimdiff: add tool documentation
vimdiff: integrate layout tests in the unit tests framework ('t' folder)
vimdiff: new implementation with layout support
Various cleanups to "git p4".
* jh/p4-various-fixups: (22 commits)
git-p4: sort imports
git-p4: seperate multiple statements onto seperate lines
git-p4: move inline comments to line above
git-p4: only seperate code blocks by a single empty line
git-p4: compare to singletons with "is" and "is not"
git-p4: normalize indentation of lines in conditionals
git-p4: ensure there is a single space around all operators
git-p4: ensure every comment has a single #
git-p4: remove spaces between dictionary keys and colons
git-p4: remove redundant backslash-continuations inside brackets
git-p4: remove extraneous spaces before function arguments
git-p4: place a single space after every comma
git-p4: removed brackets when assigning multiple return values
git-p4: remove spaces around default arguments
git-p4: remove padding from lists, tuples and function arguments
git-p4: sort and de-duplcate pylint disable list
git-p4: remove commented code
git-p4: convert descriptive class and function comments into docstrings
git-p4: improve consistency of docstring formatting
git-p4: indent with 4-spaces
...
The performance of the "untracked cache" feature has been improved
when "--untracked-files=<mode>" and "status.showUntrackedFiles"
are combined.
* tk/untracked-cache-with-uall:
untracked-cache: support '--untracked-files=all' if configured
untracked-cache: test untracked-cache-bypassing behavior with -uall
When unpacking trees, set the default sparsity of the resultant index based
on repo settings and 'is_sparse_index_allowed()'.
Normally, when executing 'unpack_trees', the output index is marked sparse
when (and only when) it unpacks a sparse directory. However, an index may be
"sparse" even if it contains no sparse directories - when all files fall
inside the sparse-checkout definition or otherwise have SKIP_WORKTREE
disabled. Therefore, the output index may be marked "full" even when it is
"sparse", resulting in unnecessary 'ensure_full_index' calls when writing to
disk. Avoid this by setting the "default" index sparsity to match what is
expected for the repository.
As a consequence of this fix, the (non-merge) 'read-tree' performed when
applying a stash with untracked files no longer expands the index. Update
the corresponding test in 't1092'.
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Update 'stash' to use 'merge_ort_nonrecursive()' to apply a stash to the
current working tree. When 'git stash apply' was converted from its shell
script implementation to a builtin in 8a0fc8d19d (stash: convert apply to
builtin, 2019-02-25), 'merge_recursive_generic()' was used to merge a stash
into the working tree as part of 'git stash (apply|pop)'. However, with the
single merge base used in 'do_apply_stash()', the commit wrapping done by
'merge_recursive_generic()' is not only unnecessary, but misleading (the
*real* merge base is labeled "constructed merge base"). Therefore, a
non-recursive merge of the working tree, stashed tree, and stash base tree
is more appropriate.
There are two options for a non-recursive merge-then-update-worktree
function: 'merge_trees()' and 'merge_ort_nonrecursive()'. Use
'merge_ort_nonrecursive()' to align with the default merge strategy used by
'git merge' (6a5fb96672 (Change default merge backend from recursive to ort,
2021-08-04)) and, because merge-ort does not operate in-place on the index,
avoid unnecessary index expansion. Update tests in 't1092' verifying index
expansion for 'git stash' accordingly.
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the index read in 'do_read_index()' does not exist on-disk, mark the
index "sparse" if the executing command does not require a full index and
sparse index is otherwise enabled.
Some commands (such as 'git stash -u') implicitly create a new index (when
the 'GIT_INDEX_FILE' variable points to a non-existent file) and perform
some operation on it. However, when this index is created, it isn't created
with the same sparsity settings as the repo index. As a result, while these
indexes may be sparse during the operation, they are always expanded before
being written to disk. We can avoid that expansion by defaulting the index
to "sparse", in which case it will only be expanded if the full index is
needed.
Note that the function 'set_new_index_sparsity()' is created despite having
only a single caller because additional callers will be added in a
subsequent patch.
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Expose 'is_sparse_index_allowed()' publicly so that it may be used by
callers outside of 'sparse-index.c'. While no such callers exist yet, it
will be used in a subsequent commit.
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Enable sparse index in 'git stash' by disabling
'command_requires_full_index'.
With sparse index enabled, some subcommands of 'stash' work without
expanding the index, e.g., 'git stash', 'git stash list', 'git stash drop',
etc. Others ensure the index is expanded either directly (as in the case of
'git stash [pop|apply]', where the call to 'merge_recursive_generic()' in
'do_apply_stash()' triggers the expansion), or in a command called
internally by stash (e.g., 'git update-index' in 'git stash -u'). So, in
addition to enabling sparse index, add tests to 't1092' demonstrating which
variants of 'git stash' expand the index, and which do not.
Finally, add the option to skip writing 'untracked.txt' in
'ensure_not_expanded', and use that option to successfully apply stashed
untracked files without a conflict in 'untracked.txt'.
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
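A minimal sketch of exercising 'git stash' with the sparse index
enabled ('deep/dir' and the modified file are placeholders):

    git sparse-checkout init --cone --sparse-index
    git sparse-checkout set deep/dir
    echo change >>deep/dir/file.txt
    git stash -u
    git stash pop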
Add tests verifying expected 'git stash' behavior in
't1092-sparse-checkout-compatibility'. These cases establish the expected
behavior of 'git stash' in a sparse-checkout and verify consistency both
with and without a sparse index. Although no sparse index compatibility has
been integrated into 'git stash' yet, the tests are all 'expect_success' -
we don't want the cone-mode sparse-checkout behavior to change depending on
whether it is using a sparse index or not. Therefore, we expect these tests
to continue passing once sparse index is integrated with 'git stash'.
Additionally, add performance test cases for 'git stash' both with and
without untracked files. Note that, unlike the other tests in
'p2000-sparse-operations.sh', the tests added for 'stash' are combination
operations. This is done to ensure the stash/unstash is not blocked by the
modification of '$SPARSE_CONE/a' performed as part of 'test_perf_on_all'.
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
`git remote -v` (`--verbose`) lists the names of remotes along with
their URLs. It would be beneficial for users to also see the filter
types for promisor remotes. Something like this -
origin remote-url (fetch) [blob:none]
origin remote-url (push)
Teach `git remote -v` to also show the filters for promisor remotes.
Closes: https://github.com/gitgitgadget/git/issues/1211
Signed-off-by: Abhradeep Chakraborty <chakrabortyabhradeep79@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
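For illustration (these settings are normally created by 'git clone
--filter=...' rather than by hand):

    git config remote.origin.promisor true
    git config remote.origin.partialclonefilter blob:none
    git remote -v     # the fetch line now carries "[blob:none]"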
`detect-compiler` has accumulated a few compiler-dependent workarounds
lately for the more and more ubiquitous gcc12. This is intended to make
CI set-ups work across tool-chain updates, but also to help those
developers who build with `DEVELOPER=1`.
Alas, `detect-compiler` uses the locale-dependent output of `$(CC) -v`
to parse for the version string, which fails unless it literally
contains ` version`.
Use `LANG=C $(CC) -v` instead to grep for stable output.
Signed-off-by: Michael J Gruber <git@grubix.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
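A minimal sketch of the idea, using 'cc' as a stand-in for $(CC):

    # with a non-C locale the version line may be translated; forcing
    # LANG=C keeps the " version " marker greppable
    LANG=C cc -v 2>&1 | grep ' version '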
Get rid of a bogus and over-eager coccinelle rule.
source: <xmqq1qxd6e4x.fsf@gitster.g>
* jc/cocci-xstrdup-or-null-fix:
cocci: drop bogus xstrdup_or_null() rule
"git format-patch <args> -- <pathspec>" lost the pathspec when
showing the second and subsequent commits, which has been
corrected.
source: <c36896a1-6247-123b-4fa3-b7eb24af1897@web.de>
* rs/format-patch-pathspec-fix:
2.36 format-patch regression fix
"git fast-export -- <pathspec>" lost the pathspec when showing the
second and subsequent commits, which has been corrected.
source: <2c988c7b-0efe-4222-4a43-8124fe1a9da6@web.de>
* rs/fast-export-pathspec-fix:
2.36 fast-export regression fix
"git show <commit1> <commit2>... -- <pathspec>" lost the pathspec
when showing the second and subsequent commits, which has been
corrected.
source: <xmqqo80j87g0.fsf_-_@gitster.g>
* jc/show-pathspec-fix:
2.36 show regression fix
Regression fix for 2.36 where "git name-rev" started to sometimes
reference strings after they are freed.
This fixes a regression in 2.36 and is slated to go to 2.36.1
source: <340c8810-d912-7b18-d46e-a9d43f20216a@web.de>
* rs/name-rev-fix-free-after-use:
Revert "name-rev: release unused name strings"
"diff-tree --stdin" has been broken for about a year, but 2.36
release broke it even worse by breaking running the command with
<pathspec>, which in turn broke "gitk" and got noticed. This has
been corrected by aligning its behaviour to that of "log".
This fixes a regression in 2.36 and is slated to go to 2.36.1
source: <xmqq7d7bsu2n.fsf@gitster.g>
* jc/diff-tree-stdin-fix:
2.36 gitk/diff-tree --stdin regression fix
"git submodule update" without pathspec should silently skip an
uninitialized submodule, but it started to become noisy by mistake.
This fixes a regression in 2.36 and is slated to go to 2.36.1
source: <pull.1258.v2.git.git.1650890741430.gitgitgadget@gmail.com>
* gc/submodule-update-part2:
submodule--helper: fix initialization of warn_if_uninitialized
The example was not in valid JSON format due to a duplicate key "sid".
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We append an ellipsis and enclose it in backticks to indicate that it is
a function elsewhere; let's also use that here.
While at it, ensure the same for `waitpid()`.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-p4 is designed to run correctly under python2.7 and python3, but
its functional behavior wrt importing user-entered text differs across
these environments:
Under python2, git-p4 "naively" writes the Perforce bytestream into git
metadata (and does not set an "encoding" header on the commits); this
means that any non-utf-8 byte sequences end up creating invalidly-encoded
commit metadata in git.
Under python3, git-p4 attempts to decode the Perforce bytestream as utf-8
data, and fails badly (with an unhelpful error) when non-utf-8 data is
encountered.
Perforce clients (especially p4v) encourage user entry of changelist
descriptions (and user full names) in OS-local encoding, and store the
resulting bytestream to the server unmodified - such that different
clients can end up creating mutually-unintelligible messages. The most
common inconsistency, in many Perforce environments, is likely to be utf-8
(typical in linux) vs cp-1252 (typical in windows).
Make the changelist-description- and user-fullname-handling code
python-runtime-agnostic, introducing three "strategies" selectable via
config:
- 'passthrough', behaving as previously under python2,
- 'strict', behaving as previously under python3, and
- 'fallback', favoring utf-8 but supporting a secondary encoding when
utf-8 decoding fails, and finally escaping high-range bytes if the
decoding with the secondary encoding also fails.
Keep the python2 default behavior as-is (the 'passthrough' strategy), but switch
the python3 default strategy to 'fallback' with default fallback encoding
'cp1252'.
Also include tests exercising these encoding strategies, documentation for
the new config, and improve the user-facing error messages when decoding
does fail.
Signed-off-by: Tao Klerks <tao@klerks.biz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
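As a hedged illustration only, assuming the configuration keys
introduced by this series are named as below (values shown are the
python3 defaults described above):

    git config git-p4.metadataDecodingStrategy fallback
    git config git-p4.metadataFallbackEncoding cp1252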
The path the "git multi-pack-index" command takes from the end user
was compared with the path internally prepared by the tool without first
normalizing them, which led to duplicated paths not being noticed;
this has been corrected.
* ds/midx-normalize-pathname-before-comparison:
cache: use const char * for get_object_directory()
multi-pack-index: use --object-dir real path
midx: use real paths in lookup_multi_pack_index()
"git clone --origin X" leaked piece of memory that held value read
from the clone.defaultRemoteName configuration variable, which has
been plugged.
* jc/clone-remote-name-leak-fix:
clone: plug a miniscule leak
"git format-patch <args> -- <pathspec>" lost the pathspec when
showing the second and subsequent commits, which has been
corrected.
* rs/format-patch-pathspec-fix:
2.36 format-patch regression fix
"git fast-export -- <pathspec>" lost the pathspec when showing the
second and subsequent commits, which has been corrected.
* rs/fast-export-pathspec-fix:
2.36 fast-export regression fix
"git show <commit1> <commit2>... -- <pathspec>" lost the pathspec
when showing the second and subsequent commits, which has been
corrected.
* jc/show-pathspec-fix:
2.36 show regression fix
Add a coccinelle semantic patch necessary to reinforce the git coding style
guideline:
"Do not explicitly compute an integral value with constant 0 or '\ 0', or a
pointer value with constant NULL."
Signed-off-by: Elia Pinto <gitter.spiros@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
13092a91 (cocci: refactor common patterns to use xstrdup_or_null(),
2016-10-12) introduced a rule to rewrite this conditional call to
xstrdup(E) and an assignment to variable V:
- if (E)
- V = xstrdup(E);
into an unconditional call to xstrdup_or_null(E) and an assignment
to variable V:
+ V = xstrdup_or_null(E);
which is utterly bogus. The original code may already have an
acceptable value in V and the conditional assignment may be to
improve the value already in V with a copy of a better value E when
(and only when) E is not NULL.
The rewritten construct unconditionally discards the existing value
of V and replaces it with a copy of E, even when E is NULL, which
changes the meaning of the program.
By the way, if it were
-if (E && !V)
- V = xstrdup(E);
+V = xstrdup_or_null(E);
it would probably have been correct. But there is no existing code
that would have been improved by such a rule, so let's just remove
the bogus one without replacing with the more specific one.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The remote_name variable is first assigned a copy of the value of
the "clone.defaultremotename" configuration variable and then by the
value of the "--origin" command line option. The former is prepared
to see multiple instances of the configuration variable by freeing
the current value of the variable before a copy of the newly
discovered value gets assigned to it. The latter however blindly
assigned a copy of the new value to the variable, thereby leaking
the value read from the configuration variable.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
e900d494dc (diff: add an API for deferred freeing, 2021-02-11) added a
way to allow reusing diffopts: the no_free bit. 244c27242f (diff.[ch]:
have diff_free() call clear_pathspec(opts.pathspec), 2022-02-16) made
that mechanism mandatory.
git fast-export doesn't set no_free, so path limiting stopped working
after the first commit. Set the flag and add a basic test to make sure
only changes to the specified files are exported.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
e900d494dc (diff: add an API for deferred freeing, 2021-02-11) added a
way to allow reusing diffopts: the no_free bit. 244c27242f (diff.[ch]:
have diff_free() call clear_pathspec(opts.pathspec), 2022-02-16) made
that mechanism mandatory.
git format-patch only sets no_free when --output is given, causing it to
forget pathspecs after the first commit. Set no_free unconditionally
instead.
The existing test was unable to detect this breakage because it checks
stderr for the absence of a certain string, but format-patch writes to
stdout. Also the test was not checking the case of one commit modifying
multiple files and a pathspec limiting the diff. Replace it with a more
thorough one.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This only surfaced as a regression after 2.36 release, but the
breakage was already there with us for at least a year.
e900d494 (diff: add an API for deferred freeing, 2021-02-11)
introduced a mechanism to delay freeing resources held in
diff_options struct that need to be kept as long as the struct will
be reused to compute diff. "git log -p" was taught to utilize the
mechanism but it was done with an incorrect assumption that the
underlying helper function, cmd_log_walk(), is called only once,
and it is OK to do the freeing at the end of it.
Alas, for "git show A B", the function is called once for each
commit given, so it is not OK to free the resources until we finish
calling it for all the commits given from the command line.
During 2.36 release cycle, we started clearing the <pathspec> as
part of this freeing, which made the bug a lot more visible.
Fix this breakage by tweaking how cmd_log_walk() frees the resources
at the end, and by using a variant of it that does not immediately free
the resources when showing each commit object given on the command line
in "git show".
Protect the fix with a few new tests.
Reported-by: Daniel Li <dan@danielyli.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In some "simple" centralized workflows, users expect remote tracking
branch names to match local branch names. "git push" pushes to the
remote version/instance of the branch, and "git pull" pulls any changes
to the remote branch (changes made by the same user in another place, or
by other users).
This expectation is supported by the push.default default option "simple"
which refuses a default push for a mismatching tracking branch name, and
by the new branch.autosetupmerge option, "simple", which only sets up
remote tracking for same-name remote branches.
When a new branch has been created by the user and has not yet been
pushed (and push.default is not set to "current"), the user is prompted
with a "The current branch %s has no upstream branch" error, and
instructions on how to push and add tracking.
This error is helpful in that following the advice once per branch
"resolves" the issue for that branch forever, but inconvenient in that
for the "simple" centralized workflow, this is always the right thing to
do, so it would be better to just do it.
Support this workflow with a new config setting, push.autoSetupRemote,
which will cause a default push, when there is no remote tracking branch
configured, to push to the same-name branch on the remote and --set-upstream.
Also add a hint offering this new option when the "The current branch %s
has no upstream branch" error is encountered, and add corresponding tests.
Signed-off-by: Tao Klerks <tao@klerks.biz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
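A minimal usage sketch (branch name is illustrative):

    git config --global push.autoSetupRemote true
    git switch -c feature1
    # the first push now goes to the same-name branch on the remote and
    # records it as the upstream, with no extra arguments
    git push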
With "push.default=current" configured, a simple "git push" will push to
the same-name branch on the current branch's branch.<name>.pushRemote, or
remote.pushDefault, or origin. If none of these are defined, the push will
fail with error "fatal: No configured push destination".
The same "default to origin if no config" behavior applies with
"push.default=matching".
Other commands use "origin" as a default when there are multiple options,
but default to the single remote when there is only one - for example,
"git checkout <something>". This "assume the single remote if there is
only one" behavior is more friendly/useful than a defaulting behavior
that only uses the name "origin" no matter what.
Update "git push" to also default to the single remote (and finally fall
back to "origin" as default if there are several), for
"push.default=current" and for other current and future remote-defaulting
push behaviors.
This change also modifies the behavior of ls-remote in a consistent way,
so defaulting not only supplies 'origin', but any single configured remote
also.
Document the change in behavior, correct incorrect assumptions in related
tests, and add test cases reflecting this new single-remote-defaulting
behavior.
Signed-off-by: Tao Klerks <tao@klerks.biz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With the default push.default option, "simple", beginners are
protected from accidentally pushing to the "wrong" branch in
centralized workflows: if the remote tracking branch they would push
to does not have the same name as the local branch, and they try to do
a "default push", they get an error and explanation with options.
There is a particular centralized workflow where this often happens:
a user branches to a new local topic branch from an existing
remote branch, eg with "checkout -b feature1 origin/master". With
the default branch.autosetupmerge configuration (value "true"), git
will automatically add origin/master as the upstream tracking branch.
When the user pushes with a default "git push", with the intention of
pushing their (new) topic branch to the remote, they get an error, and
(amongst other things) a suggestion to run "git push origin HEAD".
If they follow this suggestion the push succeeds, but on subsequent
default pushes they continue to get an error - so eventually they
figure out to add "-u" to change the tracking branch, or they spelunk
the push.default config doc as proposed and set it to "current", or
some GUI tooling does one or the other of these things for them.
When one of their coworkers later works on the same topic branch,
they don't get any of that "weirdness". They just "git checkout
feature1" and everything works exactly as they expect, with the shared
remote branch set up as remote tracking branch, and push and pull
working out of the box.
The "stable state" for this way of working is that local branches have
the same-name remote tracking branch (origin/feature1 in this
example), and multiple people can work on that remote feature branch
at the same time, trusting "git pull" to merge or rebase as required
for them to be able to push their interim changes to that same feature
branch on that same remote.
(merging from the upstream "master" branch, and merging back to it,
are separate more involved processes in this flow).
There is a problem in this flow/way of working, however, which is that
the first user, when they first branched from origin/master, ended up
with the "wrong" remote tracking branch (different from the stable
state). For a while, before they pushed (and maybe longer, if they
don't use -u/--set-upstream), their "git pull" wasn't getting other
users' changes to the feature branch - it was getting any changes from
the remote "master" branch instead (a completely different class of
changes!)
An experienced git user might say "well yeah, that's what it means to
have the remote tracking branch set to origin/master!" - but the
original user above didn't *ask* to have the remote master branch
added as remote tracking branch - that just happened automatically
when they branched their feature branch. They didn't necessarily even
notice or understand the meaning of the "set up to track 'origin/master'"
message when they created the branch - especially if they are using a
GUI.
Looking at how to fix this, you might think "OK, so disable auto setup
of remote tracking - set branch.autosetupmerge to false" - but that
will inconvenience the *second* user in this story - the one who just
wanted to start working on the topic branch. The first and second
users swap roles at different points in time of course - they should
both have a sane configuration that does the right thing in both
situations.
Make this "branches have the same name locally as on the remote"
workflow less painful / more obvious by introducing a new
branch.autosetupmerge option called "simple", to match the same-name
"push.default" option that makes similar assumptions.
This new option automatically sets up tracking in a *subset* of the
current default situations: when the original ref is a remote tracking
branch *and* has the same branch name on the remote (as the new local
branch name).
Update the error displayed when the 'push.default=simple' configuration
rejects a mismatching-upstream-name default push, to offer this new
branch.autosetupmerge option that will prevent this class of error.
With this new configuration, in the example situation above, the first
user does *not* get origin/master set up as the tracking branch for
the new local branch. If they "git pull" in their new local-only
branch, they get an error explaining there is no upstream branch -
which makes sense and is helpful. If they "git push", they get an
error explaining how to push *and* suggesting they specify
--set-upstream - which is exactly the right thing to do for them.
This new option is likely not appropriate for users intentionally
implementing a "triangular workflow" with a shared upstream tracking
branch, that they "git pull" in and a "private" feature branch that
they push/force-push to just for remote safe-keeping until they are
ready to push up to the shared branch explicitly/separately. Such
users are likely to prefer keeping the current default
branch.autosetupmerge=true behavior, and changing their push.default to
"current".
Also extend the existing branch tests with three new cases testing
this option - the obvious matching-name and non-matching-name cases,
and also a non-matching-ref-type case. The matching-name case needs to
temporarily create an independent repo to fetch from, as the general
strategy of using the local repo as the remote in these tests
precludes locally branching with the same name as in the "remote".
Signed-off-by: Tao Klerks <tao@klerks.biz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Regression fix for 2.36 where "git name-rev" started to sometimes
reference strings after they are freed.
* rs/name-rev-fix-free-after-use:
Revert "name-rev: release unused name strings"
"diff-tree --stdin" has been broken for about a year, but 2.36
release broke it even worse by breaking running the command with
<pathspec>, which in turn broke "gitk" and got noticed. This has
been corrected by aligning its behaviour to that of "log".
* jc/diff-tree-stdin-fix:
2.36 gitk/diff-tree --stdin regression fix
"git submodule update" without pathspec should silently skip an
uninitialized submodule, but it started to become noisy by mistake.
* gc/submodule-update-part2:
submodule--helper: fix initialization of warn_if_uninitialized
The description of 'safe.directory' mentions that it's respected in
the system and global configs, and ignored in the repository config
and on the command line, but it doesn't mention whether it's respected
or ignored when specified via environment variables (nor does the
commit message adding 'safe.directory' [1]).
Clarify that 'safe.directory' is ignored when specified in the
environment, and add tests to make sure that it remains so.
[1] 8959555cee (setup_git_directory(): add an owner check for the
top-level directory, 2022-03-02)
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
According to the documentation 'safe.directory' "is only respected
when specified in a system or global config, not when it is specified
in a repository config or via the command line option -c
safe.directory=<path>".
Add tests to check that 'safe.directory' in the repository config or
on the command line is indeed ignored.
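As a sketch of what such a test asserts (the path is illustrative):
    # inside a repository owned by another user:
    git -c safe.directory=/path/to/that/repo status
    # still fails with "fatal: unsafe repository ..."; the -c value
    # is ignored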
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 8959555cee (setup_git_directory(): add an owner check for the
top-level directory, 2022-03-02) when git finds itself in a repository
owned by someone else, it aborts with a "fatal: unsafe repository
(<repo path>)" error message and an advice about how to set the
'safe.directory' config variable to mark that repository as safe.
't0033-safe-directory.sh' contains tests that check that this feature
and handling said config work as intended. To ensure that git dies
for the right reason, several of those tests check that its standard
error contains the name of that config variable, but:
- it only appears in the advice part, not in the actual error
message.
- it is interpreted as a regexp by 'grep', so, because of the dot,
it matches the name of the test script and the path of the trash
directory as well. Consequently, these tests could be fooled by
any error message that would happen to include the path of the
test repository.
Tighten these checks to look for "unsafe repository" instead.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is not obvious that the 'git rev-parse' builtin would use the sparse
index, but it is possible to parse paths out of the index using the
":<path>" syntax. The 'git rev-parse' output is only the OID of the
object found at that location, but otherwise behaves similarly to 'git
show :<path>'. This includes the failure conditions on directories and
the error messages depending on whether a path is in the worktree or
not.
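For example (the path is illustrative), the ":<path>" syntax behaves
like this:
    git rev-parse :Documentation/Makefile  # prints the blob OID from the index
    git show :Documentation/Makefile       # prints the blob contents instead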
The only code change required is to change the
command_requires_full_index setting in builtin/rev-parse.c, and we can
re-use many existing 'git show' tests for the rev-parse case.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When running 'git show :<path>' where '<path>' is a directory, then
there is a subtle difference between a full checkout and a sparse
checkout. The error message from diagnose_invalid_index_path() reports
whether the path is on disk or not. The full checkout will have the
directory on disk, but the path will not be in the index. The sparse
checkout could have the directory not exist, specifically when that
directory is outside of the sparse-checkout cone.
In the case of a sparse index, we have yet another state: the path can
be a sparse directory in the index. In this case, the error message from
diagnose_invalid_index_path() would erroneously say "path '<path>' is in
the index, but not at stage 0", which is false.
Add special casing around sparse directory entries so we get to the
correct error message. This requires two checks in order to get parity
with the normal sparse-checkout case.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The get_oid_with_context_1() method is used when parsing revision
arguments. One particular case is to take a ":<path>" string and search
the index for the given path.
In the case of a sparse index, this might find a sparse directory entry,
in which case the contained object is a tree. In the case of a full
index, this search within the index would fail.
In order to maintain identical return state as in a full index, inspect
the discovered cache entry to see if it is a sparse directory and reject
it. This requires being careful around the only_to_die option to be sure
we die only at the correct time.
This changes the behavior of 'git show :<sparse-dir>', but does not
bring it entirely into alignment with a full index case. It specifically
hits the wrong error message within diagnose_invalid_index_path(). That
error message will be corrected in a future change.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The 'git show' command can take an input to request the state of an
object in the index. This can lead to parsing the index in order to load
a specific file entry. Without the change presented here, a sparse index
would expand to a full one, taking much longer than usual to access a
simple file.
There is one behavioral change that happens here, though: we now can
find a sparse directory entry within the index! Commands that previously
failed because we could not find an entry in the worktree or index now
succeed because we _do_ find an entry in the index.
There might be more work to do to make other situations succeed when
looking for an indexed tree, perhaps by looking at or updating the
cache-tree extension as needed. These situations include having a full
index or asking for a directory that is within the sparse-checkout cone
(and hence is not a sparse directory entry in the index).
For now, we demonstrate how the sparse index integration is extremely
simple for files outside of the cone as well as directories within the
cone. A later change will resolve this behavior around sparse
directories.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The .warn_if_uninitialized member was introduced by 48308681
(git submodule update: have a dedicated helper for cloning,
2016-02-29) to submodule_update_clone struct and initialized to
false. When c9911c93 (submodule--helper: teach update_data more
options, 2022-03-15) moved it to update_data struct, it started
to initialize it to true but this change was not explained in
its log message.
The member is set to true only when a pathspec was given, and is
used to give a diagnostic message when a submodule that matched the
pathspec is found uninitialized. "submodule update" without pathspec
is supposed to iterate over all submodules (i.e. without pathspec
limitation) and update only the initialized submodules, and finding
uninitialized submodules during the iteration is a totally expected
and normal thing that should not be warned about.
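A sketch of the intended behavior (the submodule name is illustrative):
    git submodule update              # silently skips uninitialized submodules
    git submodule update -- sub/dir   # still warns if "sub/dir" is uninitialized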
[jc: added tests]
Signed-off-by: Orgad Shaneh <orgads@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This only surfaced as a regression after 2.36 release, but the
breakage was already there with us for at least a year.
The diff_free() call is to be used after we completely finished with
a diffopt structure. After "git diff A B" finishes producing
output, calling it before process exit is fine. But there are
commands that prepare a diff_options struct once, compare two sets
of paths, release resources that were used to do the comparison,
then reuse the same diff_options struct to go on to compare the next
two sets of paths, like "git log -p".
After "git log -p" finishes showing a single commit, calling it
before it goes on to the next commit is NOT fine. There is a
mechanism, the .no_free member in diff_options struct, to help "git
log" to avoid calling diff_free() after showing each commit and
instead call it just once. When the mechanism was introduced in
e900d494 (diff: add an API for deferred freeing, 2021-02-11),
however, we forgot to do the same to "diff-tree --stdin", which *is*
a moral equivalent to "git log".
During 2.36 release cycle, we started clearing the pathspec in
diff_free(), so programs like gitk that run
git diff-tree --stdin -- <pathspec>
downstream of a pipe, processing one commit after another, started
showing irrelevant comparisons outside the given <pathspec> from the
second commit. The same commit, by forgetting to teach the .no_free
mechanism, broke "diff-tree --stdin -I<regexp>" and nobody noticed
it for over a year, presumably because it is such a seldom-used
option.
But <pathspec> is a different story. The breakage was very
prominently visible and was reported immediately after 2.36 was
released.
Fix this breakage by mimicking how "git log" utilizes the .no_free
member so that "diff-tree --stdin" behaves more similarly to "log".
Protect the fix with a few new tests.
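A sketch of the kind of pipeline gitk sets up (the pathspec is
illustrative):
    git rev-list HEAD |
    git diff-tree --stdin -p -- Documentation/
    # before the fix, commits after the first ignored the <pathspec> limit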
Reported-by: Matthias Aßhauer <mha1993@live.de>
Helped-by: René Scharfe <l.s.r@web.de>
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The get_object_directory() method returns the exact string stored at
the_repository->objects->odb->path. The return type of "char *" implies
that the caller must keep track of the buffer and free() it when
complete. This causes significant problems later when the ODB is
accessed.
Use "const char *" as the return type to avoid this confusion. There are
no current callers that care about the non-const definition.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The --object-dir argument to 'git multi-pack-index' allows a user to
specify an alternate to use instead of the local $GITDIR. This is used
by third-party tools like VFS for Git to maintain the pack-files in a
"shared object cache" used by multiple clones.
On Windows, the user can specify a path using a Windows-style file path
with backslashes such as "C:\Path\To\ObjectDir". This same path style is
used in the .git/objects/info/alternates file, so it already matches the
path of that alternate. However, find_odb() converts these paths to
real-paths for the comparison, which use forward slashes. As of the
previous change, lookup_multi_pack_index() uses real-paths, so it
correctly finds the target multi-pack-index when given these paths.
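A sketch of the kind of invocation this is about (the path is
illustrative):
    git multi-pack-index --object-dir 'C:\Path\To\ObjectDir' write
    # the path is normalized to its real path before comparing
    # against the known alternates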
Some commands such as 'git multi-pack-index repack' call child processes
using the object_dir value, so it can be helpful to convert the path to
the real-path before sending it to those locations.
Add a callback to convert the real path immediately upon parsing the
argument. We need to be careful that we don't store the exact value out
of get_object_directory() and free it, or we could corrupt a later use
of the_repository->objects->odb->path.
We don't use get_object_directory() for the initial instantiation in
cmd_multi_pack_index() because we need 'git multi-pack-index -h' to work
without a Git repository.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This helper looks for a parsed multi-pack-index whose object directory
matches the given object_dir. Before going into the loop over the parsed
multi-pack-indexes, it calls find_odb() to ensure that the given
object_dir is actually a known object directory.
However, find_odb() uses real-path manipulations to compare the input to
the alternate directories. This same real-path comparison is not used in
the loop, leading to potential issues with the strcmp().
Update the method to use the real-path values instead.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When cloning directly from a local repository, we load a list of refs
based on scanning the $GIT_DIR/refs/ directory of the "server"
repository. If files exist in that directory that do not parse as
hexadecimal hashes, then the ref array used by write_remote_refs()
ends up with some entries with null OIDs. This causes us to hit a BUG()
statement in ref_transaction_create():
BUG: create called without valid new_oid
This BUG() call used to be a die() until 033abf97f (Replace all
die("BUG: ...") calls by BUG() ones, 2018-05-02). Before that, the die()
was added by f04c5b552 (ref_transaction_create(): check that new_sha1 is
valid, 2015-02-17).
The original report for this bug [1] mentioned that this problem did not
exist in Git 2.27.0. The failure bisects unsurprisingly to 968f12fda
(refs: turn on GIT_REF_PARANOIA by default, 2021-09-24). When
GIT_REF_PARANOIA is enabled, this case always fails as far back as I am
able to successfully compile and test the Git codebase.
[1] https://github.com/git-for-windows/git/issues/3781
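A minimal sketch of the failure mode (repository and file names are
illustrative):
    git init server
    git -C server commit --allow-empty -m initial
    : >server/.git/refs/heads/junk   # a file that does not parse as a hash
    git clone server client
    # used to hit the BUG() above; with this change it dies with a
    # regular error message instead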
There are two approaches to consider here. One would be to remove this
BUG() statement in favor of returning with an error. There are only two
callers to ref_transaction_create(), so this would have a limited
impact.
The other approach would be to add special casing in 'git clone' to
avoid this faulty input to the method.
While I originally started with changing 'git clone', I decided that
modifying ref_transaction_create() was a more complete solution. This
prevents failing with a BUG() statement when we already have a good way
to report an error (including a reason for that error) within the
method. Both callers properly check the return value and die() with the
error message, so this is an appropriate direction.
The added test helps check against a regression, and also checks that
our intended error message is handled correctly.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This reverts commit 2d53975488.
3656f84278 (name-rev: prefer shorter names over following merges,
2021-12-04) broke the assumption of 2d53975488 (name-rev: release unused
name strings, 2020-02-04) that a better name for a child is a better
name for all of its ancestors as well, because it added a penalty for
generation > 0. This leads to strings being free(3)'d that are still
needed.
079f970971 (name-rev: sort tip names before applying, 2020-02-05)
already reduced the number of free(3) calls for the use case that
motivated the original patch (name-rev --all in the Chromium repository)
from ca. 44000 to 5, and 3656f84278 eliminated even those few. So this
revert won't affect name-rev's performance on that particular repo.
Reported-by: Thomas Hurst <tom@hur.st>
Helped-by: Elijah Newren <newren@gmail.com>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "--since=<time>" option of "git log" limits the commits displayed by
the command by stopping the traversal once it sees a commit whose
timestamp is older than the given time and not digging further into its
parents.
This is OK in a history where a commit always has a newer timestamp than
any of its parents'. Once you see a commit older than the given <time>,
all ancestor commits of it are even older than the time anyway. It
poses, however, a problem when there is a commit with a wrong timestamp
that makes it appear older than its parents. Stopping traversal at the
"incorrectly old" commit will hide its ancestors that are newer than
that wrong commit and are newer than the cut-off time given with the
--since option. As --max-age and --after are synonyms of --since,
they share the same issue.
Add a new "--since-as-filter" option that is a variant of
"--since=<time>". Instead of stopping the traversal to hide an old
enough commit and all its ancestors, exclude commits with an old
timestamp from the output but still keep digging the history.
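As a sketch, in a history where one commit carries a wrong, too-old
timestamp:
    git log --since="2 weeks ago"            # stops at the misdated commit, hiding newer ancestors
    git log --since-as-filter="2 weeks ago"  # keeps traversing and only filters the output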
Without other traversal stopping options, this will force the command in
"git log" family to dig down the history to the root. It may be an
acceptable cost for a small project with short history and many commits
with screwy timestamps.
It is quite unlikely for us to add a traversal stopper other than
--since, so have this as a --since-as-filter option, rather than a
separate --as-filter, which would probably be more confusing.
Signed-off-by: Miklos Vajna <vmiklos@vmiklos.hu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix a regression in 707d2f2fe8 (CI: use "$runs_on_pool", not
"$jobname" to select packages & config, 2021-11-23).
In that commit I changed CC=gcc-9 to CC=gcc, but on OSX the "gcc" in
$PATH points to clang, so we need to use gcc-9 instead. Likewise for
the linux-gcc job CC=gcc-8 was changed to the implicit CC=gcc, which
would select GCC 9.4.0 instead of GCC 8.4.0.
Furthermore in 25715419bf (CI: don't run "make test" twice in one
job, 2021-11-23) when the "linux-TEST-vars" job was split off from
"linux-gcc" the "cc_package: gcc-8" line was copied along with
it, so its "cc_package" line wasn't working as intended either.
As a table, this is what's changed by this commit, i.e. it only
affects the linux-gcc, linux-TEST-vars and osx-gcc jobs:
|-------------------+-----------+-------------------+-------+-------|
| jobname | vector.cc | vector.cc_package | old | new |
|-------------------+-----------+-------------------+-------+-------|
| linux-clang | clang | - | clang | clang |
| linux-sha256 | clang | - | clang | clang |
| linux-gcc | gcc | gcc-8 | gcc | gcc-8 |
| osx-clang | clang | - | clang | clang |
| osx-gcc | gcc | gcc-9 | clang | gcc-9 |
| linux-gcc-default | gcc | - | gcc | gcc |
| linux-TEST-vars | gcc | gcc-8 | gcc | gcc-8 |
|-------------------+-----------+-------------------+-------+-------|
Reported-by: Carlo Arenas <carenas@gmail.com>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Improve the wording for a couple paragraphs in two different manuals
relating to sparse behavior.
Reported-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While we have no current plans to actually remove --no-cone mode, we
think users would be better off not using it. Update the documentation
accordingly, including explaining why we think non-cone mode is
problematic for users.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "Internals -- Cone Pattern Set" section starts off discussing
patterns, despite the fact that cone mode is about avoiding the
patterns. This made sense back when non-cone mode was the default and
we started by discussing the full pattern set, but now that we are
changing the default, it makes more sense to discuss cone-mode first and
avoid the full discussion of patterns. Split this section into two, the
first with details about how cone mode operates, and the second
following the full pattern set section and discussing how the cone mode
patterns used under the hood relate to the full pattern set.
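As a sketch of that relationship (directory names are illustrative),
"git sparse-checkout set A/B" in cone mode writes a pattern set along
these lines:
    cat .git/info/sparse-checkout
    /*
    !/*/
    /A/
    !/A/*/
    /A/B/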
While at it, flesh out the "Internals -- Full Pattern Set" section a bit
to include more examples as well.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since many users like to learn from examples, provide a section in the
manual with example commands that would be used and a brief explanation
of what each does.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With cone mode as the default, it makes sense to discuss it before
non-cone mode. Also, the new default means we can just use directories
in most cases and users do not need to understand patterns or their
meanings. Let's take advantage of this to mark several sections as
"INTERNALS", notifying the user that they do not need to know all those
details in order to make use of the sparse-checkout command.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The 'init' subcommand of sparse-checkout was deprecated in ba2f3f58ac
("git-sparse-checkout.txt: update to document init/set/reapply changes",
2021-12-14), but a couple places in the manual still assumed it was the
primary way to use sparse-checkout. Correct them.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now that cone mode is the default, we'd like to focus on the arguments
to set/add being directories rather than patterns, and it probably makes
sense to provide an earlier heads up that files from leading directories
get included as well.
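A sketch contrasting the two modes (arguments are illustrative):
    git sparse-checkout set Documentation t                    # cone mode: directories
    git sparse-checkout set --no-cone '*.c' 'Documentation/'   # non-cone mode: patterns
    # in cone mode, files in the leading directories (here, the
    # repository root) are included as well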
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Make cone mode the default, and update the documentation accordingly.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add an explicit --no-cone to several sparse-checkout invocations in
preparation for changing the default to cone mode.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When "--current" is given to "git show-branch" running in the
"--reflog" mode, the code tries to reference a "reflog" message
that does not even exist. This is because the --current is not
prepared to work in that mode.
The reason "--current" exists is to support this request:
I list branches on the command line. These are the branches I
care about and I use as anchoring points. I may or may not be on
one of these main branches. Please make sure I can view the
commits on the current branch with respect to what is in these
other branches.
And to serve that request, the code checks if the current branch is
among the ones listed on the command line, and only if it is not,
adds it to the end of one array, which essentially lists the objects.
The reflog mode additionally uses another array to list reflog
messages, which the "--current" code does not add to. This leaves
one uninitialized slot at the end of the array of reflog messages,
and causes the program to show garbage or segfault.
Catch the unsupported (and meaningless) combination and exit with a
usage error.
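The newly rejected combination, as a sketch:
    git show-branch --reflog --current
    # now exits with a usage error instead of showing garbage or
    # segfaulting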
There are other combinations of options that are incompatible but
have not been tested. Add tests to cover them while adding coverage
for this new combination.
Reported-by: Gregory David <gregory.david@p1sec.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This document gathers tips, scripts and configuration files to help
people working on Git’s codebase use their favorite tools while
following Git’s coding style.
Move the part about Emacs configuration from CodingGuidelines to
ToolsForGit.txt because the purpose of the new file is to centralize
the information about tools.
But also add a mention of Documentation/ToolsForGit.txt in
CodingGuidelines because there is also information about the coding
style in it.
Helped-by: Matthieu Moy <Matthieu.Moy@univ-lyon1.fr>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: COGONI Guillaume <cogoni.guillaume@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
--keep-base rebases onto the merge base of the given upstream and the
current HEAD regardless of whether a branch is given. This is contrary
to the documentation and to the option's intended purpose. Instead,
rebase onto the merge base of the given upstream and the given branch.
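A sketch of the corrected behavior (branch names are illustrative):
    git rebase --keep-base upstream topic
    # now uses the merge base of "upstream" and "topic" as the new base,
    # not the merge base of "upstream" and the current HEAD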
Signed-off-by: Alex Henrie <alexhenrie24@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git uses temporary files to pass the contents of blobs to external diff
programs and textconv filters. It calls mks_tempfile_ts() to create
them, which puts them all in the same directory. This requires adding
a random name prefix.
Use mks_tempfile_dt() instead, which allows the files to have arbitrary
names, each in their own separate temporary directory. This way they
can have the same basename as the original blob, which looks nicer in
graphical diff programs.
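A rough way to observe the difference (the diff driver here is
illustrative):
    GIT_EXTERNAL_DIFF='echo' git diff HEAD~1 -- Makefile
    # the temporary file arguments now carry the blob's own basename in
    # a private temporary directory, instead of a random-prefixed name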
The test in t4020 to check the prettiness of the temporary paths was
neutered by 5476bdf0e8 (diff tests: don't ignore "git diff" exit code in
"read" loop, 2022-03-07), which removed its grep check without replacing
it with an equivalent test_cmp check. Add one that only checks the
basename of the temporary file and nothing else.
And make the test more robust while at it, by using test_when_finished
to get rid of the added file even if the test fails.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a function to create a temporary file with a certain name in a
temporary directory created using mkdtemp(3). Its result is more
sightly than the paths created by mks_tempfile_ts(), which include
a random prefix. That's useful for files passed to a program that
displays their name, e.g. an external diff tool.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are two reasons that we could return NULL early within
load_commit_graph_chain():
1. The file does not exist, so the file pointer is NULL.
2. The file exists, but is too small to contain a single hash.
These were grouped together when the function was first written in
5c84b3396 (commit-graph: load commit-graph chains, 2019-06-18) in order
to simplify how the 'chain_name' string is freed. However, the current
code leaves a narrow window where the file pointer is not closed when
the file exists, but is rejected for being too small.
Split out these cases separately to ensure we close the file in this
case.
Signed-off-by: Kleber Tarcísio <klebertarcisio@yahoo.com.br>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There is an if statement where both if and else have the same
assignment of options.type to REBASE_MERGE. Simplify
it by getting that assignment out of the if.
Signed-off-by: Edmundo Carmona Antoranz <eantoranz@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A couple of work around for CI breaking warnings from gcc 12.
* cb/buggy-gcc-12-workaround:
config.mak.dev: alternative workaround to gcc 12 warning in http.c
config.mak.dev: workaround gcc 12 bug affecting "pedantic" CI job
This provides a "no code change needed" option to the "fix" currently
queued as part of ab/http-gcc-12-workaround and therefore should be
reverted once that gets merged.
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Originally noticed by Peff[1], but yet to be corrected[2] and planned to
be released with Fedora 36 (scheduled for Apr 19).
dir.c: In function ‘git_url_basename’:
dir.c:3085:13: error: ‘memchr’ specified bound [9223372036854775808, 0] exceeds maximum object size 9223372036854775807 [-Werror=stringop-overread]
3085 | if (memchr(start, '/', end - start) == NULL
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fedora is used as part of the CI, and therefore that release will
trigger failures unless, as an alternative, the version of the image
used is locked to an older release.
Restricting the flag to the affected source file, as well as implementing
an independent facility to track these workarounds was specifically punted
to minimize the risk of introducing problems so close to a release.
This change should be reverted once the underlying gcc bug is solved,
which should be visible by the build no longer triggering this warning.
[1] https://lore.kernel.org/git/YZQhLh2BU5Hquhpo@coredump.intra.peff.net/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=2075786
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
1214aa841b (reftable: add blocksource, an abstraction for random
access reads, 2021-10-07), makes the assumption that it is ok to
free a reftable_block pointing to NULL if the size is also set to
0, but implements that using a memset call that, at least on glibc
based systems, will trigger a runtime exception if called with a
NULL pointer as its first parameter.
Avoid doing so by adding a conditional to check for the size in all
three identical-looking functions that were affected, and therefore
still allow memset to help catch callers that might incorrectly pass
a NULL pointer with a non-zero size, while avoiding the exception for
the valid cases.
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Revert the "deletion of a ref should not trigger transaction events
for loose and packed ref backends separately" that regresses the
behaviour when a ref is not modified since it was packed.
* jc/revert-ref-transaction-hook-changes:
RelNotes: revert the description on the reverted topics
Revert "fetch: increase test coverage of fetches"
Revert "Merge branch 'ps/avoid-unnecessary-hook-invocation-with-packed-refs'"
* update the following words translations: commit, untracked, stage,
cache, stash, work..., index, reset, label, check..., tags, graft,
alternate object, amend, ancestor, cherry-pick, bisect, blame, chain,
cache, bug, chunk, branch, bundle, clean, clone, commit-graph, commit
object, commit-ish, committer, cover letter, conflict, dangling,
detach, dir, dumb, fast-forward, file system, fixup, fork, fetch, Git
archive, gitdir, graft, replace ref
* correct some misspellings
* git-po-helper update
* remove some obsolete lines
* unfuzzy entries
* random translation updates
* update contact in pt_PT.po
* add the following words to the translation table: override, recurse,
print, offset, unbundle, mirror repository, multi-pack, bad,
whitespace, batch
* remove the following words of the translation table: core Git
* change the following words on the translation table: dry-run, apply,
patch, replay, blame, chain, gitdir, file system, fork, unset, handle
* some translation to the first person
* update copyright text
* word 'utilização:' to 'uso:'
* word 'pai' to 'parente'
Signed-off-by: Daniel Santos <dacs.git@brilhante.top>
Add a TODO comment indicating that we should release "diffopt" in
release_revisions(). In a preceding commit we started releasing the
"pruning" member of the same type, but handling "diffopt" will require
us to untangle the "no_free" conditions I added in e900d494dc (diff:
add an API for deferred freeing, 2021-02-11).
Let's leave a TODO comment to that effect, and so that we don't forget
refactor code that was changed to use release_revisions() in earlier
commits to stop using the "diffopt" member after a call to
release_revisions(). This works currently, but would become a logic
error as soon as we started freeing "diffopt". Doing that change now
doesn't harm anything, and future-proofs us against a later change to
release_revisions().
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Refactor the existing reset_topo_walk() into a thin wrapper for a
release_revisions_topo_walk_info() + resetting the member to "NULL",
and call release_revisions_topo_walk_info() from release_revisions().
This fixes memory leaks that have been with us ever since
"topo_walk_info" was added to revision.[ch] in
f0d9cc4196 (revision.c: begin refactoring --topo-order logic,
2018-11-01).
Due to various other leaks this makes no tests pass in their entirety,
but e.g. before this change, running this on git.git:
./git -P log --pretty=tformat:"%P %H | %s" --parents --full-history --topo-order -3 -- README.md
Would report under SANITIZE=leak:
SUMMARY: LeakSanitizer: 531064 byte(s) leaked in 6 allocation(s).
Now we'll free all of that memory.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extend the release_revisions() function so that it frees the
"date_mode" in the "struct rev_info".
This uses the date_mode_release() function added in 974c919d36 (date
API: add and use a date_mode_release(), 2022-02-16). As that commit
notes "t7004-tag.sh" tests for the leaks that are being fixed
here. That test now fails "only" 44 tests, instead of the 46 it failed
before this change.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Call diff_free() on the "pruning" member of "struct rev_info". Doing
so makes several tests pass under SANITIZE=leak.
This was also the last missing piece that allows us to remove the
UNLEAK() in "cmd_diff" and "cmd_diff_index", which allows us to use
those commands as a canary for general leaks in the revisions API. See
[1] for further rationale, and 886e1084d7 (builtin/: add UNLEAKs,
2017-10-01) for the commit that added the UNLEAK() there.
1. https://lore.kernel.org/git/220218.861r00ib86.gmgdl@evledraar.gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a missing reflog_walk_info_release() to "reflog-walk.c" and use it
in release_revisions().
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Clear the "boundary_commits" object_array in release_revisions(). This
makes a few more tests pass under SANITIZE=leak, including
"t/t4126-apply-empty.sh" which started failing when an UNLEAK() in
cmd_format_patch() was removed in a preceding commit.
This also re-marks the various tests relying on "git format-patch" as
passing under "SANITIZE=leak", in the preceding "revisions API users:
use release_revisions() in builtin/log.c" commit those were marked as
failing as we removed the UNLEAK(rev) from cmd_format_patch() in
"builtin/log.c".
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extend the release_revisions() function so that it frees the
"prune_data" in the "struct rev_info". This means that any code that
calls "release_revisions()" already can get rid of adjacent calls to
clear_pathspec().
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extend the release_revisions() function so that it frees the
"grep_filter" in the "struct rev_info". This allows us to mark a test
as passing under "TEST_PASSES_SANITIZE_LEAK=true".
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extend the release_revisions() function so that it frees the
"filter" in the "struct rev_info". This in combination with a
preceding change to free "cmdline" means that we can mark another set
of tests as passing under "TEST_PASSES_SANITIZE_LEAK=true".
The "filter" member was added recently in ffaa137f64 (revision: put
object filter into struct rev_info, 2022-03-09), and this fixes leaks
intruded in the subsequent leak 7940941de1 (pack-objects: use
rev.filter when possible, 2022-03-09) and 105c6f14ad (bundle: parse
filter capability, 2022-03-09).
The "builtin/pack-objects.c" leak in 7940941de1 was effectively with
us already, but the variable was referred to by a "static" file-scoped
variable. The "bundle.c" leak in 105c6f14ad was newly introduced
with the new "filter" feature for bundles.
The "t5600-clone-fail-cleanup.sh" change here to add
"TEST_PASSES_SANITIZE_LEAK=true" is one of the cases where
run-command.c not carrying the abort() exit code upwards would have
had that test passing before, but now it *actually* passes[1]. We
should fix the lack of 1=1 mapping of SANITIZE=leak testing to actual
leaks some other time, but it's an existing edge case, let's just mark
the really-passing test as passing for now.
1. https://lore.kernel.org/git/220303.86fsnz5o9w.gmgdl@evledraar.gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extend the release_revisions() function so that it frees the
"cmdline" in the "struct rev_info". This in combination with a
preceding change to free "commits" and "mailmap" means that we can
whitelist another test under "TEST_PASSES_SANITIZE_LEAK=true".
There was a proposal in [1] to do away with xstrdup()-ing this
add_rev_cmdline(), perhaps that would be worthwhile, but for now let's
just free() it.
We could also make that a "char *" in "struct rev_cmdline_entry"
itself, but since we own it let's expose it as a constant to outside
callers. I proposed that in [2] but have since changed my mind. See
14d30cdfc0 (ref-filter: fix memory leak in `free_array_item()`,
2019-07-10), c514c62a4f (checkout: fix leak of non-existent branch
names, 2020-08-14) and other log history hits for "free((char *)" for
prior art.
This includes the tests we had false-positive passes on before my
6798b08e84 (perl Git.pm: don't ignore signalled failure in
_cmd_close(), 2022-02-01), now they pass for real.
Since there are 66 tests matching t/t[0-9]*git-svn*.sh it's easier to
list those that don't pass than to touch most of those 66. So let's
introduce a "TEST_FAILS_SANITIZE_LEAK=true", which if set in the tests
won't cause lib-git-svn.sh to set "TEST_PASSES_SANITIZE_LEAK=true".
This change also marks all the tests that we removed
"TEST_FAILS_SANITIZE_LEAK=true" from in an earlier commit due to
removing the UNLEAK() from cmd_format_patch(), we can now assert that
its API use doesn't leak any "struct rev_info" memory.
This change also made the test "t5503-tagfollow.sh" pass on current
master, but that would regress when combined with
ps/fetch-atomic-fixup's de004e848a (t5503: simplify setup of test
which exercises failure of backfill, 2022-03-03) (through no fault of
that topic, that change started using "git clone" in the test, which
has an outstanding leak). Let's leave that test out for now to avoid
in-flight semantic conflicts.
1. https://lore.kernel.org/git/YUj%2FgFRh6pwrZalY@carlos-mbp.lan/
2. https://lore.kernel.org/git/87o88obkb1.fsf@evledraar.gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extend the release_revisions() function so that it frees the
"mailmap" in the "struct rev_info".
The log family of functions now calls the clear_mailmap() function
added in fa8afd18e5a (revisions API: provide and use a
release_revisions(), 2021-09-19), allowing us to whitelist some tests
with "TEST_PASSES_SANITIZE_LEAK=true".
Unfortunately having a pointer to a mailmap in "struct rev_info"
instead of an embedded member that we "own" gets a bit messy, as can be
seen in the change to builtin/commit.c.
When we free() this data we won't be able to tell apart a pointer to a
"mailmap" on the heap from one on the stack. As seen in
ea57bc0d41 (log: add --use-mailmap option, 2013-01-05) the "log"
family allocates it on the heap, but in the find_author_by_nickname()
code added in ea16794e43 (commit: search author pattern against
mailmap, 2013-08-23) we allocated it on the stack instead.
Ideally we'd simply change that member to a "struct string_list
mailmap" and never free() the "mailmap" itself, but that would be a
much larger change to the revisions API.
We have code that needs to hand an existing "mailmap" to a "struct
rev_info", while we could change all of that, let's not go there
now.
The complexity isn't in the ownership of the "mailmap" per se, but in
the fact that various things assume "rev_info.mailmap == NULL" means
"doesn't want mailmap"; if we changed that to an init'd "struct
string_list" we'd need to carefully refactor things to change those
assumptions.
Let's instead always free() it, and simply declare that if you add
such a "mailmap" it must be allocated on the heap. Any modern libc
will correctly panic if we free() a stack variable, so this should be
safe going forward.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extend the release_revisions() function so that it frees the
"commits" in the "struct rev_info".
We don't expect to use this "struct rev_info" again, so there's no
reason to NULL out revs->commits, as e.g. simplify_merges() and
create_boundary_commit_list() do.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use release_revisions() for users of "struct rev_info" that reach into
the struct and clear the "prune_data" already.
In a subsequent commit we'll teach release_revisions() to clear this
itself, but in the meantime let's invoke release_revisions() here to
clear anything else we may have missed, and for reasons of having
consistent boilerplate.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use a release_revisions() with those "struct rev_info" users which
already "UNLEAK" the struct. It may seem odd to simultaneously attempt
to free() memory, but also to explicitly ignore whether we have memory
leaks in the same.
As explained in preceding commits this is being done to use the
built-in commands as a guinea pig for whether the release_revisions()
function works as expected, we'd like to test e.g. whether we segfault
as we change it. In subsequent commits we'll then remove these
UNLEAK() as the function is made to free the memory that caused us to
add them in the first place.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In preparation for having the "log" family of functions make wider use
of release_revisions() let's have them call it just before
exiting. This changes the "log", "whatchanged", "show",
"format-patch", etc. commands, all of which live in this file.
The release_revisions() API still only frees the "pending" member, but
will learn to release more members of "struct rev_info" in subsequent
commits.
In the case of "format-patch" revert the addition of UNLEAK() in
dee839a263 (format-patch: mark rev_info with UNLEAK, 2021-12-16),
which will cause several tests that previously passed under
"TEST_PASSES_SANITIZE_LEAK=true" to start failing.
In subsequent commits we'll now be able to use those tests to check
whether that part of the API is really leaking memory, and will fix
all of those memory leaks. Removing the UNLEAK() allows us to make
incremental progress in that direction. See [1] for further details
about this approach.
Note that the release_revisions() will not be sufficient to deal with
the code in cmd_show() added in 5d7eeee2ac (git-show: grok blobs,
trees and tags, too, 2006-12-14) which clobbers the "pending" array in
the case of "OBJ_COMMIT". That will need to be dealt with by some
future follow-up work.
1. https://lore.kernel.org/git/220218.861r00ib86.gmgdl@evledraar.gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the case of cmd_main() in http-push.c we need to move the
declaration of the "struct rev_info" into the loop over the
"remote_refs" when adding a release_revisions().
We'd previously set up the "revs" for each remote, but would
potentially leak memory on each one.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a release_revisions() to various users of "struct rev_info" which
requires a minor refactoring to a "goto cleanup" pattern to use that
function.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change the "revision" member of "struct stash_info" to be initialized
via a macro, and more importantly have the initializing function also
be tasked with freeing it, usually via a "goto cleanup" pattern.
Despite the "revision" name (and the topic of the series containing
this commit) the "stash info" has nothing to do with the "struct
rev_info". I'm making this change because in the subsequent commit
when we do want to free the "struct rev_info" via a "goto cleanup"
pattern we'd otherwise free() uninitialized memory in some cases, as
we only strbuf_init() the string in get_stash_info().
So while it's not the smallest possible change, let's convert all
users of this pattern in the file while we're at it.
A good follow-up to this change would be to change all the "ret = -1;
goto done;" in this file to instead use a "goto cleanup", and
initialize "int ret = -1" at the start of the relevant functions. That
would allow us to drop a lot of needless brace verbosity on two-line
"if" statements, but let's leave that alone for now.
To ensure that there's a 1=1 mapping between owners of the "struct
stash_info" and free_stash_info() change the assert_stash_ref()
function to be a trivial get_stash_info_assert() wrapper. The caller
will call free_stash_info(), and by returning -1 we'll eventually (via
!!ret) exit with status 1 anyway.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use release_revisions() with various users of "struct rev_info" which
need to have their "struct rev_info" zero-initialized before we can
start using it.
For the bundle.c code see the early exit case added in
3bbbe467f2 (bundle verify: error out if called without an object
database, 2019-05-27).
For the relevant bisect.c code see 45b6370812 (bisect: libify
`check_good_are_ancestors_of_bad` and its dependents, 2020-02-17).
For the submodule.c code see the "goto" on "(!left || !right || !sub)"
added in 8e6df65015 (submodule: refactor show_submodule_summary with
helper function, 2016-08-31).
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A subsequent commit will add "REV_INFO_INIT" macro adjacent to
repo_init_revisions(), unfortunately between the "struct rev_info"
itself and that function we've added various miscellaneous code
between the two over the years.
Let's move that code either lower in revision.h, giving it API docs
while we're at it, or in cases where it wasn't public API at all move
it into revision.c. No lines of code are changed here, only moved
around. The only changes are the addition of new API comments.
The "tree_difference" variable could also be declared like this, which
I think would be a lot clearer, but let's leave that for now to keep
this a move-only change:
static enum {
REV_TREE_SAME,
REV_TREE_NEW, /* Only new files */
REV_TREE_OLD, /* Only files removed */
REV_TREE_DIFFERENT, /* Mixed changes */
} tree_difference = REV_TREE_SAME;
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a release_revisions() to various users of "struct rev_info" in
those straightforward cases where we only need to add the
release_revisions() call to the end of a block, and don't need to
e.g. refactor anything to use a "goto cleanup" pattern.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The users of the revision.[ch] API's "struct rev_info" are a major
source of memory leaks in the test suite under SANITIZE=leak, which in
turn adds a lot of noise when trying to mark up tests with
"TEST_PASSES_SANITIZE_LEAK=true".
The users of that API are largely one-shot, e.g. "git rev-list" or
"git log", or the "git checkout" and "git stash" being modified here.
For these callers freeing the memory is arguably a waste of time, but
in many cases they've actually been trying to free the memory, and
just doing that in a buggy manner.
Let's provide a release_revisions() function for these users, and
start migrating them over per the plan outlined in [1]. Right now this
only handles the "pending" member of the struct, but more will be
added in subsequent commits.
Even though we only clear the "pending" member now, let's not leave a
trap in code like the pre-image of index_differs_from(), where we'd
start doing the wrong thing as soon as the release_revisions() learned
to clear its "diffopt". I.e. we need to call release_revisions() after
we've inspected any state in "struct rev_info".
This leaves in place e.g. clear_pathspec(&rev.prune_data) in
stash_working_tree() in builtin/stash.c, subsequent commits will teach
release_revisions() to free "prune_data" and other members that in
some cases are individually cleared by users of "struct rev_info" by
reaching into its members. Those subsequent commits will remove the
relevant calls to e.g. clear_pathspec().
We avoid amending code in index_differs_from() in diff-lib.c as well
as wt_status_collect_changes_index(), has_unstaged_changes() and
has_uncommitted_changes() in wt-status.c in a way that assumes that we
are already clearing the "diffopt" member. That will be handled in a
subsequent commit.
1. https://lore.kernel.org/git/87a6k8daeu.fsf@evledraar.gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add and apply coccinelle rules to remove "if (E)" before
"free_commit_list(E)"; the function can accept NULL. Further change
cases where "E = NULL" followed so that the assignment is also done
unconditionally.
The code changes in this commit were entirely made by the coccinelle
rule being added here, and applied with:
make contrib/coccinelle/free.cocci.patch
patch -p1 <contrib/coccinelle/free.cocci.patch
The only manual intervention here is that the relevant code in
commit.c has been manually re-indented.
Suggested-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix two memory leaks in "struct rev_info" by freeing that memory in
cmd_format_patch(). These two are unusual special-cases in being in
the "struct rev_info", but not being "owned" by the code in
revision.c. I.e. they're members of the struct so that this code in
"builtin/log.c" can conveniently pass information to code in
"log-tree.c".
See e.g. the make_cover_letter() caller of log_write_email_headers()
here in "builtin/log.c", and [1] for a demonstration of where the
"extra_headers" and "ref_message_ids" struct members are used.
See 20ff06805c (format-patch: resurrect extra headers from config,
2006-06-02) and d1566f7883 (git-format-patch: Make the second and
subsequent mails replies to the first, 2006-07-14) for the initial
introduction of "extra_headers" and "ref_message_ids".
We can count on repo_init_revisions() memset()-ing this data to 0
however, so we can count on it being either NULL or something we
allocated. In the case of "extra_headers" let's add a local "char *"
variable to hold it, to avoid the eventual cast from "const char *"
when we free() it.
1. https://lore.kernel.org/git/220401.868rsoogxf.gmgdl@evledraar.gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Follow-up on the introduction of string_list_init_nodup() and
string_list_init_dup() in the series merged in bd4232fac3 (Merge
branch 'ab/struct-init', 2021-07-16) and convert code that implicitly
relied on xcalloc() being equivalent to the initializer to use
xmalloc() and string_list_init_{no,}dup() instead.
In the case of get_unmerged() in merge-recursive.c we used the
combination of xcalloc() and assigning "1" to "strdup_strings" to get
what we'd get via string_list_init_dup(); let's use that instead.
Adjacent code in cmd_format_patch() will be changed in a subsequent
commit, since we're changing that let's change the other in-tree
patterns that do the same. Let's also convert a "x == NULL" to "!x"
per our CodingGuidelines, as we need to change the "if" line anyway.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Amend a freeing pattern added in 0906ac2b54 (blame: use changed-path
Bloom filters, 2020-04-16) to use a "goto cleanup", so that we can be
sure that we call cleanup_scoreboard().
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix a memory leak that's been with us since f9500261e0 (fast-rebase:
write conflict state to working tree, index, and HEAD, 2021-05-20)
changed this code to move these strbuf_release() calls into an if/else
block.
We'll also add to "reflog_msg" in the "else" arm of the "if" block
being modified here, and we'll append to "branch_msg" in both
cases. But after f9500261e0 only the "if" block would free these two
"struct strbuf".
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Directly invoking "make coverage-report" as a target results in an
error because its prerequisites are missing.
This patch adds the coverage-test prerequisite, which is run only once
each time the coverage-report target is invoked. In practice, the
developer may decide to review the coverage-report results without
necessarily rerunning coverage-test, if it has already been run.
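With this change, a sketch of the expected flow is:
    make coverage-report
    # now also triggers the coverage-test prerequisite instead of
    # failing because its inputs are missing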
Signed-off-by: Elia Pinto <gitter.spiros@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We do not have to guess how common the mistake the change targets is
when describing it. Such an argument may be good while proposing a
change, but does not quite belong in the record of what has already
happened, i.e. a release note.
Helped-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This reverts commit 2a0cafd464,
as it expects a working "a ref deletion must produce a single
transaction, not one for loose and another for packed" topic,
which we do not have.
With the addition of the safe.directory in 8959555ce
(setup_git_directory(): add an owner check for the top-level directory,
2022-03-02) released in v2.35.2, we are receiving feedback from a
variety of users about the feature.
Some users have a very large list of shared repositories and find it
cumbersome to add this config for every one of them.
In a more difficult case, certain workflows involve running Git commands
within containers. The container boundary prevents any global or system
config from communicating `safe.directory` values from the host into the
container. Further, the container almost always runs as a different user
than the owner of the directory in the host.
To simplify the reactions necessary for these users, extend the
definition of the safe.directory config value to include a possible '*'
value. This value implies that all directories are safe, providing a
single setting to opt out of this protection.
Note that an empty assignment of safe.directory clears all previous
values, and this is already the case with the "if (!value || !*value)"
condition.
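For example, a container entrypoint or CI setup that hits this check for
every repository could opt out globally (the quotes keep the shell from
expanding the asterisk):

    $ git config --global safe.directory '*'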
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Nothing ever checks that the entries being treated as safe directories
actually come from the safe.directory key, so an unrelated config key
whose value happens to name a certain directory would also make it a
safe directory.
Signed-off-by: Matheus Valadares <me@m28.io>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is difficult to change the ownership on a directory in our test
suite, so insert a new GIT_TEST_ASSUME_DIFFERENT_OWNER environment
variable to trick Git into thinking we are in a differently-owned
directory. This allows us to test that the config is parsed correctly.
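A minimal sketch of how a test can use the new variable (the surrounding
test_expect_success boilerplate is omitted, and the exact error text is
not asserted here):

    test_must_fail env GIT_TEST_ASSUME_DIFFERENT_OWNER=1 git status &&
    git config --global --add safe.directory "$PWD" &&
    env GIT_TEST_ASSUME_DIFFERENT_OWNER=1 git status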
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* 'master' of github.com:git/git: (25 commits)
Git 2.36-rc2
i18n: fix some badly formatted i18n strings
Git 2.36-rc1
t9902: split test to run on appropriate systems
ls-tree doc: document interaction with submodules
Documentation: add --batch-command to cat-file synopsis
git-ls-tree.txt: fix the name of "%(objectsize:padded)"
submodule-helper: fix usage string
doc: replace "--" with {litdd} in credential-cache/fsmonitor
contrib/scalar: fix 'all' target in Makefile
Documentation/Makefile: fix "make info" regression in dad9cd7d51
configure.ac: fix HAVE_SYNC_FILE_RANGE definition
git-compat-util: really support openssl as a source of entropy
ls-tree: `-l` should not imply recursive listing
Git 2.35.2
Git 2.34.2
Git 2.33.2
Git 2.32.1
Git 2.31.2
Git 2.30.3
...
Use test_path_is_file() instead of 'test -f' for better debugging
information.
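The conversion itself is mechanical; the win is the diagnostic on
failure (a message naming the missing path rather than a bare non-zero
exit). Illustration with a made-up path:

    # before
    test -f sub/file &&
    # after
    test_path_is_file sub/file &&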
Signed-off-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As the address sanitizer checks for a superset of the issues detected
by setting MALLOC_CHECK_ (which tries to detect things like double
frees and off-by-one errors) there is no need to set the latter when
compiling with -fsanitize=address.
This fixes a regression introduced by 131b94a10a ("test-lib.sh: Use
GLIBC_TUNABLES instead of MALLOC_CHECK_ on glibc >= 2.34", 2022-03-04)
which causes all the tests to fail with the message
ASan runtime does not come first in initial library list;
you should either link runtime to your application or
manually preload it with LD_PRELOAD.
when git is compiled with SANITIZE=address on systems with glibc >=
2.34. I have tested SANITIZE=leak and SANITIZE=undefined and they do
not suffer from this regression so the fix in this patch should be
sufficient.
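For reference, an ASan build and a single-test run that would previously
trip over this look like the following (the test script chosen here is
arbitrary):

    $ make SANITIZE=address
    $ cd t && ./t0001-init.sh -i -v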
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Check if git grep works around the PCRE2 bug fixed by their e0c6029
(Fix inifinite loop when a single byte newline is searched in JIT.,
2020-05-29), which affects version 10.35 and earlier.
Searching for leading whitespace also triggers the endless loop.
Set a one-second alarm to abort in case we do get hit by the bug, to
avoid having to wait forever for the test result.
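As an illustration only (pattern and pathspec here are hypothetical), a
guarded reproduction attempt could look like this, with `timeout`
playing the same safety role as the test's one-second alarm:

    $ timeout 10 git grep -P '^\s' -- '*.c'

On an affected PCRE2 (10.35 or earlier, with JIT), the grep may never
return on its own.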
Reported-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The warning about converting line endings is extremely confusing. Its
two sentences each use the word "will" without specifying a timeframe,
which makes it sound like both sentences are referring to the same
timeframe. On top of that, it uses the term "original line endings"
without saying whether "original" means LF or CRLF.
Rephrase the warning to be clear about when the line endings will be
changed and what they will be changed to.
On a platform whose native line endings are not CRLF (e.g. Linux), the
"git add" step in the following sequence triggers the warning in
question:
$ git config core.autocrlf true
$ echo 'Hello world!' >hello.txt
$ git add hello.txt
warning: LF will be replaced by CRLF in hello.txt
The file will have its original line endings in your working directory
Signed-off-by: Alex Henrie <alexhenrie24@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The externalConsole=true setting is broken for many users (launching the
debugger with that setting results in VS Code waiting forever without
actually starting the debugger). Also, this setting is a matter of user
preference, and is arguably better set in a "launch" section in the
user-wide settings.json than hardcoded in our script. Remove the line so
that VS Code's default, or the user's own setting, is used instead.
Add useful links to contrib/vscode/README.md to help the user configure
VS Code and use the debugging feature.
Helped-by: Matthieu Moy <Matthieu.Moy@univ-lyon1.fr>
Helped-by: Derrick Stolee <derrickstolee@github.com>
Co-authored-by: BRESSAT Jonathan <git.jonathan.bressat@gmail.com>
Signed-off-by: COGONI Guillaume <cogoni.guillaume@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Tests that affect the repo in stateful ways are easier to write if we
can run setup steps outside of the measured portion of the perf
iteration.
This change adds a "--setup 'setup-script'" parameter to test_perf. To
make invocations easier to understand, I also moved the prerequisites to
a new --prereq parameter.
The setup facility will be used in the upcoming perf tests for batch
mode, but it already helps in some existing tests, like t5302 and t7820.
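A hedged sketch of an invocation using the new parameters (the
title-first argument order and the EXPENSIVE prerequisite are
assumptions here, chosen only for illustration):

    test_perf 'add many files' --prereq EXPENSIVE --setup '
        git reset --hard
    ' '
        git add .
    '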
Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add basic performance tests for git commands that can add data to the
object database. We cover:
* git add
* git stash
* git update-index (via git stash)
* git unpack-objects
* git commit --all
We cover all currently available fsync methods as well.
Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add test cases to exercise batch mode for:
* 'git add'
* 'git stash'
* 'git update-index'
* 'git unpack-objects'
These tests ensure that the added data winds up in the object database.
In this change we introduce a new test helper lib-unique-files.sh. The
goal of this library is to create a tree of files that have different
oids from any other files that may have been created in the current test
repo. This helps us avoid missing validation of an object being added
due to it already being in the repo.
Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Several tests use awk to parse OIDs from the output of 'git ls-files
--stage' and 'git ls-tree'. Introduce helpers to centralize these uses
of awk.
Update t5317-pack-objects-filter-objects.sh to use the new ls-files
helper so that it has some usages to review. Other updates are left for
the future.
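For reference, the OID is the second whitespace-separated field of
'git ls-files --stage' output and the third field of 'git ls-tree', so
the raw idiom these helpers wrap looks roughly like this (the path names
are placeholders):

    $ git ls-files --stage path/to/file | awk '{print $2}'
    $ git ls-tree HEAD path/to/file | awk '{print $3}'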
Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git for Windows has defaulted to core.fsyncObjectFiles=true since
September 2017. We turn on syncing of loose object files with batch mode
in upstream Git so that we can get broad coverage of the new code
upstream.
We don't actually do fsyncs in most of the test suite, since
GIT_TEST_FSYNC is set to 0. However, we do exercise all of the
surrounding batch mode code since GIT_TEST_FSYNC merely makes the
maybe_fsync wrapper always appear to succeed.
Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The unpack-objects functionality is used by fetch, push, and fast-import
to turn the transferred data into object database entries when there are
fewer objects than the 'unpacklimit' setting.
By enabling an odb-transaction when unpacking objects, we can take advantage
of batched fsyncs.
Here are some performance numbers to justify batch mode for
unpack-objects, collected on a WSL2 Ubuntu VM.
Fsync Mode | Time for 90 objects (ms)
-------------------------------------
Off | 170
On,fsync | 760
On,batch | 230
Note that the default unpackLimit is 100 objects, so there's a 3x
benefit in the worst case. The non-batch mode fsync scales linearly
with the number of objects, so there are significant benefits even with
smaller numbers of objects.
Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The update-index functionality is used internally by 'git stash push' to
set up the internal stashed commit.
This change enables odb-transactions for update-index infrastructure to
speed up adding new objects to the object database by leveraging the
batch fsync functionality.
There is some risk with this change, since under batch fsync, the object
files will be in a tmp-objdir until update-index is complete, so callers
using the --stdin option will not see them until update-index is done.
This risk is mitigated by flushing the ODB transaction prior to
reporting any verbose output so that objects will be visible to callers
that are synchronizing with update-index by snooping its output.
Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The add_files_to_cache function is invoked internally by
builtin/commit.c and builtin/checkout.c for their flags that stage
modified files before doing the larger operation. These commands
can benefit from batched fsyncing.
Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Take advantage of the odb transaction infrastructure around writing the
cached tree to the object database.
Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When adding many objects to a repo with `core.fsync=loose-object`,
the cost of fsync'ing each object file can become prohibitive.
One major source of the cost of fsync is the implied flush of the
hardware writeback cache within the disk drive. This commit introduces
a new `core.fsyncMethod=batch` option that batches up hardware flushes.
It hooks into the bulk-checkin odb-transaction functionality, takes
advantage of tmp-objdir, and uses the writeout-only support code.
When the new mode is enabled, we do the following for each new object:
1a. Create the object in a tmp-objdir.
2a. Issue a pagecache writeback request and wait for it to complete.
At the end of the entire transaction when unplugging bulk checkin:
1b. Issue an fsync against a dummy file to flush the log and hardware
writeback cache, which should by now have seen the tmp-objdir writes.
2b. Rename all of the tmp-objdir files to their final names.
3b. When updating the index and/or refs, we assume that Git will issue
another fsync internal to that operation. This is not the default
today, but the user now has the option of syncing the index and there
is a separate patch series to implement syncing of refs.
On a filesystem with a singular journal that is updated during name
operations (e.g. create, link, rename, etc.), such as NTFS, HFS+, or
XFS, we would expect the fsync to trigger a journal writeout so that this
sequence is enough to ensure that the user's data is durable by the time
the git command returns. This sequence also ensures that no object files
appear in the main object store unless they are fsync-durable.
Batch mode is only enabled if core.fsync includes loose-objects. If
the legacy core.fsyncObjectFiles setting is enabled, but core.fsync does
not include loose-objects, we will use file-by-file fsyncing.
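A minimal way to opt in, using only the knobs described above (the
directory name is a placeholder):

    $ git config core.fsync loose-object
    $ git config core.fsyncMethod batch
    $ git add many-new-files/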
In step (1a) of the sequence, the tmp-objdir is created lazily to avoid
work if no loose objects are ever added to the ODB. We use a tmp-objdir
to maintain the invariant that no loose-objects are visible in the main
ODB unless they are properly fsync-durable. This is important since
future ODB operations that try to create an object with specific
contents will silently drop the new data if an object with the target
hash exists without checking that the loose-object contents match the
hash. Only a full git-fsck would restore the ODB to a functional state
where data loss doesn't occur.
In step (1b) of the sequence, we issue a fsync against a dummy file
created specifically for the purpose. This method has a little higher
cost than using one of the input object files, but makes adding new
callers of this mechanism easier, since we don't need to figure out
which object file is "last" or risk sharing violations by caching the fd
of the last object file.
_Performance numbers_:
Linux - Hyper-V VM running Kernel 5.11 (Ubuntu 20.04) on a fast SSD.
Mac - macOS 11.5.1 running on a Mac mini on a 1TB Apple SSD.
Windows - Same host as Linux, a preview version of Windows 11.
Adding 500 files to the repo with 'git add'. Times reported in seconds.
object file syncing | Linux | Mac | Windows
--------------------|-------|-------|--------
disabled | 0.06 | 0.35 | 0.61
fsync | 1.88 | 11.18 | 2.47
batch | 0.15 | 0.41 | 1.53
Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Make it clearer in the naming and documentation of the plug_bulk_checkin
and unplug_bulk_checkin APIs that they can be thought of as
a "transaction" to optimize operations on the object database. These
transactions may be nested so that subsystems like the cache-tree
writing code can optimize their operations without caring whether the
top-level code has a transaction active.
Add a flush_odb_transaction API that will be used in update-index to
make objects visible even if a transaction is active. The flush call may
also be useful in future cases if we hold a transaction active around
calling hooks.
Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This commit prepares for adding batch-fsync to the bulk-checkin
infrastructure.
The bulk-checkin infrastructure is currently used to batch up addition
of large blobs to a packfile. When a blob is larger than
big_file_threshold, we unconditionally add it to a pack. If bulk
checkins are 'plugged', we allow multiple large blobs to be added to a
single pack until we reach the packfile size limit; otherwise, we simply
make a new packfile for each large blob. The 'unplug' call tells us when
the series of blob additions is done so that we can finish the packfiles
and make their objects available to subsequent operations.
Stated another way, bulk-checkin allows callers to define a transaction
that adds multiple objects to the object database, where the object
database can optimize its internal operations within the transaction
boundary.
Batched fsync will fit into bulk-checkin by taking advantage of the
plug/unplug functionality to determine the appropriate time to fsync
and make newly-added objects available in the primary object database.
* Rename 'state' variable to 'bulk_checkin_packfile', since we will
later be adding 'bulk_fsync_objdir'. This also makes the variable
easier to find in the debugger, since the name is more unique.
* Rename finish_bulk_checkin to flush_bulk_checkin_packfile and call it
unconditionally from unplug_bulk_checkin. Internally it will
conditionally do a flush if there's any work to do.
* Move the 'plugged' data member of 'bulk_checkin_state' into a separate
static variable. Doing this avoids resetting the variable in
finish_bulk_checkin when zeroing the 'bulk_checkin_state'. As-is, we
seem to unintentionally disable the plugging functionality the first
time a new packfile must be created due to packfile size limits. While
disabling the plugging state only results in suboptimal behavior for
the current code, it would be fatal for the bulk-fsync functionality
later in this patch series.
The net effect of these changes is to make a clear separation between
the portion of the bulk-checkin infrastructure that is related to the
packfile (nearly all of it at present) and the part that is related to
other future optimizations of the ODB.
Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Perforce has a file type "utf8" which represents a text file with
explicit BOM. utf8-encoded files *without* BOM are stored as
regular file type "text". The "utf8" file type behaves like text
in all but one important way: it is stored, internally, without
the leading 3 BOM bytes.
git-p4 has historically imported utf8-with-BOM files (files stored,
in Perforce, as type "utf8") the same way as regular text files -
losing the BOM in the process.
Under most circumstances this issue has little functional impact,
as most systems consider the BOM to be optional and redundant, but
this *is* a correctness failure, and it can lead to practical
issues, for example when BOMs are explicitly included in test files,
such as in a file-encoding test suite.
Fix the handling of utf8-with-BOM files when importing changes from
p4 to git, and introduce a test that checks it is working correctly.
Signed-off-by: Tao Klerks <tao@klerks.biz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With the --branch argument of the "sync" subcommand, git-p4 enables
you to import a perforce branch/path to an arbitrary git ref, using
a full ref path, or to refs/remotes/p4/* or refs/heads/p4/*,
depending on --import-local, using a short ref name.
However, when you later want to explicitly sync such a given ref to
pick up subsequent p4 changes, it only works if the ref was placed
in the p4 path *and* has only one path component (no "/").
This limitation results from a bad assumption in the
existing-branch sync logic, and also means you cannot individually
sync branches detected by --detect-branches, as these also get a
"/" in their names.
Fix "git p4 sync --branch", when called with an existing ref, so
that it works correctly regardless of whether the ref is in the p4
path or not, and (in the case of refs in the p4 path) regardless of
whether it has a "/" in its short name or not.
Also add tests to validate that these branch-specific syncs work
as expected.
Signed-off-by: Tao Klerks <tao@klerks.biz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove include "lockfile.h" from builtin/apply.c, which is orphaned
since 6d058c8826 (apply: move lockfile into `apply_state`, 2017-10-05)
Signed-off-by: Garrit Franke <garrit@slashdev.space>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove include "strvec.h" from serve.c, which is orphaned since
f0a35c9ce5 (serve: drop "keys" strvec, 2021-09-15)
Signed-off-by: Garrit Franke <garrit@slashdev.space>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Using a named enum allows casting an integer to the enum type in both
GDB and LLDB:
$ gdb -q -ex 'b wt-status.c:44' -ex r --args ./git status
(gdb) p (enum color_wt_status) slot
$1 = WT_STATUS_ONBRANCH
$ lldb -o 'b wt-status.c:44' -o r -- ./git status
(lldb) p (color_wt_status) slot
(color_wt_status) $0 = WT_STATUS_ONBRANCH
In LLDB, it's also required to cast in the reverse direction, i.e. to
cast an enum constant into its corresponding integer:
(lldb) p (int) color_wt_status::WT_STATUS_ONBRANCH
(int) $1 = 8
Name the enum listing the different RECURSE_SUBMODULES_* modes, to make
debugging easier. For example, when stepping through a part of the code
where an int is compared with a constant in this enum, it allows casting
the int to the enum type or vice-versa, after quickly checking where the
enum constant is declared and learning the enum name.
As to not make this patch a debug-only change, convert the
'fetch_recurse' member of 'struct submodule' to use the newly named
enum.
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Reviewed-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the test 'cherry-pick after renaming branch', stop checking for
the presence of a file (opos) because we are going to "grep" in it in
the same test and the lack of it will be noticed as a failure anyway.
In the test 'revert after renaming branch', instead of allowing any
random contents as long as a known phrase is not there in it, we can
expect the exact outcome---after the successful revert of "added", the
contents of file "spoo" should become identical to what was in file
"oops" in the "initial" commit. This test also contains 'test -f' that
verifies presence of a file, but we have a helper function to do the same
thing. Replace it with appropriate helper function 'test_path_is_file'
for better readability and better error messages.
In both tests, we will not notice when "git rev-parse" starts segfaulting
without emitting any output. The 'test' command will end up being just
"test =", which yields success. Use the 'test_cmp_rev' helper to make
sure we will notice such a breakage.
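To illustrate the failure mode (the branch names here are generic
placeholders): if rev-parse dies without printing anything, the plain
comparison collapses into "test =" and succeeds, whereas the helper
still fails:

    # silently "passes" when both rev-parse calls print nothing
    test $(git rev-parse side) = $(git rev-parse main) &&
    # fails loudly in the same situation
    test_cmp_rev side main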
Signed-off-by: Khalid Masum <khalid.masum.92@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
dc6b1d92ca (wt-status: use settings from git_diff_ui_config, 2018-05-04)
disabled diffopt.break_opt for diffstats shown by git status and in
commit templates. For git status there isn't even a way to enable it.
Make the commit summary (shown after the commit) consistent by disabling
it there as well.
Reported-by: Laurent Lyaudet <laurent.lyaudet@gmail.com>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The output of `git mergetool --tool-help` and `git difftool --tool-help`
only showed the `alias` of each available merge/diff tool.
It is not always obvious which tool these `aliases` end up using (e.g.
`opendiff` runs `FileMerge` and `bc` runs `Beyond Compare`).
This commit adds a short description to each of them to help the user
identify the `alias` they want.
Signed-off-by: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Running 'git {merge,diff}tool --tool-help' now also prints usage
information about the vimdiff tool (and its variants) instead of just
its name.
Two new functions ('diff_cmd_help()' and 'merge_cmd_help()') have been
added to the set of functions that each merge tool (i.e. scripts found
inside "mergetools/") can override to provide tool-specific
information.
Right now, only 'mergetools/vimdiff' implements these functions, but
other tools are encouraged to do so in the future, especially if they
take configuration options not explained anywhere else (as is the case
with the 'vimdiff' tool and the new 'layout' option).
Note that the function 'show_tool_names', used in the implementation of
'git mergetool --tool-help', is also used in Documentation/Makefile to
generate the list of allowed values for the configuration variables
'{diff,merge}.{gui,}tool'. Adjust the rule so its output is an Asciidoc
"description list" instead of a plain list, with the tool name as the
item and the newly added tool description as the description.
In addition, a section has been added to
"Documentation/git-mergetool.txt" to explain the new "layout"
configuration option with examples.
Helped-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* ds/partial-bundle-more:
pack-objects: lazily set up "struct rev_info", don't leak
bundle: output hash information in 'verify'
bundle: move capabilities to end of 'verify'
pack-objects: parse --filter directly into revs.filter
pack-objects: move revs out of get_object_list()
list-objects-filter: remove CL_ARG__FILTER
PEP8 recommends, in its "Inline Comments" section, that inline comments
should be separated from code by two spaces:
https://www.python.org/dev/peps/pep-0008/#inline-comments
However, because all instances of these inline comments extended to an
excessive line length, they have been moved onto separate lines.
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
PEP8 recommends that when wrapping the arguments of conditional
statements, an extra level of indentation should be added to distinguish
arguments from the body of the statement.
This guideline is described here:
https://www.python.org/dev/peps/pep-0008/#indentation
This patch either adds the indentation, or removes unnecessary wrapping.
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
PEP8 requires that binary operators such as assignment and comparison
operators should always be surrounded by a pair of single spaces, and
recommends that all other binary operators should typically be
surrounded by single spaces.
The recommendation is given in the "Other Recommendations" section:
https://www.python.org/dev/peps/pep-0008/#other-recommendations
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
PEP8 makes no specific recommendation about spaces preceding colons in
dictionary declarations, but all the code examples contained within it
declare dictionaries with a single space after the colon, and none
before.
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This patch improves consistency across git-p4 by ensuring that all
comma-separated arguments to function invocations, tuples and lists are
separated by commas with a single space following each comma.
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In several places, git-p4 contains code of the form:
(a, b) = foo()
In each case, multiple values are returned through a tuple or a list and
bound into multiple values.
The brackets around the assigned variables are redundant and can be
removed:
a, b = foo()
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-p4 contains configuration commands for pylint embedded in the header
comment.
Previously, these were combined onto single lines and not sorted
alphabetically. This patch breaks each disable command onto a separate
line to give cleaner diffs, removes duplicate entries, and sorts the
list alphabetically.
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previously, the script contained commented-out code, including Python 2
print statements. Presumably, these were used as a developer aid at some
point in history. However, commented-out code is generally undesirable,
and this code serves no useful purpose. Therefore this patch removes it.
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previously, a small number of functions, methods and classes were
documented using comments. This patch improves consistency by converting
these into docstrings similar to those that already exist in the script.
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This patch attempts to improve the consistency of the docstrings by
making the following changes:
- Rewraps all docstrings to a 79-character column limit.
- Adds a full stop at the end of every docstring.
- Removes any spaces after the opening triple-quotes of all
docstrings.
- Sets the hanging indent of multi-line docstrings to 3-spaces.
- Ensures that the closing triple-quotes of multi-line docstrings are
always on a new line indented by a 3-space indent.
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
PEP8 recommends that all code should be indented in 4-space units. This
guideline is described here:
https://www.python.org/dev/peps/pep-0008/#indentation
Previously git-p4 had multiple cases where code was indented with a
non-multiple of 4-spaces. This patch fixes each of these.
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Python allows compound statements, where multiple statements are
written on a single line separated by semicolons. It is also possible
to add a semicolon after a single statement; however, this is generally
considered untidy and is unnecessary.
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the PEP8 style guidelines, top-level functions and class definitions
should be separated by two blank lines. Methods should be surrounded by
a single blank line.
This guideline is described here in the "Blank Lines" section:
https://www.python.org/dev/peps/pep-0008/#blank-lines
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Untracked cache was originally designed to only work with
"--untracked-files=normal", and is bypassed when
"--untracked-files=all" is requested, but this causes performance
issues for UI tooling that wants to see "all" on a frequent basis.
On the other hand, the conditions that altogether prevented
applicability to the "all" mode no longer seem to apply, after
several major refactors in recent years; this possibility was
discussed in
81153d02-8e7a-be59-e709-e90cd5906f3a@jeffhostetler.com and
CABPp-BFiwzzUgiTj_zu+vF5x20L0=1cf25cHwk7KZQj2YkVzXw@mail.gmail.com,
and somewhat confirmed experimentally by several users using a
version of this patch to use untracked cache with -uall for about a
year.
When 'git status' runs without using the untracked cache, on a large
repo, on Windows, with fsmonitor, it can run very slowly. This can
make GUIs that need to use "-uall" (and therefore currently bypass
untracked cache) unusable when fsmonitor is enabled, on such large
repos.
To partially address this, align the supported directory flags for the
stored untracked cache data with the git config. If a user specifies
an '--untracked-files=' commandline parameter that does not align with
their 'status.showuntrackedfiles' config value, then the untracked
cache will be ignored - as it is for other unsupported situations like
when a pathspec is specified.
If the previously stored flags no longer match the current
configuration, but the currently-applicable flags do match the current
configuration, then discard the previously stored untracked cache
data.
For most users there will be no change in behavior. Users who need
'--untracked-files=all' to perform well will now have the option of
setting "status.showuntrackedfiles" to "all" for better / more
consistent performance.
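For example, a tool that always wants "-uall" output can keep the
untracked cache applicable by aligning the config with it:

    $ git config core.untrackedCache true
    $ git config status.showUntrackedFiles all
    $ git status --untracked-files=all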
Users who need '--untracked-files=all' to perform well for their
tooling AND prefer to avoid the verbosity of "all" when running
git status explicitly without options... are out of luck for now (no
change).
Users who have the "status.showuntrackedfiles" config set to "all"
and yet frequently explicitly call
'git status --untracked-files=normal' (and use the untracked cache)
are the only ones who will be disadvantaged by this change. Their
"--untracked-files=normal" calls will, after this change, no longer
use the untracked cache.
Signed-off-by: Tao Klerks <tao@klerks.biz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Untracked cache was originally designed to only work with
'--untracked-files=normal', and it gets ignored when
'--untracked-files=all' is specified instead.
Add explicit tests for this known as-designed behavior.
Signed-off-by: Tao Klerks <tao@klerks.biz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The alloc_report() function has been orphaned since its introduction
in 855419f764 (Add specialized object allocator, 2006-06-19); it
appears to have been used for demonstration purposes in that commit
message.
These might be handy to use manually in a debugger, but keeping them
and the "count" member of "alloc_state" just for that doesn't seem
worth it.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These macros were last used in 5d3679ee02 (sha1-file: drop
has_sha1_file(), 2019-01-07), so let's remove the coccinelle migration
rules added in 9b45f49981 (object-store: prepare has_{sha1, object}_file
to handle any repo, 2018-11-13), along with the compatibility macros
themselves.
The "These functions..." text in the diff context and the general
comment about compatibility macros still apply to the
"NO_THE_REPOSITORY_COMPATIBILITY_MACROS" use just a few lines below
this, so let's keep the comment.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This function hasn't been used since 449fa5ee06 (pack-bitmap-write:
ignore BITMAP_FLAG_REUSE, 2020-12-08), which was a cleanup commit
intending to get rid of the code around the reusing of bitmaps.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This macro was added in 3443546f6e (Use a *real* built-in diff
generator, 2006-03-24), but none of the xdiff code uses it, it uses
xdl_free() directly.
If we need its functionality again we'll use the FREE_AND_NULL() macro
added in 481df65f4f (git-compat-util: add a FREE_AND_NULL() wrapper
around free(ptr); ptr = NULL, 2017-06-15).
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove a comment about a Makefile knob that was removed in
f7661ce0b8 (Remove -fPIC which was only needed for Git.xs,
2006-09-29). The comment had been copied over to configure.ac in
633b423961 (Copy description of build configuration variables to
configure.ac, 2006-07-08).
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove a "struct child_process" member added in
ac2fbaa674 (run-command: add clean_on_exit_handler, 2016-10-16), but
which was never used.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change the behavior of "git -v" to be synonymous with "--version" /
"version", and "git -h" to be synonymous with "--help", but not "help".
Previously, both of these shorthands displayed the "unknown option"
message. Following this change, "-v" displays the version, and "-h"
displays the help text of the "git" command.
It should be noted that the "-v" shorthand could be misinterpreted by
the user to mean "verbose" instead of "version", since some sub-commands
make use of it in this context. The top-level "git" command does not
have a "verbose" flag, so it's safe to introduce this shorthand
unambiguously.
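For example (the version string shown is of course just illustrative):

    $ git -v
    git version 2.36.0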
Signed-off-by: Garrit Franke <garrit@slashdev.space>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create a new test case file for the different available merge tools.
Right now it only tests the 'mergetool.vimdiff.layout' option. Other
merge tools might be interested in adding their own tests here too.
Signed-off-by: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When running 'git mergetool -t vimdiff', a new configuration option
('mergetool.vimdiff.layout') can now be used to select how the user
wants the different windows, tabs and buffers to be displayed.
If the option is not provided, the layout will be the same one that was
being used before this commit (i.e. two rows with LOCAL, BASE and REMOTE
in the top one and MERGED in the bottom one).
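For instance, the default arrangement described above can be spelled out
explicitly as a layout value (treat the exact string as an illustrative
assumption; the authoritative syntax is in the documentation added by
this series):

    $ git config mergetool.vimdiff.layout "(LOCAL,BASE,REMOTE)/MERGED"
    $ git mergetool --tool=vimdiff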
The 'vimdiff' variants ('vimdiff{1,2,3}') still work but, because they
represent nothing more than different layouts, they are now internally
implemented as subcases of 'vimdiff' with the corresponding
pre-configured 'layout'.
Again, if you don't set "mergetool.vimdiff.layout", everything will work
the same as before, *but* the arguments used to call {n,g,}vim will be
different (even if you don't/shouldn't notice it):
- git mergetool -t vimdiff
> Before this commit:
{n,g,}vim -f -d -c '4wincmd w | wincmd J' $LOCAL $BASE $REMOTE $MERGED
> After this commit:
{n,g,}vim -f -c "echo | split | vertical split | 1b | wincmd l | vertical split | 2b | wincmd l | 3b | wincmd j | 4b | tabdo windo diffthis" -c "tabfirst" $LOCAL $BASE $REMOTE $MERGED
- git mergetool -t vimdiff1
> Before this commit:
{n,g,}vim -f -d -c 'echon "..."' $LOCAL $REMOTE
> After this commit:
{n,g,}vim -f -c "echo | vertical split | 1b | wincmd l | 3b | tabdo windo diffthis" -c "tabfirst" $LOCAL $BASE $REMOTE $MERGED
- git mergetool -t vimdiff2
> Before this commit:
{n,g,}vim -f -d -c 'wincmd l' $LOCAL $MERGED $REMOTE
> After this commit:
{n,g,}vim -f -c "echo | vertical split | 1b | wincmd l | vertical split | 4b | wincmd l | 3b | tabdo windo diffthis" -c "tabfirst" $LOCAL $BASE $REMOTE $MERGED
- git mergetool -t vimdiff3
> Before this commit:
{n,g,}vim -f -d -c 'hid | hid | hid' $LOCAL $REMOTE $BASE $MERGED
> After this commit:
{n,g,}vim -f -c "echo | 4b | bufdo diffthis" -c "tabfirst" $LOCAL $BASE $REMOTE $MERGED
Despite being different, I have manually verified that they generate the same
layout as before.
Signed-off-by: Fernando Ramos <greenfoo@u92.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jh/builtin-fsmonitor-part2: (150 commits)
t7527: test status with untracked-cache and fsmonitor--daemon
fsmonitor: force update index after large responses
fsmonitor--daemon: use a cookie file to sync with file system
fsmonitor--daemon: periodically truncate list of modified files
t/perf/p7519: add fsmonitor--daemon test cases
t/perf/p7519: speed up test on Windows
t/perf/p7519: fix coding style
t/helper/test-chmtime: skip directories on Windows
t/perf: avoid copying builtin fsmonitor files into test repo
t7527: create test for fsmonitor--daemon
t/helper/fsmonitor-client: create IPC client to talk to FSMonitor Daemon
help: include fsmonitor--daemon feature flag in version info
fsmonitor--daemon: implement handle_client callback
compat/fsmonitor/fsm-listen-darwin: implement FSEvent listener on MacOS
compat/fsmonitor/fsm-listen-darwin: add MacOS header files for FSEvent
compat/fsmonitor/fsm-listen-win32: implement FSMonitor backend on Windows
fsmonitor--daemon: create token-based changed path cache
fsmonitor--daemon: define token-ids
fsmonitor--daemon: add pathname classification
fsmonitor--daemon: implement 'start' command
...
A single Perforce branch might be synced multiple times with different
revision numbers, so it will be seen by Git as completely different
commits. This can be done by the following command:
git p4 sync --branch=NAME //perforce/path...
It is assumed that this command is applied multiple times and that the
Perforce repository changes between command invocations.
In such a situation, git p4 will see multiple Perforce branches with the
same name and different revision numbers. The problem is that, to make a
shelve, the git-p4 script will try to find an "origin" branch if one is
not specified explicitly on the command line. Previously the script
selected any branch with the same name, without regard to the particular
revision number. Later this may cause the command
"git diff-tree -r $rev^ $rev" to fail, so the shelve can't be created
(due to a wrong origin branch/commit).
This commit fixes the heuristic by which git p4 selects the origin
branch: first it tries to select a branch with the same Perforce path
and Perforce revision, and if that fails, it selects a branch with only
the same Perforce path (ignoring the Perforce revision number).
Signed-off-by: Kirill Frolov <k.frolov@samsung.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 9a5315edfd (Merge branch 'js/patch-mode-in-others-in-c',
2020-02-05), Git acquired a built-in implementation of `git add`'s
interactive mode that could be turned on via the config option
`add.interactive.useBuiltin`.
The first official Git version to support this knob was v2.26.0.
In 2df2d81ddd (add -i: use the built-in version when
feature.experimental is set, 2020-09-08), this built-in implementation
was also enabled via `feature.experimental`. The first version with this
change was v2.29.0.
More than a year (and very few bug reports) later, it is time to declare
the built-in implementation mature and to turn it on by default.
We specifically leave the `add.interactive.useBuiltin` configuration in
place, to give users an "escape hatch" in the unexpected case that they
encounter a previously undetected bug in that implementation.
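Using the escape hatch is a single config switch (shown per-repository;
--global works just as well):

    $ git config add.interactive.useBuiltin false
    $ git add -i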
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The scripted version of the interactive mode of `git add` still requires
Perl, but the built-in version does not. Let's only require the PERL
prereq if testing the scripted version.
This addresses a long-standing NEEDSWORK added in 35166b1fb5 (t2016:
add a NEEDSWORK about the PERL prerequisite, 2020-10-07).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The ordering between packs is done according to the MIDX's .rev file.
Notably, the preferred pack sorts ahead of all other packs.

The on-disk representation (described below) of a bitmap is the same
regardless of whether or not that bitmap belongs to a packfile or a MIDX.

== On-disk format

* A header appears at the beginning:

	4-byte signature: :: {'B', 'I', 'T', 'M'}

	2-byte version number (network byte order): ::
		The current implementation only supports version 1
		of the bitmap index (the same one as JGit).

	2-byte flags (network byte order): ::
		The following flags are supported:

		** {empty}
		BITMAP_OPT_FULL_DAG (0x1) REQUIRED: :::
		This flag must always be present. It implies that the
		bitmap index has been generated for a packfile or
		multi-pack index (MIDX) with full closure (i.e. where
		every single object in the packfile/MIDX can find its
		parent links inside the same packfile/MIDX). This is a
		requirement for the bitmap index format, also present in
		JGit, that greatly reduces the complexity of the
		implementation.

		** {empty}
		BITMAP_OPT_HASH_CACHE (0x4): :::
		If present, the end of the bitmap file contains
		`N` 32-bit name-hash values, one per object in the
		pack/MIDX. The format and meaning of the name-hash is
		described below.

	4-byte entry count (network byte order): ::
		The total count of entries (bitmapped commits) in this bitmap index.

	20-byte checksum: ::
		The SHA1 checksum of the pack/MIDX this bitmap index
		belongs to.

* 4 EWAH bitmaps that act as type indexes
+
Type indexes are serialized after the hash cache in the shape
of four EWAH bitmaps stored consecutively (see Appendix A for
the serialization format of an EWAH bitmap).
+
There is a bitmap for each Git object type, stored in the following
order:
+
	- Commits
	- Trees
	- Blobs
	- Tags
+
In each bitmap, the `n`th bit is set to true if the `n`th object
in the packfile or multi-pack index is of that type.
+
The obvious consequence is that the OR of all 4 bitmaps will result
in a full set (all bits set), and the AND of all 4 bitmaps will
result in an empty bitmap (no bits set).

* N entries with compressed bitmaps, one for each indexed commit
+
Where `N` is the total amount of entries in this bitmap index.
Each entry contains the following:

	** {empty}
	4-byte object position (network byte order): ::
		The position **in the index for the packfile or
		multi-pack index** where the bitmap for this commit is
		found.

	** {empty}
	1-byte XOR-offset: ::
		The xor offset used to compress this bitmap. For an entry
		in position `x`, a XOR offset of `y` means that the actual
		bitmap representing this commit is composed by XORing the
		bitmap for this entry with the bitmap in entry `x-y` (i.e.
		the bitmap `y` entries before this one).
+
NOTE: This compression can be recursive. In order to
XOR this entry with a previous one, the previous entry needs
to be decompressed first, and so on.
+
The hard-limit for this offset is 160 (an entry can only be
xor'ed against one of the 160 entries preceding it). This
number is always positive, and hence entries are always xor'ed
with **previous** bitmaps, not bitmaps that will come afterwards
in the index.

	** {empty}
	1-byte flags for this bitmap: ::
		At the moment the only available flag is `0x1`, which hints
		that this bitmap can be re-used when rebuilding bitmap indexes
		for the repository.

	** The compressed bitmap itself, see Appendix A.

* {empty}
TRAILER: ::
	Trailing checksum of the preceding contents.

== Appendix A: Serialization format for an EWAH bitmap

	- 4-byte number of words of the COMPRESSED bitmap, when stored
	- N x 8-byte words, as specified by the previous field
+
This is the actual content of the compressed bitmap.

	- 4-byte position of the current RLW for the compressed
	  bitmap