9af0b8dbe2 (t0000-basic: more commit-tree tests., 2006-04-26) adds
tests for commit-tree that hide the exit code from git behind a pipe,
as described in a378fee5b0 (Documentation: add shell guidelines,
2018-10-05).
Fix the tests to avoid pipes by using a temporary file instead.
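For example (illustrative, not the exact test code), the pattern changes
from piping git's output to capturing it in a file:

    # before: the pipe hides the exit status of commit-tree
    #   git commit-tree $tree </dev/null | sed -e 's/^/commit: /'
    # after: capture the output in a temporary file, then post-process it
    git commit-tree $tree </dev/null >commit.out &&
    sed -e 's/^/commit: /' commit.out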
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach "test_pause" and "debug" helpers to allow using the HOME and
TERM environment variables the user usually uses.
* pb/test-use-user-env:
test-lib-functions: keep user's debugger config files and TERM in 'debug'
test-lib-functions: optionally keep HOME, TERM and SHELL in 'test_pause'
test-lib-functions: use 'TEST_SHELL_PATH' in 'test_pause'
The "git apply -3" code path learned not to bother the lower level
merge machinery when the three-way merge can be trivially resolved
without the content level merge.
* jc/trivial-threeway-binary-merge:
apply: resolve trivial merge without hitting ll-merge with "--3way"
t1400.190 sometimes fails or even hangs because of the way it uses
fifos. Our goal is to interactively read and write lines from
update-ref, so we have two fifos, in and out. We open a descriptor
connected to "in" and redirect output to that, so that update-ref does
not see EOF as it would if we opened and closed it for each "echo" call.
But we don't do the same for the output. This leads to a race where our
"read response <out" has not yet opened the fifo, but update-ref tries
to write to it and gets SIGPIPE. This can result in the test failing, or
worse, hanging as we wait forever for somebody to write to the pipe.
This is the same problem we fixed in 4783e7ea83 (t0008: avoid SIGPIPE
race condition on fifo, 2013-07-12), and we can fix it the same way, by
opening a second long-running descriptor.
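A rough sketch of the resulting idiom (simplified; the real test drives
more verbs and checks the responses it reads back):

    mkfifo in out &&
    (git update-ref --stdin <in >out &) &&
    exec 8>in 9<out &&            # hold both fifos open for the whole test
    echo start >&8 &&
    read response <&9 &&
    echo abort >&8 &&
    read response <&9 &&
    exec 8>&- 9<&-                # closing "in" lets update-ref see EOF and exit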
Before this patch, running:
./t1400-update-ref.sh --run=1,190 --stress
failed or hung within a few dozen iterations. After it, I ran it for
several hundred without problems.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The v2 ls-refs command may receive extra arguments from the client, one
per pkt-line. The spec is pretty clear that the arguments must come from
a specified set, but we silently ignore any unknown entries. For a
well-behaved client this doesn't matter, but it makes testing and
debugging more confusing. Let's tighten this up to match the spec.
In theory this liberal behavior _could_ be useful for extending the
protocol. But:
- every other part of the protocol requires that the server first
indicate that it supports the argument; this includes the fetch and
object-info commands, plus the "unborn" capability added to ls-refs
itself
- it's not a very good extension mechanism anyway; without the server
advertising support, clients would have no idea if the argument was
silently ignored, or accepted and simply had no effect
So we're not really losing anything by tightening this.
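A test for this can follow the existing t5701-git-serve.sh idiom; roughly
(the exact capability lines of a real request may differ):

    # build a v2 ls-refs request with an argument the server does not know
    printf '%s\n' command=ls-refs 0001 no-such-arg 0000 |
    test-tool pkt-line pack >in &&
    test_must_fail test-tool serve-v2 --stateless-rpc <in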
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our table of v2 "capabilities" contains everything we might tell the
client we support. But there are differences in how we expect the client
to respond. Some of the entries are true capabilities (i.e., we expect
the client to say "yes, I support this"), and some are ones we expect
them to send as commands (with "command=ls-refs" or similar).
When we receive a capability used as a command, we complain about that.
But when we receive a command used as a capability (e.g., just "ls-refs"
in a pkt-line by itself), we silently ignore it.
This isn't really hurting anything (clients shouldn't send it, and we'll
ignore it), but we can tighten up the protocol to match what we expect
to happen.
There are two new tests here. The first one checks a capability used as
a command, which already passes. The second tests a command as a
capability, which this patch fixes.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we see a line from the client like "command=ls-refs", we parse
everything after the equals sign as a capability, which we check against
our capabilities table. If we don't recognize the command (e.g.,
"command=foo"), we'll reject it.
But in parse_command(), we use the same get_capability() parser for
parsing non-command lines. So if we see "command=ls-refs=foo", we will
feed "ls-refs=foo" to get_capability(), which will say "OK, that's
ls-refs, with value 'foo'". But then we simply ignore the value
entirely.
The client is violating the spec here, which says:
command = PKT-LINE("command=" key LF)
key = 1*(ALPHA | DIGIT | "-_")
I.e., the key is not even allowed to have an equals sign in it. Whereas
a real non-command capability does allow a value:
capability = PKT-LINE(key[=value] LF)
So by reusing the same get_capability() parser, we are mixing up the
"key" and "capability" tokens. However, since that parser tells us
whether it saw an "=", we can still use it; we just need to reject any
input that produces a non-NULL value field.
The current behavior isn't really hurting anything (the client should
never send such a request, and if it does, we just ignore the "value"
part). But since it does violate the spec, let's tighten it up to
prevent any surprising behavior.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Because each "ref-prefix" capability from the client comes in its own
pkt-line, there's no limit to the number of them that a misbehaving
client may send. We read them all into a strvec, which means the client
can waste arbitrary amounts of our memory by just sending us "ref-prefix
foo" over and over.
One possible solution is to just drop the connection when the limit is
reached. If we set it high enough, then only misbehaving or malicious
clients would hit it. But "high enough" is vague, and it's unfriendly if
we guess wrong and a legitimate client hits this.
But we can do better. Since supporting the ref-prefix capability is
optional anyway, the client has to further cull the response based on
their own patterns. So we can simply ignore the patterns once we cross a
certain threshold. Note that we have to ignore _all_ patterns, not just
the ones past our limit (since otherwise we'd send too little data).
The limit here is fairly arbitrary, and probably much higher than anyone
would need in practice. It might be worth limiting it further, if only
because we check it linearly (so with "m" local refs and "n" patterns,
we do "m * n" string comparisons). But if we care about optimizing this,
an even better solution may be a more advanced data structure anyway.
I didn't bother making the limit configurable, since it's so high and
since Git should behave correctly in either case. It wouldn't be too
hard to do, but it makes both the code and documentation more complex.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* ab/serve-cleanup:
upload-pack: document and rename --advertise-refs
serve.[ch]: remove "serve_options", split up --advertise-refs code
{upload,receive}-pack tests: add --advertise-refs tests
serve.c: move version line to advertise_capabilities()
serve: move transfer.advertiseSID check into session_id_advertise()
serve.[ch]: don't pass "struct strvec *keys" to commands
serve: use designated initializers
transport: use designated initializers
transport: rename "fetch" in transport_vtable to "fetch_refs"
serve: mark has_capability() as static
We check that git.html exists, regardless of the page the user wants to open.
Checking whether the requested page exists instead gives us a smoother user
experience in two use cases:
1) The requested page doesn't exist
When a git command fails, most users reasonably expect git to produce an
error message on the standard error stream. In this case, however, we pass
the filepath to git web--browse, which passes it on to a browser (or a
helper program like xdg-open or start that should in turn open a browser)
without any error, and many GUI-based browsers or helpers won't output such
a message onto the standard error stream.
Especially the helper programs tend to show the corresponding error message in
a message box and wait for user input before exiting. This leaves users in
interactive console sessions without an error message in their console,
without a console prompt and without the help page they expected.
2) git.html is missing for some reason, but the user asked for some other page
We currently refuse to show any local HTML help page when we can't find
git.html, even if the requested help page exists. If we check for the
requested page instead, we can show the user all available pages and only
error out on those that don't exist.
Signed-off-by: Matthias Aßhauer <mha1993@live.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In preparation for a future patch that will allow building with
Unix sockets on Windows, work around a couple of issues in the
MinGW-w64 compatibility layer.
test -S is not able to detect that a file is a socket, so use
test -e instead (through a library function).
`mkdir -m` can't represent a valid ACL directly and fails with
permission problems, so instead call mkdir followed by chmod, which
has been enhanced to do so.
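For example, a setup step along these lines (the path name is
illustrative):

    # before: fails under MinGW-w64, which cannot express the mode as an ACL
    #   mkdir -m 0700 "$sockdir"
    # after: create the directory first, then tighten its permissions
    mkdir "$sockdir" &&
    chmod 0700 "$sockdir"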
The last invocation of mkdir would likely need the same treatment,
but SYMLINK is unlikely to be enabled on Windows, so it has been
punted for now.
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a test to catch breakages in the 'git bisect visualize' command.
Signed-off-by: Miriam Rubio <mirucam@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There is a gap in 'git bisect run' test coverage related to error
exits. Add two tests to cover these error cases.
Signed-off-by: Miriam Rubio <mirucam@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
9512177b68 ("rebase: add --quit to cleanup rebase, leave everything
else untouched", 2016-11-12) seems to have copied the --abort tests
but added two separate tests for the two rebase backends rather than
adding a single test into the existing testrebase() function.
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The existing tests only check that HEAD points to the correct
commit after aborting; they do not check that the original branch
is checked out.
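A check along these lines is enough to catch ending up on the wrong
branch (the branch name is illustrative):

    git symbolic-ref HEAD >actual &&
    echo refs/heads/to-rebase >expect &&
    test_cmp expect actual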
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
At the end of the test we expect the state directory to be missing,
but the tests only check that it is not a directory.
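Using the test-lib helper makes the stronger check explicit:

    # "test ! -d" would also pass if a regular file of that name existed;
    # assert that the state directory is gone entirely
    test_path_is_missing "$dotest"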
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
$dotest holds the name of the rebase state directory and was named
after the state directory used at the time the test was added. As we
no longer use that name, rename the variable to reflect its purpose.
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This provides better diagnostics if a test fails.
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Simplify the setup by using test_commit. Note that this replaces the
branch "pre-rebase" with a tag of the same name. "pre-rebase" is only
used as a fixed reference point so it makes sense to use a tag rather
than a branch.
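test_commit bundles the whole setup, so the tag it creates can serve as
the fixed reference point:

    test_commit pre-rebase   # creates pre-rebase.t, commits it, tags it "pre-rebase"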
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 97b88dd58c ("git-rebase.sh: Fix --merge --abort failures when
path contains whitespace", 2008-05-04) started running these tests in
a subdirectory with a space in its name. At that time $TEST_DIRECTORY
did not contain a space but shortly after in 4a7aaccd83 ("Rename the
test trash directory to contain spaces.", 2008-05-04) $TEST_DIRECTORY
was changed to contain a space so we no longer need to run these tests
in a subdirectory.
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As was noted in 1a85b49b87 (parse-options: make OPT_ARGUMENT() more
useful, 2019-03-14) there's only ever been one user of the
OPT_ARGUMENT(), that user was added in 20de316e33 (difftool: allow
running outside Git worktrees with --no-index, 2019-03-14).
The OPT_ARGUMENT() feature itself was added way back in
580d5bffde (parse-options: new option type to treat an option-like
parameter as an argument., 2008-03-02), but as discussed in
1a85b49b87 wasn't used until 20de316e33 in 2019.
Now that the preceding commit has migrated this code over to using
"struct strvec" to manage the "args" member of a "struct
child_process", we can just use that directly instead of relying on
OPT_ARGUMENT.
This is a minor change in behavior: if we pass --no-index, we now
always pass it as the first argument, whereas before we'd pass it in
whatever position the caller did. Preserving that position was the
real value of OPT_ARGUMENT(), but as it turns out we didn't need that
either. We can always inject --no-index as the first argument; the
other end will parse it just the same.
Note that we cannot remove the "out" and "cpidx" members of "struct
parse_opt_ctx_t" added in 580d5bffde; while they were introduced with
OPT_ARGUMENT(), we have since used them for other things.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In be5d88e112 (test-tool run-command: learn to run (parts of) the
testsuite, 2019-10-04) an init pattern was added that would use
TESTSUITE_INIT, but then promptly memset() everything back to 0. We'd
then set the "dup" on the two string lists.
Our setting of "next" to "-1" thus did nothing, we'd reset it to "0"
before using it. Let's set it to "0" instead, and trust the
"STRING_LIST_INIT_DUP" to set "strdup_strings" appropriately for us.
Note that while we compile this code, there's no in-tree user for the
"testsuite" target being modified here anymore, see the discussion at
and around <nycvar.QRO.7.76.6.2109091323150.59@tvgsbejvaqbjf.bet>[1].
1. https://lore.kernel.org/git/nycvar.QRO.7.76.6.2109091323150.59@tvgsbejvaqbjf.bet/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove untracked files that are unwanted after they are done being used.
While the set of cases in this patch is certainly far from
comprehensive, it was motivated by some work to see what the fallout
would be if we were to make the removal of untracked files as a side
effect of other commands into an error. Some cases were a bit more
involved than the testcase changes included in this patch, but the ones
included here represent the simple cases. While this patch is not that
important since we are not changing the behavior of those other commands
into an error in the near term, I thought these changes were useful
anyway as an explicit documentation of the intent that these untracked
files are no longer useful.
Acked-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Acked-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These two have fallen out of use with the SHA-256 migration.
The last use of $_x40 was removed in fc7e73d7ef (t4013: improve
diff-post-processor logic, 2020-08-21).
The last use of $_z40 was removed in 7a868c51c2 (t5562: use $ZERO_OID,
2019-12-21), but it was then needlessly refactored to be hash-agnostic
in 192b517589 (t: use hash-specific lookup tables to define test
constants, 2020-02-22). We can just remove it.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If a user deletes a file and places a directory of untracked files
there, then stashes all these changes, the untracked directory of files
cannot be restored until after the corresponding file in the way is
removed. So, restore changes to tracked files before restoring
untracked files.
There is no counterpart problem to worry about with the user deleting an
untracked file and then adding a tracked one in its place. Git does not
track untracked files, and so will not know the untracked file was
deleted, and thus won't be able to stash the removal of that file.
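A repro sketch of the scenario being fixed (path names are illustrative):

    echo content >tracked && git add tracked && git commit -m base &&
    git rm tracked &&
    mkdir tracked &&
    echo new >tracked/file &&
    git stash push --include-untracked &&
    # restoring tracked changes first removes the file "tracked", making
    # room to recreate the untracked directory of the same name
    git stash pop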
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a file is removed from the cache, but there is a file of the same
name present in the working directory, we would normally treat that file
in the working directory as untracked. However, in the case of stash,
doing that would prevent a simple 'git stash push', because the untracked
file would be in the way of restoring the deleted file.
git stash, however, blindly assumes that whatever is in the working
directory for a deleted file is wanted and passes that path along to
update-index. That causes problems when the working directory contains
a directory with the same name as the deleted file. Add some code for
this special case that will avoid passing directory names to
update-index.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are three tests here, because the second bug is documented with
two tests: a file -> directory change and a directory -> file change.
The reason for the two tests is just to verify that both are indeed
broken but that both will be fixed by the same simple change (which will
be provided in a subsequent patch).
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a client requests the v2 protocol over HTTP, they set the
Git-Protocol header. Webservers will generally make that available to
our CGI as HTTP_GIT_PROTOCOL in the environment. However, that's not
sufficient for upload-pack, etc, to respect it; they look in
GIT_PROTOCOL (without the HTTP_ prefix).
Either the webserver or the CGI is responsible for relaying that HTTP
header into the GIT_PROTOCOL variable. Traditionally, our tests have
configured the webserver to do so, but that's a burden on the server
admin. We can make this work out of the box by having the http-backend
CGI copy the contents of HTTP_GIT_PROTOCOL to GIT_PROTOCOL.
There are no new tests here. By removing the SetEnvIf line from our
test Apache config, we're now relying on this behavior of http-backend
to trigger the v2 protocol there (and there are numerous tests that fail
if this doesn't work).
There is one subtlety here: we copy HTTP_GIT_PROTOCOL only if there is
no existing GIT_PROTOCOL variable. That leaves the webserver admin free
to override the client's decision if they choose. This is unlikely to be
useful in practice, but is more flexible. And indeed, it allows the
v2-to-v0 fallback test added in the previous commit to continue working.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since we use the v2 protocol by default, the connection of a v2 client
to a v2 server is well covered by the test suite. And with the
GIT_TEST_PROTOCOL_VERSION knob, we can easily test a v0 client
connecting to a v2-aware server (which will then just speak v0). But we
have no regular tests that a v2 client, when encountering a non-v2-aware
server, will correctly fall back to using v0.
In theory this is a job for the cross-version tests in t/interop, but:
- they cover only git:// and file:// clones
- they are not part of the usual test suite, so nobody ever runs them
anyway
Since using v2 over http requires configuring the web server to pass
along the Git-Protocol header, we can easily create a situation where
the server does not respect the v2 probe, and the conversation falls
back to v0.
This works just fine. This new test is not about fixing any particular
bug, but just making sure that the system works (and continues to work)
as expected.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Doc update plus improved error reporting.
* jk/log-warn-on-bogus-encoding:
docs: use "character encoding" to refer to commit-object encoding
logmsg_reencode(): warn when iconv() fails
After "git clone --recurse-submodules", all submodules are cloned
but they are not by default recursed into by other commands. With
submodule.stickyRecursiveClone configuration set, submodule.recurse
configuration is set to true in a repository created by "clone"
with "--recurse-submodules" option.
* mk/clone-recurse-submodules:
clone: set submodule.recurse=true if submodule.stickyRecursiveClone enabled
The output from "git fast-export", when its anonymization feature
is in use, showed an annotated tag incorrectly.
* tk/fast-export-anonymized-tag-fix:
fast-export: fix anonymized tag using original length
Fixes on usage message from "git commit-graph".
* ab/commit-graph-usage:
commit-graph: show "unexpected subcommand" error
commit-graph: show usage on "commit-graph [write|verify] garbage"
commit-graph: early exit to "usage" on !argc
multi-pack-index: refactor "goto usage" pattern
commit-graph: use parse_options_concat()
commit-graph: remove redundant handling of -h
commit-graph: define common usage with a macro
Even when running "git send-email" without its own threaded
discussion support, a threading related header in one message is
carried over to the subsequent message to result in an unwanted
threading, which has been corrected.
* mh/send-email-reset-in-reply-to:
send-email: avoid incorrect header propagation
Buggy tests could damage repositories outside the throw-away test
area we created. We now by default export GIT_CEILING_DIRECTORIES
to limit the damage from such a stray test.
* sg/set-ceiling-during-tests:
test-lib: set GIT_CEILING_DIRECTORIES to protect the surrounding repository
"git upload-pack" which runs on the other side of "git fetch"
forgot to take the ref namespaces into account when handling
want-ref requests.
* ka/want-ref-in-namespace:
docs: clarify the interaction of transfer.hideRefs and namespaces
upload-pack.c: treat want-ref relative to namespace
t5730: introduce fetch command helper
The advice message that "git cherry-pick" gives when it asks
conflicted replay of a commit to be resolved by the end user has
been updated.
* zh/cherry-pick-advice:
cherry-pick: use better advice message
The t5562 script occasionally takes 60 extra seconds to complete due to
a race condition in the invoke-with-content-length.pl helper.
The way it's supposed to work is this:
- we set up a SIGCLD handler
- we kick off http-backend and write to it with a set content-length,
but _don't_ close the pipe
- we sleep for 60 seconds, assuming that SIGCLD from http-backend
finishing will interrupt us
- after the sleep finishes (whether by running the full 60 seconds or because it was
interrupted by the signal), we check a flag to see if our SIGCLD
handler was called. If not, then we complain.
This usually completes immediately, because the signal interrupts our
sleep. But very occasionally the child process dies _before_ we hit the
sleep, so we don't realize it. The test still completes successfully
(because our $exited flag is set), but it takes an extra 60 seconds.
There's no way to check the flag and sleep atomically. So the best we
can do with this approach is to sleep in smaller chunks (say, 1 second)
and check the flag incrementally. Then we waste a maximum of 1 second if
we lose the race. This was proposed in:
https://lore.kernel.org/git/20190218205028.32486-1-max@max630.net/
and it does work. But we can do better.
Instead of blocking on sleep and waiting for the child signal to
interrupt us, we can block on the child exiting and set an alarm signal
to trigger the timeout.
This lets us exit the script immediately when the child behaves (with no
race possible), and wait a maximum of 60 seconds when it doesn't.
Note one small subtlety: perl is very willing to restart the waitpid()
call after the alarm is delivered, even if we've thrown an exception via
die. "perldoc -f alarm" claims you can get around this with an eval/die
combo (and even has some example code), but it doesn't seem to work for
me with waitpid(); instead, we continue waiting until the child exits.
So instead, we'll instruct the child process to exit in the alarm
handler itself. In the original code this was done by calling
close($out). That would continue to work, since our child is always
http-backend, which should exit when its stdin closes. But we can be
even more robust against a hung or confused child by sending a KILL
signal, which should terminate it immediately.
Reported-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The hard work was already done with 'git merge' and the ORT strategy.
Just add extra tests to see that we get the expected results in the
non-conflict cases.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add tests to check that cherry-pick and rebase behave the same in the
sparse-index case as in the full index cases. These tests are agnostic
to GIT_TEST_MERGE_ALGORITHM, so a full CI test suite will check both the
'ort' and 'recursive' strategies on this test.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Merge conflicts happen often enough to want to avoid expanding a sparse
index when they happen, as long as those conflicts are within the
sparse-checkout cone. If a conflict exists outside of the
sparse-checkout cone, then we still need to expand before iterating over
the index entries. This is critical to do in advance because of how the
original_cache_nr is tracked to allow inserting and replacing cache
entries.
Iterate over the conflicted files and check if any paths are outside of
the sparse-checkout cone. If so, then expand the full index.
Add a test that demonstrates that we do not expand the index, even when
we hit a conflict within the sparse-checkout cone.
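The check can rely on the trace2 region emitted when the sparse index is
expanded, in the spirit of the t1092 helpers (repository, branch and
region names below are illustrative and may differ from the actual test):

    GIT_TRACE2_EVENT="$(pwd)/trace2.txt" \
        git -C sparse-checkout-repo merge -m merged feature &&
    # assert that ensure_full_index() was never entered
    test_region ! index ensure_full_index trace2.txt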
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Allow 'git merge' to operate without expanding a sparse index, at least
not immediately. The index still will be expanded in a few cases:
1. If the merge strategy is 'recursive', then we enable
command_requires_full_index at the start of the merge_recursive()
method. We expect sparse-index users to also have the 'ort' strategy
enabled.
2. With the 'ort' strategy, if the merge results in a conflicted file,
then we expand the index before updating the working tree. The loop
that iterates over the worktree replaces index entries and tracks
'original_cache_nr' which can become completely wrong if the index
expands in the middle of the operation. This safety valve is
important before that loop starts. A later change will focus this
to only expand if we indeed have a conflict outside of the
sparse-checkout cone.
3. Other merge strategies are executed as a 'git merge-X' subcommand,
and those strategies are currently protected with the
'command_requires_full_index' guard.
Some test updates are required, including a mistaken 'git checkout -b'
that did not specify the base branch, causing merges to be fast-forward
merges.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In preparation for a subsequent commit that migrates code using
add_submodule_odb() to repo_submodule_init(), teach
repo_submodule_init() to support submodules with unabsorbed gitdirs.
(See the documentation for "git submodule absorbgitdirs" for more
information about absorbed and unabsorbed gitdirs.)
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git branch -D <branch>" used to refuse to remove a broken branch
ref that points at a missing commit, which has been corrected.
* rs/branch-allow-deleting-dangling:
branch: allow deleting dangling branches with --force
The delayed checkout code path in "git checkout" etc. was chatty
even when --quiet and/or --no-progress options were given.
* mt/quiet-with-delayed-checkout:
checkout: make delayed checkout respect --quiet and --no-progress
"git diff --relative" segfaulted and/or produced incorrect result
when there are unmerged paths.
* dd/diff-files-unmerged-fix:
diff-lib: ignore paths that are outside $cwd if --relative asked
"git maintenance" scheduler fix for macOS.
* js/maintenance-launchctl-fix:
maintenance: skip bootout/bootstrap when plist is registered
maintenance: create `launchctl` configuration using a lock file
The previous patches have made "git grep" no longer need to add
submodule ODBs as alternates, at least for the code paths tested in
t7814. Demonstrate this by making adding a submodule ODB as an alternate
fatal in this test.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Reviewed-by: Emily Shaffer <emilyshaffer@google.com>
Reviewed-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach Git to add submodule ODBs as alternates to the object store of
the_repository only upon the first access of an object not in
the_repository, and not when add_submodule_odb() is called.
This provides a means of gradually migrating from accessing a
submodule's object through alternates to accessing a submodule's object
by explicitly passing its repository object. Any Git command can declare
that it might access submodule objects by calling add_submodule_odb()
(as they do now), but the submodule ODBs themselves will not be added
until needed, so individual commands and/or combinations of arguments
can be migrated one by one.
[The advantages of explicit repository-object passing are code clarity (it
is clear which repository an object read is from), performance (there is
no need to linearly search through all submodule ODBs whenever an object
is accessed from any repository, whether superproject or submodule), and
the possibility of future features like partial clone submodules (which
right now is not possible because if an object is missing, we do not
know which repository to lazy-fetch into).]
This commit also introduces an environment variable that a test may set
to make the actual registration of alternates fatal, in order to
demonstrate that its codepaths do not need this registration.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Reviewed-by: Emily Shaffer <emilyshaffer@google.com>
Reviewed-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When changing the scope of a sparse-checkout using cone mode, we might
have some tracked directories go out of scope. The current logic removes
the tracked files from within those directories, but leaves the ignored
files within those directories. This is a bit unexpected to users who
have given input to Git saying they don't need those directories
anymore.
This is something that is new to the cone mode pattern type: the user
has explicitly said "I want these directories and _not_ those
directories." The typical sparse-checkout patterns more generally apply
to "I want files with with these patterns" so it is natural to leave
ignored files as they are. This focus on directories in cone mode
provides us an opportunity to change the behavior.
Leaving these ignored files in the sparse directories makes it
impossible to gain performance benefits in the sparse index. When we
track into these directories, we need to know if the files are ignored
or not, which might depend on the _tracked_ .gitignore file(s) within
the sparse directory. This depends on the indexed version of the file,
so the sparse directory must be expanded.
We must take special care to look for untracked, non-ignored files in
these directories before deleting them. We do not want to delete any
meaningful work that the users were doing in those directories and
perhaps forgot to add and commit before switching sparse-checkout
definitions. Since those untracked files might be code files that
generated ignored build output, also do not delete any ignored files
from these directories in that case. The users can recover their state
by resetting their sparse-checkout definition to include that directory
and continue. Alternatively, they can see the warning that is presented
and delete the directory themselves to regain the performance they
expect.
By deleting the sparse directories when changing scope (or running 'git
sparse-checkout reapply') we regain these performance benefits as if the
repository was in a clean state.
Since these ignored files are frequently build output or helper files
from IDEs, the users should not need the files now that the tracked
files are removed. If the tracked files reappear, then they will have
newer timestamps than the build artifacts, so the artifacts will need to
be regenerated anyway.
Use the sparse-index as a data structure in order to find the sparse
directories that can be safely deleted. Re-expand the index to a full
one if it was full before.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The sparse index is tested with the FS Monitor hook and extension since
f8fe49e (fsmonitor: integrate with sparse index, 2021-07-14). This test
was very fragile because it shared an index across sparse and non-sparse
behavior. Since that expansion and contraction could cause the index to
lose its FS Monitor bitmap and token, behavior is fragile to changes in
'git sparse-checkout set'.
Rewrite the test to use two clones of the original repo: full and
sparse. This allows us to also keep the test files (actual, expect,
trace2.txt) out of the repos we are testing with 'git status'.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix a regression in my c95e3a3f0b (send-email: move trivial config
handling to Perl, 2021-05-28) where we'd pick the first config key out
of multiple defined ones, instead of using the normal "last key wins"
semantics of "git config --get".
This broke e.g. cases where a .git/config would have a different
sendemail.smtpServer than ~/.gitconfig. We'd pick the ~/.gitconfig
over .git/config, instead of preferring the repository-local
version. The same would go for /etc/gitconfig etc.
The full list of impacted config keys (the %config_settings values
which are references to scalars, not arrays) is:
sendemail.smtpencryption
sendemail.smtpserver
sendemail.smtpserverport
sendemail.smtpuser
sendemail.smtppass
sendemail.smtpdomain
sendemail.smtpauth
sendemail.smtpbatchsize
sendemail.smtprelogindelay
sendemail.tocmd
sendemail.cccmd
sendemail.aliasfiletype
sendemail.envelopesender
sendemail.confirm
sendemail.from
sendemail.assume8bitencoding
sendemail.composeencoding
sendemail.transferencoding
sendemail.sendmailcmd
I.e. having any of these set in, say, both ~/.gitconfig and the
in-repo .git/config regressed in v2.33.0 to prefer the --global one
over the --local one.
To test this, add a test of config priority for one of these config
variables; most don't have tests at all, but there was an existing one
for sendemail.8bitEncoding.
The "git config" (instead of "test_config") is somewhat of an
anti-pattern, but follows established conventions in
t9001-send-email.sh, likewise with any other pattern or idiom in this
test.
The populating of home/.gitconfig and setting of HOME= is copied from
a test in t0017-env-helper.sh added in 1ff750b128 (tests: make
GIT_TEST_GETTEXT_POISON a boolean, 2019-06-21). This test fails
without this bugfix, but now it works.
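The priority being tested follows the ordinary "last key wins" config
semantics; with hypothetical values it boils down to:

    mkdir home &&
    git config -f home/.gitconfig sendemail.smtpServer global.example.com &&
    git config sendemail.smtpServer local.example.com &&
    # the repository-local value is read last and must win, both for
    # "git config --get" and (again, after this fix) for send-email
    HOME="$(pwd)/home" git config --get sendemail.smtpServer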
Reported-by: Eli Schwartz <eschwartz@archlinux.org>
Tested-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The existing mechanism for scheduling background maintenance is done
through cron. On Linux systems managed by systemd, systemd provides an
alternative to schedule recurring tasks: systemd timers.
The main motivations to implement systemd timers in addition to cron
are:
* cron is optional and Linux systems running systemd might not have it
installed.
* The execution of `crontab -l` can tell us if cron is installed but not
if the daemon is actually running.
* With systemd, each service is run in its own cgroup and its logs are
tagged by the service inside journald. With cron, all scheduled tasks
are running in the cron daemon cgroup and all the logs of the
user-scheduled tasks appear to belong to the system cron
service.
Concretely, a user that doesn’t have access to the system logs won’t
have access to the log of their own tasks scheduled by cron whereas
they will have access to the log of their own tasks scheduled by
systemd timer.
Although `cron` attempts to send email, that email may go unseen by
the user because these days, local mailboxes are not heavily used
anymore.
In order to schedule git maintenance, we need two unit template files:
* ~/.config/systemd/user/git-maintenance@.service
to define the command to be started by systemd and
* ~/.config/systemd/user/git-maintenance@.timer
to define the schedule at which the command should be run.
Those units are templates that are parameterized by the frequency.
Based on those templates, 3 timers are started:
* git-maintenance@hourly.timer
* git-maintenance@daily.timer
* git-maintenance@weekly.timer
The command launched by those three timers is the same as with the
other scheduling methods:
/path/to/git for-each-repo --exec-path=/path/to
--config=maintenance.repo maintenance run --schedule=%i
with the full path for git to ensure that the version of git launched
for the scheduled maintenance is the same as the one used to run
`maintenance start`.
The timer unit contains `Persistent=true` so that, if the computer is
powered down when a maintenance task should run, the task will be run
when the computer is back powered on.
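Once registered, the timers can be inspected with the usual systemd user
commands, e.g.:

    systemctl --user list-timers 'git-maintenance@*'
    systemctl --user cat git-maintenance@daily.timer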
Signed-off-by: Lénaïc Huard <lenaic@lhuard.fr>
Acked-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Depending on the system, different schedulers can be used to schedule
the hourly, daily and weekly executions of `git maintenance run`:
* `launchctl` for MacOS,
* `schtasks` for Windows and
* `crontab` for everything else.
`git maintenance start` now has an option to let the end-user explicitly
choose which scheduler they want to use:
`--scheduler=auto|crontab|launchctl|schtasks`.
When `git maintenance start --scheduler=XXX` is run, it not only
registers `git maintenance run` tasks in the scheduler XXX, it also
removes the `git maintenance run` tasks from all the other schedulers to
ensure we cannot have two schedulers launching concurrent identical
tasks.
The default value is `auto` which chooses a suitable scheduler for the
system.
`git maintenance stop` doesn't have any `--scheduler` parameter because
this command will try to remove the `git maintenance run` tasks from all
the available schedulers.
Signed-off-by: Lénaïc Huard <lenaic@lhuard.fr>
Acked-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The 'debug' function in test-lib-functions.sh is used to invoke a
debugger at a specific line in a test. It inherits the value of HOME and
TERM set by 'test-lib.sh': HOME="$TRASH_DIRECTORY" and TERM=dumb.
Changing the value of HOME means that any customization configured in
a developer's debugger configuration file (like $HOME/.gdbinit or
$HOME/.lldbinit) is not available in the debugger invoked by
'debug'.
Changing the value of TERM to 'dumb' means that colored output
is disabled in the debugger.
To make the debugging experience with 'debug' more pleasant, leverage
the variable USER_HOME, added in the previous commit, to copy a
developer's ~/.gdbinit and ~/.lldbinit to the test HOME. We do not set
HOME to USER_HOME as in 'test_pause' to avoid user configuration in
$USER_HOME/.gitconfig from interfering with the command being debugged.
Also, add a flag to launch the debugger with the original value of
TERM, and add the same warning as for 'test_pause'.
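Typical use inside a test body stays the same as documented in t/README,
e.g.:

    # wrap the command you want to step through in a debugger
    debug git checkout master

(The helper also takes options to select a debugger other than gdb; see
test-lib-functions.sh for the full list.)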
Helped-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Acked-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The 'test_pause' function, which is designed to help interactive
debugging and exploration of tests, currently inherits the value of HOME
and TERM set by 'test-lib.sh': HOME="$TRASH_DIRECTORY" and TERM=dumb. It
also invokes the shell defined by TEST_SHELL_PATH, which defaults to
/bin/sh (through SHELL_PATH).
Changing the value of HOME means that any customization configured in a
developer's shell startup files and any Git aliases defined in their
global Git configuration file are not available in the shell invoked by
'test_pause'.
Changing the value of TERM to 'dumb' means that colored output
is disabled for all commands in that shell.
Using /bin/sh as the shell invoked by 'test_pause' is not ideal since
some platforms (e.g. Debian and derivatives) use Dash as /bin/sh, and
this shell is usually compiled without readline support, which makes for
a poor interactive command line experience.
To make the interactive command line experience in the shell invoked by
'test_pause' more pleasant, save the values of HOME and TERM in
USER_HOME and USER_TERM before changing them in test-lib.sh, and add
options to 'test_pause' to optionally use these variables to invoke the
shell. Also add an option to invoke SHELL instead of TEST_SHELL_PATH, so
that the developer's interactive shell is used.
We use options instead of changing the behaviour unconditionally since
these three variables can slightly change command behaviour. Moreover,
using the original HOME means commands could overwrite files in a user's
home directory. Be explicit about these caveats in the new 'Usage'
section in test-lib-functions.sh.
Finally, add '[options]' to the test_pause synopsis in t/README, and
mention that the full list of helper functions and their options can be
found in test-lib-functions.sh.
Helped-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Acked-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
3f824e91c8 (t/Makefile: introduce TEST_SHELL_PATH, 2017-12-08)
made it easy to use a different shell for the tests than 'SHELL_PATH'
used at compile time. But 'test_pause' still invokes 'SHELL_PATH'.
If TEST_SHELL_PATH is set, invoke that shell in 'test_pause' for
consistency.
Suggested-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Acked-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The ll_binary_merge() function assumes that the ancestor blob is
different from either side of the new versions, and always fails
the merge in conflict, unless -Xours or -Xtheirs is in effect.
The normal "merge" machineries all resolve the trivial cases
(e.g. if our side changed while their side did not, the result
is ours) without triggering the file-level merge drivers, so the
assumption is warranted.
The code path in "git apply --3way", however, does not check for
the trivial three-way merge situation and always calls the
file-level merge drivers. This used to be perfectly OK back
when we always first attempted a straight patch application and
used the three-way code path only as a fallback. Any binary
patch that can be applied as a trivial three-way merge (e.g. the
patch is based exactly on the version we happen to have) would
always cleanly apply, so the ll_binary_merge() that is not
prepared to see the trivial case would not have to handle such a
case.
This no longer is true after we made "--3way" mean "first try
three-way and then fall back to straight application"; as a result,
"git apply -3" on a binary patch that is based on the current
version no longer applies.
Teach "git apply -3" to first check for the trivial merge cases
and resolve them without hitting the file-level merge drivers.
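A sketch of the now-working scenario (file names and contents are
illustrative):

    printf '\000\001\002' >file.bin &&
    git add file.bin && git commit -m base &&
    printf '\000\003\004' >file.bin &&
    git commit -am update &&
    git format-patch --binary -1 --stdout >update.patch &&
    git checkout -b side HEAD^ &&
    # our file.bin is identical to the patch preimage, so the three-way
    # merge is trivial and the binary merge driver is never consulted
    git apply -3 update.patch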
Signed-off-by: Jerry Zhang <jerry@skydio.com>
[jc: stolen tests from Jerry's patch]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Command line completion updates.
* fc/completion-updates:
completion: bash: add correct suffix in variables
completion: bash: fix for multiple dash commands
completion: bash: fix for suboptions with value
completion: bash: fix prefix detection in branch.*
Various bugs in "git rebase -r" have been fixed.
* pw/rebase-r-fixes:
rebase -r: fix merge -c with a merge strategy
rebase -r: don't write .git/MERGE_MSG when fast-forwarding
rebase -i: add another reword test
rebase -r: make 'merge -c' behave like reword
Checking out all the paths from HEAD during the last conflicted
step in "git rebase" and continuing would cause the step to be
skipped (which is expected), but leaves MERGE_MSG file behind in
$GIT_DIR and confuses the next "git commit", which has been
corrected.
* pw/rebase-skip-final-fix:
rebase --continue: remove .git/MERGE_MSG
rebase --apply: restore some tests
t3403: fix commit authorship
"git commit --fixup" now works with "--edit" again, after it was
broken in v2.32.
* jk/commit-edit-fixup-fix:
commit: restore --edit when combined with --fixup
The revision traversal API has been optimized by taking advantage
of the commit-graph, when available, to determine if a commit is
reachable from any of the existing refs.
* ps/connectivity-optim:
revision: avoid hitting packfiles when commits are in commit-graph
commit-graph: split out function to search commit position
revision: stop retrieving reference twice
connected: do not sort input revisions
revision: separate walk and unsorted flags
When executing git-update-ref(1) with the `--stdin` flag, then the user
can queue updates and, since e48cf33b61 (update-ref: implement
interactive transaction handling, 2020-04-02), interactively drive the
transaction's state via a set of transactional verbs. This interactivity
is somewhat broken though: while the caller can use these verbs to drive
the transaction's state, the status messages which confirm that a verb
has been processed are not flushed. The caller may thus be left
hanging, waiting for the acknowledgement.
Fix the bug by flushing stdout after writing the status update. Add a
test which catches this bug.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These new performance tests demonstrate effectively the same behavior as
p5310, but use a multi-pack bitmap instead of a single-pack one.
Notably, p5326 does not create a MIDX bitmap with multiple packs. This
is so we can measure a direct comparison between it and p5310. Any
difference between the two is measuring just the overhead of using MIDX
bitmaps.
Here are the results of p5310 and p5326 together, measured at the same
time and on the same machine (using a Xeon W-2255 CPU):
Test HEAD
------------------------------------------------------------------------
5310.2: repack to disk 96.78(93.39+11.33)
5310.3: simulated clone 9.98(9.79+0.19)
5310.4: simulated fetch 1.75(4.26+0.19)
5310.5: pack to file (bitmap) 28.20(27.87+8.70)
5310.6: rev-list (commits) 0.41(0.36+0.05)
5310.7: rev-list (objects) 1.61(1.54+0.07)
5310.8: rev-list count with blob:none 0.25(0.21+0.04)
5310.9: rev-list count with blob:limit=1k 2.65(2.54+0.10)
5310.10: rev-list count with tree:0 0.23(0.19+0.04)
5310.11: simulated partial clone 4.34(4.21+0.12)
5310.13: clone (partial bitmap) 11.05(12.21+0.48)
5310.14: pack to file (partial bitmap) 31.25(34.22+3.70)
5310.15: rev-list with tree filter (partial bitmap) 0.26(0.22+0.04)
versus the same tests (this time using a multi-pack index):
Test HEAD
------------------------------------------------------------------------
5326.2: setup multi-pack index 78.99(75.29+11.58)
5326.3: simulated clone 11.78(11.56+0.22)
5326.4: simulated fetch 1.70(4.49+0.13)
5326.5: pack to file (bitmap) 28.02(27.72+8.76)
5326.6: rev-list (commits) 0.42(0.36+0.06)
5326.7: rev-list (objects) 1.65(1.58+0.06)
5326.8: rev-list count with blob:none 0.26(0.21+0.05)
5326.9: rev-list count with blob:limit=1k 2.97(2.86+0.10)
5326.10: rev-list count with tree:0 0.25(0.20+0.04)
5326.11: simulated partial clone 5.65(5.49+0.16)
5326.13: clone (partial bitmap) 12.22(13.43+0.38)
5326.14: pack to file (partial bitmap) 30.05(31.57+7.25)
5326.15: rev-list with tree filter (partial bitmap) 0.24(0.20+0.04)
There is slight overhead in "simulated clone", "simulated partial
clone", and "clone (partial bitmap)". Unsurprisingly, that overhead is
due to using the MIDX's reverse index to map between bit positions and
MIDX positions.
This can be reproduced by running "git repack -adb" along with "git
multi-pack-index write --bitmap" in a large-ish repository. Then run:
$ perf record -o pack.perf git -c core.multiPackIndex=false \
pack-objects --all --stdout >/dev/null </dev/null
$ perf record -o midx.perf git -c core.multiPackIndex=true \
pack-objects --all --stdout >/dev/null </dev/null
and compare the two with "perf diff -c delta -o 1 pack.perf midx.perf".
The most notable results are below (the next largest positive delta is
+0.14%):
# Event 'cycles'
#
# Baseline Delta Shared Object Symbol
# ........ ....... .................. ..........................
#
+5.86% git [.] nth_midxed_offset
+5.24% git [.] nth_midxed_pack_int_id
3.45% +0.97% git [.] offset_to_pack_pos
3.30% +0.57% git [.] pack_pos_to_offset
+0.30% git [.] pack_pos_to_midx
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A new p5326 introduced by the next patch will want these same tests,
interjecting its own setup in between. Move them out so that both perf
tests can reuse them.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Introduce a new 'GIT_TEST_MULTI_PACK_INDEX_WRITE_BITMAP' environment
variable to also write a multi-pack bitmap when
'GIT_TEST_MULTI_PACK_INDEX' is set.
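For example, to exercise the whole test suite with MIDX bitmaps:

    GIT_TEST_MULTI_PACK_INDEX=1 \
    GIT_TEST_MULTI_PACK_INDEX_WRITE_BITMAP=1 \
    make test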
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A number of these tests are focused only on pack-based bitmaps and need
to be updated to disable 'GIT_TEST_MULTI_PACK_INDEX_WRITE_BITMAP' where
necessary.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This test is specifically about generating a midx still respecting a
pack-based bitmap file. Generating a MIDX bitmap would confuse the test.
Let's override the 'GIT_TEST_MULTI_PACK_INDEX_WRITE_BITMAP' variable to
make sure we don't do so.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Generating a MIDX bitmap confuses many of the tests in t5310, which
expect to control whether and how bitmaps are written. Since the
relevant MIDX-bitmap tests here are covered already in t5326, let's just
disable the flag for the whole t5310 script.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Generating a MIDX bitmap causes tests which repack in a partial clone to
fail because they are missing objects. Missing objects is an expected
component of tests in t0410, so disable this knob altogether. Graceful
degradation when writing a bitmap with missing objects is tested in
t5326.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This patch introduces a new test, t5326, which tests the basic
functionality of multi-pack bitmaps.
Some trivial behavior is tested, such as:
- Whether bitmaps can be generated with more than one pack.
- Whether clones can be served with all objects in the bitmap.
- Whether follow-up fetches can be served with some objects outside of
the server's bitmap.
These use lib-bitmap's tests (which in turn were pulled from t5310), and
we cover cases where the MIDX represents both a single pack and multiple
packs.
In addition, some non-trivial and MIDX-specific behavior is tested, too,
including:
- Whether multi-pack bitmaps behave correctly with respect to the
pack-reuse machinery when the base for some object is selected from
a different pack than the delta.
- Whether multi-pack bitmaps correctly respect the
pack.preferBitmapTips configuration.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Subsequent tests will want to check for the existence of a multi-pack
bitmap which matches the multi-pack-index stored in the pack directory.
The multi-pack bitmap includes the hex checksum of the MIDX it
corresponds to in its filename (for example,
'$packdir/multi-pack-index-<checksum>.bitmap'). As a result, some tests
want a way to learn what '<checksum>' is.
This helper addresses that need by printing the checksum of the
repository's multi-pack-index.
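In a test this is used roughly as follows (the helper name
'midx_checksum' and the variables are illustrative):

    test_path_is_file \
        $packdir/multi-pack-index-$(midx_checksum $objdir).bitmap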
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We'll soon be adding a test script that will cover many of the same
bitmap concepts as t5310, but for MIDX bitmaps. Let's pull out as many
of the applicable tests as we can so we don't have to rewrite them.
There should be no functional change to t5310; we still run the same
operations in the same order.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Opening multiple instances of the same MIDX can lead to problems like two
separate packed_git structures which represent the same pack being added
to the repository's object store.
The above scenario can happen because prepare_midx_pack() checks if
`m->packs[pack_int_id]` is NULL in order to determine if a pack has been
opened and installed in the repository before. But a caller can
construct two copies of the same MIDX by calling get_multi_pack_index()
and load_multi_pack_index() since the former manipulates the
object store directly but the latter is a lower-level routine which
allocates a new MIDX for each call.
So if prepare_midx_pack() is called on multiple MIDXs with the same
pack_int_id, then that pack will be installed twice in the object
store's packed_git pointer.
This can lead to problems in, for example, the pack-bitmap code, which does
something like the following (in pack-bitmap.c:open_pack_bitmap()):
struct bitmap_index *bitmap_git = ...;
for (p = get_all_packs(r); p; p = p->next) {
if (open_pack_bitmap_1(bitmap_git, p) == 0)
ret = 0;
}
which is a problem if two copies of the same pack exist in the
packed_git list because pack-bitmap.c:open_pack_bitmap_1() contains a
conditional like the following:
if (bitmap_git->pack || bitmap_git->midx) {
/* ignore extra bitmap file; we can only handle one */
warning("ignoring extra bitmap file: %s", packfile->pack_name);
close(fd);
return -1;
}
Avoid this scenario by not letting write_midx_internal() open a MIDX
that isn't also pointed at by the object store. So long as this is the
case, other routines should prefer to open MIDXs with
get_multi_pack_index() or reprepare_packed_git() instead of creating
instances on their own. Because get_multi_pack_index() returns
`r->object_store->multi_pack_index` if it is non-NULL, we'll only have
one instance of a MIDX open at one time, avoiding these problems.
To encourage this, drop the `struct multi_pack_index *` parameter from
`write_midx_internal()`, and rely instead on the `object_dir` to find
(or initialize) the correct MIDX instance.
Likewise, replace the call to `close_midx()` with
`close_object_store()`, since we're about to replace the MIDX with a new
one and should invalidate the object store's memory of any MIDX that
might have existed beforehand.
Note that this now forbids passing object directories that don't belong
to alternate repositories over `--object-dir`, since before we would
have happily opened a MIDX in any directory, but now restrict ourselves
to only those reachable by `r->objects->multi_pack_index` (and alternate
MIDXs that we can see by walking the `next` pointer).
As far as I can tell, supporting arbitrary directories with
`--object-dir` was a historical accident, since even the documentation
says `<alt>` when referring to the value passed to this option.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The soon-to-be-implemented multi-pack bitmap treats the object in the first
bit position specially by assuming that all objects in the pack it was
selected from are also represented from that pack in the MIDX. In other
words, the pack from which the first object was selected must also have
all of its other objects selected from that same pack in the MIDX in
case of any duplicates.
But this assumption relies on the fact that there is at least one object
in that pack to begin with; otherwise the object in the first bit
position isn't from a preferred pack, in which case we can no longer
assume that all objects in that pack were also selected from the same
pack.
Guard this assumption by checking the number of objects in the given
preferred pack, and failing if the given pack is empty.
To make sure we can safely perform this check, open any packs which are
contained in an existing MIDX via prepare_midx_pack(). The same is done
for new packs via the add_pack_to_midx() callback, but packs picked up
from a previous MIDX will not yet have been opened.
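The check itself might look something like this (a sketch against the MIDX
write path; the `ctx`/`info` names are illustrative):
    if (ctx.preferred_pack_idx > -1) {
        struct packed_git *preferred = ctx.info[ctx.preferred_pack_idx].p;

        /* an empty preferred pack cannot determine which pack "wins"
         * duplicate objects, so refuse it outright */
        if (!preferred->num_objects)
            die(_("cannot select preferred pack %s with no objects"),
                preferred->pack_name);
    }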
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If using --object-dir to point into an object directory which belongs to
a different repository than the one in the current working directory,
such as:
git init repo
git -C repo ... # add some objects
cd alternate
git multi-pack-index --object-dir ../repo/.git/objects write
the binary will segfault trying to access the object-dir via the repository
it found, which is not fully initialized. Worse, if we later call
clear_midx_files_ext(), we will use `the_repository` and remove files
out of the wrong object directory.
Fix this by using the given object_dir (or the object directory of
`the_repository` if `--object-dir` wasn't given) to clean up the *.rev
files, avoiding the crash.
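Sketched out, the cleanup ends up keyed on the object directory rather than on
`the_repository` (a sketch only; the helper's exact signature is simplified,
and `midx_hash` names the checksum of the MIDX being written):
    /* prefer the object directory we were given; only fall back to
     * the_repository's primary object directory when there is none */
    if (!object_dir)
        object_dir = the_repository->objects->odb->path;
    clear_midx_files_ext(object_dir, ".rev", midx_hash);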
Original-patch-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The multi-pack-index command supports working with arbitrary object
directories via the `--object-dir` flag. Though this has historically
worked in arbitrary repositories (including when the command itself was
run outside of a Git repository), this has been somewhat of an accident.
For example, running:
git multi-pack-index write --object-dir=/path/to/repo/objects
outside of a Git repository causes a BUG(). This is because the
top-level `cmd_multi_pack_index()` function stops parsing when it sees
"write", and then fills in the default object directory (the result of
calling `get_object_directory()`) before handing off to
`cmd_multi_pack_index_write()`. But there is no repository to
initialize, and so calling `get_object_directory()` results in a BUG()
(indicating that the current repository is not initialized).
Another case where this doesn't quite work as expected is when operating
in a SHA-256 repository. To see the failure, try this in your shell:
git init --object-format=sha256 repo
git -C repo commit --allow-empty -m base
git -C repo repack -d
git multi-pack-index --object-dir=$(pwd)/repo/.git/objects write
and observe that we cannot open the `.idx` file in "repo", because the
outermost process assumes that any repository that it works in also uses
the default value of `the_hash_algo` (at the time of writing, SHA-1).
There may be compelling reasons for trying to work around these bugs,
but working in arbitrary `--object-dir`'s is non-standard enough (and
likewise, these bugs prevalent enough) that I don't think any workflows
would be broken by abandoning this behavior.
Accordingly, restrict the `multi-pack-index` builtin to only work when
inside of a Git repository (i.e., its main utility becomes selecting
which alternate to operate in), which avoids both of the bugs above.
(Note that you can still trigger a bug when writing a MIDX in an
alternate which does not use the same object format as the repository
which it is an alternate of, but that is an unrelated bug to this one).
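One way to express that restriction (a sketch; the real change may of course
be structured differently) is to drop the "gentle" setup for the builtin in
git.c's command table, so a repository is always discovered before
cmd_multi_pack_index() runs:
    { "multi-pack-index", cmd_multi_pack_index, RUN_SETUP },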
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When 'upload-pack' runs within the context of a git namespace, treat any
'want-ref' lines the client sends as relative to that namespace.
Also check if the wanted ref is hidden via 'hideRefs'. If it is hidden,
respond with an error as if the ref didn't exist.
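In rough terms, the want-ref handling ends up doing something like the
following (a sketch built from existing refs.h/pkt-line.h helpers; `refname`
and `writer` come from the elided upload-pack context):
    struct strbuf namespaced = STRBUF_INIT;
    struct object_id oid;

    /* resolve the client-supplied name inside the active namespace,
     * and treat hidden refs exactly like missing ones */
    strbuf_addf(&namespaced, "%s%s", get_git_namespace(), refname);
    if (ref_is_hidden(refname, namespaced.buf) ||
        read_ref(namespaced.buf, &oid))
        packet_writer_error(&writer, "unknown ref %s", refname);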
Helped-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Kim Altintop <kim@eagain.st>
Reviewed-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Assembling a "raw" fetch command to be fed directly to "test-tool serve-v2"
is extracted into a test helper.
Suggested-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Kim Altintop <kim@eagain.st>
Reviewed-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"enum" keyword will be introduced in PHP 8.1.
https://wiki.php.net/rfc/enumerations
Signed-off-by: USAMI Kenta <tadsan@zonu.me>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 7f40759496 (fast-export: tighten anonymize_mem() interface to
handle only strings, 2020-06-23) changed the interface used in anonymizing
strings, but failed to update the size of annotated tag messages to match
the new anonymized string.
As a result, exporting tags having messages longer than 13 characters
would create output that couldn't be parsed by fast-import,
as the data length indicated was larger than the data output.
Reset the message size when anonymizing, and add a tag with a "long"
message to the test.
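The fix boils down to keeping the size in step with the anonymized string,
roughly (a sketch of the relevant spot in builtin/fast-export.c; declarations
elided):
    message = anonymize_str(&tags, anonymize_tag_msg,
                            message, message_size, NULL);
    /* the replacement string has its own length */
    message_size = strlen(message);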
Signed-off-by: Tal Kelrich <hasturkun@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If you have ever initialized a submodule, open_submodule will open it.
If you then delete the submodule's worktree directory (but don't
remove it from .gitmodules), git diff --submodule=diff would error out
as it attempted to chdir into the now-deleted working tree directory.
This only matters if the submodule's git dir is absorbed. If not, then
we no longer have anywhere to run the diff. But that case does not
trigger this error, because in that case, open_submodule fails, so we
don't resolve a left commit, so we exit early, which is the only thing
we could do.
If absorbed, then we can run the diff from the submodule's absorbed
git dir (.git/modules/sm2). In practice, that's a bit more
complicated, because `git diff` expects to be run from inside a
working directory, not a git dir. So it looks in the config for
core.worktree, and does chdir("../../../sm2"), which is the very dir
that we're trying to avoid visiting because it's been deleted. We
work around this by setting GIT_WORK_TREE (and GIT_DIR) to ".". It is a
little weird to set GIT_WORK_TREE to something that is not a working
tree just to avoid an unnecessary chdir, but it works.
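In terms of the child process we spawn for the inner diff, that looks roughly
like this (a sketch; `absorbed_git_dir` is an illustrative name for the
.git/modules/<name> path):
    struct child_process cp = CHILD_PROCESS_INIT;

    cp.dir = absorbed_git_dir;
    /* "." stands in for both the git dir and the (possibly deleted)
     * working tree, so no chdir into ../../../sm2 is attempted */
    strvec_push(&cp.env_array, "GIT_DIR=.");
    strvec_push(&cp.env_array, "GIT_WORK_TREE=.");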
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Bring the "commit-graph" command in line with the error output and
general pattern in cmd_multi_pack_index().
Let's test for that output, and also cover the same potential bug as
was fixed in the multi-pack-index command in
88617d11f9 (multi-pack-index: fix potential segfault without
sub-command, 2021-07-19).
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change the parse_options() invocation in the commit-graph code to
error on unknown leftover argv elements, in addition to the existing
and implicit erroring via parse_options() on unknown options.
We'd already error in cmd_commit_graph() on e.g.:
git commit-graph unknown verify
git commit-graph --unknown verify
But here we're calling parse_options() twice more for the "write" and
"verify" subcommands. We did not do the same checking for leftover
argv elements there. As a result we'd silently accept garbage in these
subcommands; let's not do that.
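The added check follows the usual parse_options() pattern for subcommands,
roughly (option arrays elided; the usage-array name follows
builtin/commit-graph.c's conventions):
    argc = parse_options(argc, argv, prefix, options,
                         builtin_commit_graph_write_usage, 0);
    if (argc)
        usage_with_options(builtin_commit_graph_write_usage, options);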
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The current implementation of GIT_TEST_FAIL_PREREQS is broken in
that checking for the lack of a prerequisite would not work. Avoid
the use of "if ! test_have_prereq X" in a test script.
* bc/t5607-avoid-broken-test-fail-prereqs:
t5607: avoid using prerequisites to select algorithm
"git apply" miscounted the bytes and failed to read to the end of
binary hunks.
* jk/apply-binary-hunk-parsing-fix:
apply: keep buffer/size pair in sync when parsing binary hunks
"git pull" had various corner cases that were not well thought out
around its --rebase backend, e.g. "git pull --ff-only" did not stop
but went ahead and rebased when the history on the other side is not a
descendant of our history. The series tries to fix them up.
* en/pull-conflicting-options:
pull: fix handling of multiple heads
pull: update docs & code for option compatibility with rebasing
pull: abort by default when fast-forwarding is not possible
pull: make --rebase and --no-rebase override pull.ff=only
pull: since --ff-only overrides, handle it first
pull: abort if --ff-only is given and fast-forwarding is impossible
t7601: add tests of interactions with multiple merge heads and config
t7601: test interaction of merge/rebase/fast-forward flags and options