Compare commits


21 Commits

Author SHA1 Message Date
af778cd9be Git 2.32.4
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-06 17:41:15 -04:00
9cbd2827c5 Sync with 2.31.5
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-06 17:40:44 -04:00
ecf9b4a443 Git 2.31.5
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-06 17:39:26 -04:00
122512967e Sync with 2.30.6
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-06 17:39:15 -04:00
abd4d67ab0 Git 2.30.6
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-06 17:38:16 -04:00
067aa8fb41 t2080: prepare for changing protocol.file.allow
Explicitly cloning over the "file://" protocol in t2080 in preparation
for merging a security release which will change the default value of
this configuration to be "user".

Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:27:18 -04:00
4a7dab5ce4 t1092: prepare for changing protocol.file.allow
Explicitly cloning over the "file://" protocol in t1092 in preparation
for merging a security release which will change the default value of
this configuration to be "user".

Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:27:14 -04:00
0ca6ead81e alias.c: reject too-long cmdline strings in split_cmdline()
This function improperly uses an int to represent the number of entries
in the resulting argument array. This allows a malicious actor to
intentionally overflow the return value, leading to arbitrary heap
writes.

Because the resulting argv array is typically passed to execv(), it may
be possible to leverage this attack to gain remote code execution on a
victim machine. This was almost certainly the case for certain
configurations of git-shell until the previous commit limited the size
of input it would accept. Other calls to split_cmdline() are typically
limited by the size of argv the OS is willing to hand us, so are
similarly protected.

So this is not strictly fixing a known vulnerability, but is a hardening
of the function that is worth doing to protect against possible unknown
vulnerabilities.

One approach to fixing this would be modifying the signature of
`split_cmdline()` to look something like:

    int split_cmdline(char *cmdline, const char ***argv, size_t *argc);

Where the return value of `split_cmdline()` is negative for errors, and
zero otherwise. If non-NULL, the `*argc` pointer is modified to contain
the size of the `**argv` array.

But this implies an absurdly large `argv` array, which is more than likely
larger than the system's argument limit. So even if split_cmdline()
allowed this, it would fail immediately afterwards when we called
execv(). So instead of converting all of `split_cmdline()`'s callers to
work with `size_t` types in this patch, pursue the minimal fix here and
prevent ever returning an array with more than INT_MAX entries in it.
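
As a hedged aside (not part of the patch): the reason the OS-argv callers
are safe is one of scale. A quick shell check makes the gap concrete:

    # The kernel's argument limit is tiny compared with INT_MAX, so a
    # split_cmdline() call fed from an OS-provided argv can never come
    # close to overflowing an int-sized entry count.
    getconf ARG_MAX        # commonly on the order of 2MiB on Linux
    echo $((2**31 - 1))    # 2147483647 entries would be needed to overflow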

Signed-off-by: Kevin Backhouse <kevinbackhouse@github.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
71ad7fe1bc shell: limit size of interactive commands
When git-shell is run in interactive mode (which must be enabled by
creating $HOME/git-shell-commands), it reads commands from stdin, one
per line, and executes them.

We read the commands with git_read_line_interactively(), which uses a
strbuf under the hood. That means we'll accept an input of arbitrary
size (limited only by how much heap we can allocate). That creates two
problems:

  - the rest of the code is not prepared to handle large inputs. The
    most serious issue here is that split_cmdline() uses "int" for most
    of its types, which can lead to integer overflow and out-of-bounds
    array reads and writes. But even with that fixed, we assume that we
    can feed the command name to snprintf() (via xstrfmt()), which is
    stuck for historical reasons using "int"; the oversized name causes it
    to fail (and even trigger a BUG() call).

  - since the point of git-shell is to take input from untrusted or
    semi-trusted clients, it's a mild denial-of-service. We'll allocate
    as many bytes as the client sends us (actually twice as many, since
    we immediately duplicate the buffer).

We can fix both by just limiting the amount of per-command input we're
willing to receive.

We should also fix split_cmdline(), of course, which is an accident
waiting to happen, but that can come on top. Most calls to
split_cmdline(), including the other one in git-shell, are OK because
they are reading from an OS-provided argv, which is limited in practice.
This patch should eliminate the immediate vulnerabilities.

I picked 4MB as an arbitrary limit. It's big enough that nobody should
ever run into it in practice (since the point is to run the commands via
exec, we're subject to OS limits which are typically much lower). But
it's small enough that allocating it isn't that big a deal.

The code is mostly just swapping out fgets() for the strbuf call, but we
have to add a few niceties like flushing and trimming line endings. We
could simplify things further by putting the buffer on the stack, but
4MB is probably a bit much there. Note that we'll _always_ allocate 4MB,
which for normal, non-malicious requests is more than we would before
this patch. But on the other hand, other git programs are happy to use
96MB for a delta cache. And since we'd never touch most of those pages,
on a lazy-allocating OS like Linux they won't even get allocated to
actual RAM.

The ideal would be a version of strbuf_getline() that accepted a maximum
value. But for a minimal vulnerability fix, let's keep things localized
and simple. We can always refactor further on top.

The included test fails in an obvious way with ASan or UBSan (which
notice the integer overflow and out-of-bounds reads). Without them, it
fails in a less obvious way: we may segfault, or we may try to xstrfmt()
a long string, leading to a BUG(). Either way, it fails reliably before
this patch, and passes with it. Note that we don't need an EXPENSIVE
prereq on it. It does take 10-15s to fail before this patch, but with
the new limit, we fail almost immediately (and the perl process
generating 2GB of data exits via SIGPIPE).
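
For illustration only, a rough sketch of the failure mode (not the test
script itself; the throwaway HOME is a placeholder):

    # Interactive mode is enabled by the presence of $HOME/git-shell-commands.
    tmp_home=$(mktemp -d) &&
    mkdir "$tmp_home/git-shell-commands" &&
    perl -e 'print "a" x (5 * 1024 * 1024), "\n"' |
        HOME=$tmp_home git shell
    # With the 4MB limit, git-shell refuses the over-long line almost
    # immediately; before the fix, input of this size could wander into the
    # xstrfmt()/BUG() path described above.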

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
32696a4cbe shell: add basic tests
We have no tests of even basic functionality of git-shell. Let's add a
couple of obvious ones. This will serve as a framework for adding tests
for new things we fix, as well as making sure we don't screw anything up
too badly while doing so.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
a1d4f67c12 transport: make protocol.file.allow be "user" by default
An earlier patch discussed and fixed a scenario where Git could be used
as a vector to exfiltrate sensitive data through a Docker container when
a potential victim clones a suspicious repository with local submodules
that contain symlinks.

That security hole has since been plugged, but a similar one still
exists.  Instead of convincing a would-be victim to clone an embedded
submodule via the "file" protocol, an attacker could convince an
individual to clone a repository that has a submodule pointing to a
valid path on the victim's filesystem.

For example, if an individual (with username "foo") has their home
directory ("/home/foo") stored as a Git repository, then an attacker
could exfiltrate data by convincing a victim to clone a malicious
repository containing a submodule pointing at "/home/foo/.git" with
`--recurse-submodules`. Doing so would expose any sensitive contents
stored in "/home/foo" that are tracked in Git.

For systems (such as Docker) that consider everything outside of the
immediate top-level working directory containing a Dockerfile as
inaccessible to the container (with the exception of volume mounts, and
so on), this is a violation of trust by exposing unexpected contents in
the working copy.

To mitigate the likelihood of this kind of attack, adjust the "file://"
protocol's default policy to be "user" to prevent commands that execute
without user input (including recursive submodule initialization) from
taking place by default.
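
Users who knowingly rely on local submodules can opt back in, either for a
single command or globally (the repository path below is a placeholder):

    # One-off override for a repository you trust:
    git -c protocol.file.allow=always clone --recurse-submodules /path/to/super.git
    # Or, less safely, restore the old default for all commands:
    git config --global protocol.file.allow always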

Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
f4a32a550f t/t9NNN: allow local submodules
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.

Tests that interact with submodules a handful of times use
`test_config_global`.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
0d3beb71da t/t7NNN: allow local submodules
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.

Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead. Test scripts that rely on
submodules throughout use a `git config --global` during a setup test
towards the beginning of the script.
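
Schematically, the three shapes look like this (illustrative commands only,
not lifted from any particular t7NNN script):

    # (1) limited interaction: annotate the individual Git command with -c
    git -c protocol.file.allow=always clone --recurse-submodules . super

    # (2) a handful of interactions: scope the override to the current test
    test_config_global protocol.file.allow always

    # (3) submodules used throughout: set it once in an early setup test
    git config --global protocol.file.allow always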

Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
0f21b8f468 t/t6NNN: allow local submodules
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.

Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
225d2d50cc t/t5NNN: allow local submodules
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.

Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead. Test scripts that rely on
submodules throughout use a `git config --global` during a setup test
towards the beginning of the script.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
ac7e57fa28 t/t4NNN: allow local submodules
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.

Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead. Test scripts that rely on
submodules throughout use a `git config --global` during a setup test
towards the beginning of the script.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
f8d510ed0b t/t3NNN: allow local submodules
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.

Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead. Test scripts that rely on
submodules throughout use a `git config --global` during a setup test
towards the beginning of the script.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
99f4abb8da t/2NNNN: allow local submodules
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.

Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead. Test scripts that rely on
submodules throughout use a `git config --global` during a setup test
towards the beginning of the script.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
8a96dbcb33 t/t1NNN: allow local submodules
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.

Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
7de0c306f7 t/lib-submodule-update.sh: allow local submodules
To prepare for changing the default value of `protocol.file.allow` to
"user", update the `prolog()` function in lib-submodule-update to allow
submodules to be cloned over the file protocol.

This is used by a handful of submodule-related test scripts, which
themselves will have to tweak the value of `protocol.file.allow` in
certain locations. Those will be done in subsequent commits.
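
Schematically, the library change amounts to a single global override so the
fixture clones keep working over "file://" (a simplified sketch, not the
verbatim hunk):

    # inside prolog() in t/lib-submodule-update.sh (simplified sketch):
    git config --global protocol.file.allow always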

Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
6f054f9fb3 builtin/clone.c: disallow --local clones with symlinks
When cloning a repository with `--local`, Git relies on making either a
hardlink or a copy of every file in the "objects" directory of the source
repository. This is done through the callpath `cmd_clone()` ->
`clone_local()` -> `copy_or_link_directory()`.

The way this optimization works is by enumerating every file and
directory recursively in the source repository's `$GIT_DIR/objects`
directory, and then either making a copy or hardlink of each file. The
only exception to this rule is when copying the "alternates" file, in
which case paths are rewritten to be absolute before writing a new
"alternates" file in the destination repo.

One quirk of this implementation is that it dereferences symlinks when
cloning. This behavior was most recently modified in 36596fd2df (clone:
better handle symlinked files at .git/objects/, 2019-07-10), which
attempted to support `--local` clones of repositories with symlinks in
their objects directory in a platform-independent way.

Unfortunately, this behavior of dereferencing symlinks (that is,
creating a hardlink or copy of the source's link target in the
destination repository) can be used as a component in attacking a
victim by inadvertently exposing the contents of files stored outside of
the repository.

Take, for example, a repository that stores a Dockerfile and is used to
build Docker images. When building an image, Docker copies the directory
contents into the VM, and then instructs the VM to execute the
Dockerfile at the root of the copied directory. This protects against
directory traversal attacks by copying symbolic links as-is without
dereferencing them.

That is, if a user has a symlink pointing at their private key material
(where the symlink is present in the same directory as the Dockerfile,
but the key itself is present outside of that directory), the key is
unreadable to a Docker image, since the link will appear broken from the
container's point of view.

This behavior enables an attack whereby a victim is convinced to clone a
repository containing an embedded submodule (with a URL like
"file:///proc/self/cwd/path/to/submodule") which has a symlink pointing
at a path containing sensitive information on the victim's machine. If a
user is tricked into doing this, the contents at the destination of
those symbolic links are exposed to the Docker image at runtime.

One approach to preventing this behavior is to recreate symlinks in the
destination repository. But this is problematic, since symlinks in the
objects directory are not well-supported. (One potential problem is that
when sharing, e.g. a "pack" directory via symlinks, different writers
performing garbage collection may consider different sets of objects to
be reachable, enabling a situation whereby garbage collecting one
repository may remove reachable objects in another repository).

Instead, prohibit the local clone optimization when any symlinks are
present in the `$GIT_DIR/objects` directory of the source repository.
Users may clone the repository again by prepending the "file://" scheme
to their clone URL, or by adding the `--no-local` option to their `git
clone` invocation.
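
Concretely, the two escape hatches look like this (paths are placeholders):

    # Either skip the hardlink/copy optimization explicitly...
    git clone --no-local /path/to/source.git dest
    # ...or force the transport codepath by spelling out a file:// URL:
    git clone file:///path/to/source.git dest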

The directory iterator used by `copy_or_link_directory()` must no longer
dereference symlinks (i.e., it *must* call `lstat()` instead of `stat()`
in order to discover whether or not there are symlinks present). This has
no bearing on the overall behavior, since we will immediately `die()` upon
encountering a symlink.

Note that t5604.33 suggests that we do support local clones with
symbolic links in the source repository's objects directory, but this
was likely unintentional, or at least did not take into consideration
the problem with sharing parts of the objects directory with symbolic
links at the time. Update this test to reflect which options are and
aren't supported.

Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-01 00:23:38 -04:00
546 changed files with 46705 additions and 50063 deletions

View File

@ -2,15 +2,8 @@ env:
CIRRUS_CLONE_DEPTH: 1
freebsd_12_task:
env:
GIT_PROVE_OPTS: "--timer --jobs 10"
GIT_TEST_OPTS: "--no-chain-lint --no-bin-wrappers"
MAKEFLAGS: "-j4"
DEFAULT_TEST_TARGET: prove
DEVELOPER: 1
freebsd_instance:
image_family: freebsd-12-2
memory: 2G
image: freebsd-12-1-release-amd64
install_script:
pkg install -y gettext gmake perl5
create_user_script:

View File

@ -12,9 +12,15 @@ jobs:
check-whitespace:
runs-on: ubuntu-latest
steps:
- name: Set commit count
shell: bash
run: echo "COMMIT_DEPTH=$((1+$COMMITS))" >>$GITHUB_ENV
env:
COMMITS: ${{ github.event.pull_request.commits }}
- uses: actions/checkout@v2
with:
fetch-depth: 0
fetch-depth: ${{ env.COMMIT_DEPTH }}
- name: git log --check
id: check_out
@ -41,9 +47,25 @@ jobs:
echo "${dash} ${etc}"
;;
esac
done <<< $(git log --check --pretty=format:"---% h% s" ${{github.event.pull_request.base.sha}}..)
done <<< $(git log --check --pretty=format:"---% h% s" -${{github.event.pull_request.commits}})
if test -n "${log}"
then
echo "::set-output name=checkout::"${log}""
exit 2
fi
- name: Add Check Output as Comment
uses: actions/github-script@v3
id: add-comment
env:
log: ${{ steps.check_out.outputs.checkout }}
with:
script: |
await github.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: `Whitespace errors found in workflow ${{ github.workflow }}:\n\n\`\`\`\n${process.env.log.replace(/\\n/g, "\n")}\n\`\`\``
})
if: ${{ failure() }}

View File

@ -81,21 +81,44 @@ jobs:
if: needs.ci-config.outputs.enabled == 'yes'
runs-on: windows-latest
steps:
- uses: actions/checkout@v2
- uses: git-for-windows/setup-git-for-windows-sdk@v1
- name: build
- uses: actions/checkout@v1
- name: download git-sdk-64-minimal
shell: bash
run: |
## Get artifact
urlbase=https://dev.azure.com/git-for-windows/git/_apis/build/builds
id=$(curl "$urlbase?definitions=22&statusFilter=completed&resultFilter=succeeded&\$top=1" |
jq -r ".value[] | .id")
download_url="$(curl "$urlbase/$id/artifacts" |
jq -r '.value[] | select(.name == "git-sdk-64-minimal").resource.downloadUrl')"
curl --connect-timeout 10 --retry 5 --retry-delay 0 --retry-max-time 240 \
-o artifacts.zip "$download_url"
## Unzip and remove the artifact
unzip artifacts.zip
rm artifacts.zip
- name: build
shell: powershell
env:
HOME: ${{runner.workspace}}
MSYSTEM: MINGW64
NO_PERL: 1
run: ci/make-test-artifacts.sh artifacts
- name: zip up tracked files
run: git archive -o artifacts/tracked.tar.gz HEAD
- name: upload tracked files and build artifacts
uses: actions/upload-artifact@v2
run: |
& .\git-sdk-64-minimal\usr\bin\bash.exe -lc @"
printf '%s\n' /git-sdk-64-minimal/ >>.git/info/exclude
ci/make-test-artifacts.sh artifacts
"@
- name: upload build artifacts
uses: actions/upload-artifact@v1
with:
name: windows-artifacts
path: artifacts
- name: upload git-sdk-64-minimal
uses: actions/upload-artifact@v1
with:
name: git-sdk-64-minimal
path: git-sdk-64-minimal
windows-test:
runs-on: windows-latest
needs: [windows-build]
@ -104,25 +127,37 @@ jobs:
matrix:
nr: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
steps:
- name: download tracked files and build artifacts
uses: actions/download-artifact@v2
- uses: actions/checkout@v1
- name: download build artifacts
uses: actions/download-artifact@v1
with:
name: windows-artifacts
path: ${{github.workspace}}
- name: extract tracked files and build artifacts
- name: extract build artifacts
shell: bash
run: tar xf artifacts.tar.gz && tar xf tracked.tar.gz
- uses: git-for-windows/setup-git-for-windows-sdk@v1
run: tar xf artifacts.tar.gz
- name: download git-sdk-64-minimal
uses: actions/download-artifact@v1
with:
name: git-sdk-64-minimal
path: ${{github.workspace}}/git-sdk-64-minimal/
- name: test
shell: bash
run: ci/run-test-slice.sh ${{matrix.nr}} 10
shell: powershell
run: |
& .\git-sdk-64-minimal\usr\bin\bash.exe -lc @"
# Let Git ignore the SDK
printf '%s\n' /git-sdk-64-minimal/ >>.git/info/exclude
ci/run-test-slice.sh ${{matrix.nr}} 10
"@
- name: ci/print-test-failures.sh
if: failure()
shell: bash
run: ci/print-test-failures.sh
shell: powershell
run: |
& .\git-sdk-64-minimal\usr\bin\bash.exe -lc ci/print-test-failures.sh
- name: Upload failed tests' directories
if: failure() && env.FAILED_TEST_ARTIFACTS != ''
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v1
with:
name: failed-tests-windows
path: ${{env.FAILED_TEST_ARTIFACTS}}
@ -130,12 +165,27 @@ jobs:
needs: ci-config
if: needs.ci-config.outputs.enabled == 'yes'
env:
MSYSTEM: MINGW64
NO_PERL: 1
GIT_CONFIG_PARAMETERS: "'user.name=CI' 'user.email=ci@git'"
runs-on: windows-latest
steps:
- uses: actions/checkout@v2
- uses: git-for-windows/setup-git-for-windows-sdk@v1
- uses: actions/checkout@v1
- name: download git-sdk-64-minimal
shell: bash
run: |
## Get artifact
urlbase=https://dev.azure.com/git-for-windows/git/_apis/build/builds
id=$(curl "$urlbase?definitions=22&statusFilter=completed&resultFilter=succeeded&\$top=1" |
jq -r ".value[] | .id")
download_url="$(curl "$urlbase/$id/artifacts" |
jq -r '.value[] | select(.name == "git-sdk-64-minimal").resource.downloadUrl')"
curl --connect-timeout 10 --retry 5 --retry-delay 0 --retry-max-time 240 \
-o artifacts.zip "$download_url"
## Unzip and remove the artifact
unzip artifacts.zip
rm artifacts.zip
- name: initialize vcpkg
uses: actions/checkout@v2
with:
@ -153,60 +203,75 @@ jobs:
- name: add msbuild to PATH
uses: microsoft/setup-msbuild@v1
- name: copy dlls to root
shell: cmd
run: compat\vcbuild\vcpkg_copy_dlls.bat release
shell: powershell
run: |
& compat\vcbuild\vcpkg_copy_dlls.bat release
if (!$?) { exit(1) }
- name: generate Visual Studio solution
shell: bash
run: |
cmake `pwd`/contrib/buildsystems/ -DCMAKE_PREFIX_PATH=`pwd`/compat/vcbuild/vcpkg/installed/x64-windows \
-DNO_GETTEXT=YesPlease -DPERL_TESTS=OFF -DPYTHON_TESTS=OFF -DCURL_NO_CURL_CMAKE=ON
-DMSGFMT_EXE=`pwd`/git-sdk-64-minimal/mingw64/bin/msgfmt.exe -DPERL_TESTS=OFF -DPYTHON_TESTS=OFF -DCURL_NO_CURL_CMAKE=ON
- name: MSBuild
run: msbuild git.sln -property:Configuration=Release -property:Platform=x64 -maxCpuCount:4 -property:PlatformToolset=v142
- name: bundle artifact tar
shell: bash
shell: powershell
env:
MSVC: 1
VCPKG_ROOT: ${{github.workspace}}\compat\vcbuild\vcpkg
run: |
mkdir -p artifacts &&
eval "$(make -n artifacts-tar INCLUDE_DLLS_IN_ARTIFACTS=YesPlease ARTIFACTS_DIRECTORY=artifacts NO_GETTEXT=YesPlease 2>&1 | grep ^tar)"
- name: zip up tracked files
run: git archive -o artifacts/tracked.tar.gz HEAD
- name: upload tracked files and build artifacts
uses: actions/upload-artifact@v2
& git-sdk-64-minimal\usr\bin\bash.exe -lc @"
mkdir -p artifacts &&
eval \"`$(make -n artifacts-tar INCLUDE_DLLS_IN_ARTIFACTS=YesPlease ARTIFACTS_DIRECTORY=artifacts 2>&1 | grep ^tar)\"
"@
- name: upload build artifacts
uses: actions/upload-artifact@v1
with:
name: vs-artifacts
path: artifacts
vs-test:
runs-on: windows-latest
needs: vs-build
needs: [vs-build, windows-build]
strategy:
fail-fast: false
matrix:
nr: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
steps:
- uses: git-for-windows/setup-git-for-windows-sdk@v1
- name: download tracked files and build artifacts
uses: actions/download-artifact@v2
- uses: actions/checkout@v1
- name: download git-sdk-64-minimal
uses: actions/download-artifact@v1
with:
name: git-sdk-64-minimal
path: ${{github.workspace}}/git-sdk-64-minimal/
- name: download build artifacts
uses: actions/download-artifact@v1
with:
name: vs-artifacts
path: ${{github.workspace}}
- name: extract tracked files and build artifacts
- name: extract build artifacts
shell: bash
run: tar xf artifacts.tar.gz && tar xf tracked.tar.gz
run: tar xf artifacts.tar.gz
- name: test
shell: bash
shell: powershell
env:
MSYSTEM: MINGW64
NO_SVN_TESTS: 1
GIT_TEST_SKIP_REBASE_P: 1
run: ci/run-test-slice.sh ${{matrix.nr}} 10
run: |
& .\git-sdk-64-minimal\usr\bin\bash.exe -lc @"
# Let Git ignore the SDK and the test-cache
printf '%s\n' /git-sdk-64-minimal/ /test-cache/ >>.git/info/exclude
ci/run-test-slice.sh ${{matrix.nr}} 10
"@
- name: ci/print-test-failures.sh
if: failure()
shell: bash
run: ci/print-test-failures.sh
shell: powershell
run: |
& .\git-sdk-64-minimal\usr\bin\bash.exe -lc ci/print-test-failures.sh
- name: Upload failed tests' directories
if: failure() && env.FAILED_TEST_ARTIFACTS != ''
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v1
with:
name: failed-tests-windows
path: ${{env.FAILED_TEST_ARTIFACTS}}
@ -237,14 +302,14 @@ jobs:
jobname: ${{matrix.vector.jobname}}
runs-on: ${{matrix.vector.pool}}
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v1
- run: ci/install-dependencies.sh
- run: ci/run-build-and-tests.sh
- run: ci/print-test-failures.sh
if: failure()
- name: Upload failed tests' directories
if: failure() && env.FAILED_TEST_ARTIFACTS != ''
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v1
with:
name: failed-tests-${{matrix.vector.jobname}}
path: ${{env.FAILED_TEST_ARTIFACTS}}
@ -259,8 +324,6 @@ jobs:
image: alpine
- jobname: Linux32
image: daald/ubuntu32:xenial
- jobname: pedantic
image: fedora
env:
jobname: ${{matrix.vector.jobname}}
runs-on: ubuntu-latest
@ -284,29 +347,9 @@ jobs:
jobname: StaticAnalysis
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v1
- run: ci/install-dependencies.sh
- run: ci/run-static-analysis.sh
sparse:
needs: ci-config
if: needs.ci-config.outputs.enabled == 'yes'
env:
jobname: sparse
runs-on: ubuntu-20.04
steps:
- name: Download a current `sparse` package
# Ubuntu's `sparse` version is too old for us
uses: git-for-windows/get-azure-pipelines-artifact@v0
with:
repository: git/git
definitionId: 10
artifact: sparse-20.04
- name: Install the current `sparse` package
run: sudo dpkg -i sparse-20.04/sparse_*.deb
- uses: actions/checkout@v2
- name: Install other dependencies
run: ci/install-dependencies.sh
- run: make sparse
documentation:
needs: ci-config
if: needs.ci-config.outputs.enabled == 'yes'
@ -314,6 +357,6 @@ jobs:
jobname: Documentation
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v1
- run: ci/install-dependencies.sh
- run: ci/test-documentation.sh

View File

@ -551,51 +551,6 @@ Writing Documentation:
documentation, please see the documentation-related advice in the
Documentation/SubmittingPatches file).
In order to ensure the documentation is inclusive, avoid assuming
that an unspecified example person is male or female, and think
twice before using "he", "him", "she", or "her". Here are some
tips to avoid use of gendered pronouns:
- Prefer succinctness and matter-of-factly describing functionality
in the abstract. E.g.
--short:: Emit output in the short-format.
and avoid something like these overly verbose alternatives:
--short:: Use this to emit output in the short-format.
--short:: You can use this to get output in the short-format.
--short:: A user who prefers shorter output could....
--short:: Should a person and/or program want shorter output, he
she/they/it can...
This practice often eliminates the need to involve human actors in
your description, but it is a good practice regardless of the
avoidance of gendered pronouns.
- When it becomes awkward to stick to this style, prefer "you" when
addressing the the hypothetical user, and possibly "we" when
discussing how the program might react to the user. E.g.
You can use this option instead of --xyz, but we might remove
support for it in future versions.
while keeping in mind that you can probably be less verbose, e.g.
Use this instead of --xyz. This option might be removed in future
versions.
- If you still need to refer to an example person that is
third-person singular, you may resort to "singular they" to avoid
"he/she/him/her", e.g.
A contributor asks their upstream to pull from them.
Note that this sounds ungrammatical and unnatural to those who
learned that "they" is only used for third-person plural, e.g.
those who learn English as a second language in some parts of the
world.
Every user-visible change should be reflected in the documentation.
The same general rule as for code applies -- imitate the existing
conventions.

View File

@ -139,7 +139,6 @@ ASCIIDOC_CONF = -f asciidoc.conf
ASCIIDOC_COMMON = $(ASCIIDOC) $(ASCIIDOC_EXTRA) $(ASCIIDOC_CONF) \
-amanversion=$(GIT_VERSION) \
-amanmanual='Git Manual' -amansource='Git'
ASCIIDOC_DEPS = asciidoc.conf GIT-ASCIIDOCFLAGS
TXT_TO_HTML = $(ASCIIDOC_COMMON) -b $(ASCIIDOC_HTML)
TXT_TO_XML = $(ASCIIDOC_COMMON) -b $(ASCIIDOC_DOCBOOK)
MANPAGE_XSL = manpage-normal.xsl
@ -194,7 +193,6 @@ ASCIIDOC_DOCBOOK = docbook5
ASCIIDOC_EXTRA += -acompat-mode -atabsize=8
ASCIIDOC_EXTRA += -I. -rasciidoctor-extensions
ASCIIDOC_EXTRA += -alitdd='&\#x2d;&\#x2d;'
ASCIIDOC_DEPS = asciidoctor-extensions.rb GIT-ASCIIDOCFLAGS
DBLATEX_COMMON =
XMLTO_EXTRA += --skip-validation
XMLTO_EXTRA += -x manpage.xsl
@ -296,7 +294,9 @@ docdep_prereqs = \
cmd-list.made $(cmds_txt)
doc.dep : $(docdep_prereqs) $(DOC_DEP_TXT) build-docdep.perl
$(QUIET_GEN)$(PERL_PATH) ./build-docdep.perl >$@ $(QUIET_STDERR)
$(QUIET_GEN)$(RM) $@+ $@ && \
$(PERL_PATH) ./build-docdep.perl >$@+ $(QUIET_STDERR) && \
mv $@+ $@
ifneq ($(MAKECMDGOALS),clean)
-include doc.dep
@ -316,7 +316,8 @@ cmds_txt = cmds-ancillaryinterrogators.txt \
$(cmds_txt): cmd-list.made
cmd-list.made: cmd-list.perl ../command-list.txt $(MAN1_TXT)
$(QUIET_GEN)$(PERL_PATH) ./cmd-list.perl ../command-list.txt $(cmds_txt) $(QUIET_STDERR) && \
$(QUIET_GEN)$(RM) $@ && \
$(PERL_PATH) ./cmd-list.perl ../command-list.txt $(cmds_txt) $(QUIET_STDERR) && \
date >$@
mergetools_txt = mergetools-diff.txt mergetools-merge.txt
@ -324,7 +325,7 @@ mergetools_txt = mergetools-diff.txt mergetools-merge.txt
$(mergetools_txt): mergetools-list.made
mergetools-list.made: ../git-mergetool--lib.sh $(wildcard ../mergetools/*)
$(QUIET_GEN) \
$(QUIET_GEN)$(RM) $@ && \
$(SHELL_PATH) -c 'MERGE_TOOLS_DIR=../mergetools && \
. ../git-mergetool--lib.sh && \
show_tool_names can_diff "* " || :' >mergetools-diff.txt && \
@ -353,23 +354,32 @@ clean:
$(RM) manpage-base-url.xsl
$(RM) GIT-ASCIIDOCFLAGS
$(MAN_HTML): %.html : %.txt $(ASCIIDOC_DEPS)
$(QUIET_ASCIIDOC)$(TXT_TO_HTML) -d manpage -o $@ $<
$(MAN_HTML): %.html : %.txt asciidoc.conf asciidoctor-extensions.rb GIT-ASCIIDOCFLAGS
$(QUIET_ASCIIDOC)$(RM) $@+ $@ && \
$(TXT_TO_HTML) -d manpage -o $@+ $< && \
mv $@+ $@
$(OBSOLETE_HTML): %.html : %.txto $(ASCIIDOC_DEPS)
$(QUIET_ASCIIDOC)$(TXT_TO_HTML) -o $@ $<
$(OBSOLETE_HTML): %.html : %.txto asciidoc.conf asciidoctor-extensions.rb GIT-ASCIIDOCFLAGS
$(QUIET_ASCIIDOC)$(RM) $@+ $@ && \
$(TXT_TO_HTML) -o $@+ $< && \
mv $@+ $@
manpage-base-url.xsl: manpage-base-url.xsl.in
$(QUIET_GEN)sed "s|@@MAN_BASE_URL@@|$(MAN_BASE_URL)|" $< > $@
%.1 %.5 %.7 : %.xml manpage-base-url.xsl $(wildcard manpage*.xsl)
$(QUIET_XMLTO)$(XMLTO) -m $(MANPAGE_XSL) $(XMLTO_EXTRA) man $<
$(QUIET_XMLTO)$(RM) $@ && \
$(XMLTO) -m $(MANPAGE_XSL) $(XMLTO_EXTRA) man $<
%.xml : %.txt $(ASCIIDOC_DEPS)
$(QUIET_ASCIIDOC)$(TXT_TO_XML) -d manpage -o $@ $<
%.xml : %.txt asciidoc.conf asciidoctor-extensions.rb GIT-ASCIIDOCFLAGS
$(QUIET_ASCIIDOC)$(RM) $@+ $@ && \
$(TXT_TO_XML) -d manpage -o $@+ $< && \
mv $@+ $@
user-manual.xml: user-manual.txt user-manual.conf asciidoctor-extensions.rb GIT-ASCIIDOCFLAGS
$(QUIET_ASCIIDOC)$(TXT_TO_XML) -d book -o $@ $<
$(QUIET_ASCIIDOC)$(RM) $@+ $@ && \
$(TXT_TO_XML) -d book -o $@+ $< && \
mv $@+ $@
technical/api-index.txt: technical/api-index-skel.txt \
technical/api-index.sh $(patsubst %,%.txt,$(API_DOCS))
@ -390,35 +400,46 @@ XSLTOPTS += --stringparam html.stylesheet docbook-xsl.css
XSLTOPTS += --param generate.consistent.ids 1
user-manual.html: user-manual.xml $(XSLT)
$(QUIET_XSLTPROC)xsltproc $(XSLTOPTS) -o $@ $(XSLT) $<
$(QUIET_XSLTPROC)$(RM) $@+ $@ && \
xsltproc $(XSLTOPTS) -o $@+ $(XSLT) $< && \
mv $@+ $@
git.info: user-manual.texi
$(QUIET_MAKEINFO)$(MAKEINFO) --no-split -o $@ user-manual.texi
user-manual.texi: user-manual.xml
$(QUIET_DB2TEXI)$(DOCBOOK2X_TEXI) user-manual.xml --encoding=UTF-8 --to-stdout >$@+ && \
$(PERL_PATH) fix-texi.perl <$@+ >$@ && \
$(RM) $@+
$(QUIET_DB2TEXI)$(RM) $@+ $@ && \
$(DOCBOOK2X_TEXI) user-manual.xml --encoding=UTF-8 --to-stdout >$@++ && \
$(PERL_PATH) fix-texi.perl <$@++ >$@+ && \
rm $@++ && \
mv $@+ $@
user-manual.pdf: user-manual.xml
$(QUIET_DBLATEX)$(DBLATEX) -o $@ $(DBLATEX_COMMON) $<
$(QUIET_DBLATEX)$(RM) $@+ $@ && \
$(DBLATEX) -o $@+ $(DBLATEX_COMMON) $< && \
mv $@+ $@
gitman.texi: $(MAN_XML) cat-texi.perl texi.xsl
$(QUIET_DB2TEXI) \
$(QUIET_DB2TEXI)$(RM) $@+ $@ && \
($(foreach xml,$(sort $(MAN_XML)),xsltproc -o $(xml)+ texi.xsl $(xml) && \
$(DOCBOOK2X_TEXI) --encoding=UTF-8 --to-stdout $(xml)+ && \
$(RM) $(xml)+ &&) true) > $@+ && \
$(PERL_PATH) cat-texi.perl $@ <$@+ >$@ && \
$(RM) $@+
rm $(xml)+ &&) true) > $@++ && \
$(PERL_PATH) cat-texi.perl $@ <$@++ >$@+ && \
rm $@++ && \
mv $@+ $@
gitman.info: gitman.texi
$(QUIET_MAKEINFO)$(MAKEINFO) --no-split --no-validate $*.texi
$(patsubst %.txt,%.texi,$(MAN_TXT)): %.texi : %.xml
$(QUIET_DB2TEXI)$(DOCBOOK2X_TEXI) --to-stdout $*.xml >$@
$(QUIET_DB2TEXI)$(RM) $@+ $@ && \
$(DOCBOOK2X_TEXI) --to-stdout $*.xml >$@+ && \
mv $@+ $@
howto-index.txt: howto-index.sh $(HOWTO_TXT)
$(QUIET_GEN)'$(SHELL_PATH_SQ)' ./howto-index.sh $(sort $(HOWTO_TXT)) >$@
$(QUIET_GEN)$(RM) $@+ $@ && \
'$(SHELL_PATH_SQ)' ./howto-index.sh $(sort $(HOWTO_TXT)) >$@+ && \
mv $@+ $@
$(patsubst %,%.html,$(ARTICLES)) : %.html : %.txt
$(QUIET_ASCIIDOC)$(TXT_TO_HTML) $*.txt
@ -427,9 +448,10 @@ WEBDOC_DEST = /pub/software/scm/git/docs
howto/%.html: ASCIIDOC_EXTRA += -a git-relative-html-prefix=../
$(patsubst %.txt,%.html,$(HOWTO_TXT)): %.html : %.txt GIT-ASCIIDOCFLAGS
$(QUIET_ASCIIDOC) \
$(QUIET_ASCIIDOC)$(RM) $@+ $@ && \
sed -e '1,/^$$/d' $< | \
$(TXT_TO_HTML) - >$@
$(TXT_TO_HTML) - >$@+ && \
mv $@+ $@
install-webdoc : html
'$(SHELL_PATH_SQ)' ./install-webdoc.sh $(WEBDOC_DEST)
@ -470,7 +492,4 @@ doc-l10n install-l10n::
$(MAKE) -C po $@
endif
# Delete the target file on error
.DELETE_ON_ERROR:
.PHONY: FORCE

View File

@ -47,7 +47,7 @@ Veteran contributors who are especially interested in helping mentor newcomers
are present on the list. In order to avoid search indexers, group membership is
required to view messages; anyone can join and no approval is required.
==== https://web.libera.chat/#git-devel[#git-devel] on Libera Chat
==== https://webchat.freenode.net/#git-devel[#git-devel] on Freenode
This IRC channel is for conversations between Git contributors. If someone is
currently online and knows the answer to your question, you can receive help
@ -827,7 +827,7 @@ either examining recent pull requests where someone has been granted `/allow`
(https://github.com/gitgitgadget/git/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+%22%2Fallow%22[Search:
is:pr is:open "/allow"]), in which case both the author and the person who
granted the `/allow` can now `/allow` you, or by inquiring on the
https://web.libera.chat/#git-devel[#git-devel] IRC channel on Libera Chat
https://webchat.freenode.net/#git-devel[#git-devel] IRC channel on Freenode
linking your pull request and asking for someone to `/allow` you.
If the CI fails, you can update your changes with `git rebase -i` and push your

View File

@ -691,7 +691,7 @@ help understand. In our case, that means we omit trees and blobs not directly
referenced by `HEAD` or `HEAD`'s history, because we begin the walk with only
`HEAD` in the `pending` list.)
First, we'll need to `#include "list-objects-filter-options.h"` and set up the
First, we'll need to `#include "list-objects-filter-options.h`" and set up the
`struct list_objects_filter_options` at the top of the function.
----
@ -779,7 +779,7 @@ Count all the objects within and modify the print statement:
while ((oid = oidset_iter_next(&oit)))
omitted_count++;
printf("commits %d\nblobs %d\ntags %d\ntrees %d\nomitted %d\n",
printf("commits %d\nblobs %d\ntags %d\ntrees%d\nomitted %d\n",
commit_count, blob_count, tag_count, tree_count, omitted_count);
----

View File

@ -50,7 +50,7 @@ Fixes since v1.6.0.2
if the working tree is currently dirty.
* "git for-each-ref --format=%(subject)" fixed for commits with no
newline in the message body.
no newline in the message body.
* "git remote" fixed to protect printf from user input.

View File

@ -365,7 +365,7 @@ details).
(merge 2fbd4f9 mh/maint-lockfile-overflow later to maint).
* Invocations of "git checkout" used internally by "git rebase" were
counted as "checkout", and affected later "git checkout -", which took
counted as "checkout", and affected later "git checkout -" to the
the user to an unexpected place.
(merge 3bed291 rr/rebase-checkout-reflog later to maint).

View File

@ -184,8 +184,8 @@ Performance, Internal Implementation, Development Support etc.
the ref backend in use, as its format is much richer than the
normal refs, and written directly by "git fetch" as a plain file..
* An unused binary has been discarded, and a bunch of commands
have been turned into built-in.
* An unused binary has been discarded, and and a bunch of commands
have been turned into into built-in.
* A handful of places in in-tree code still relied on being able to
execute the git subcommands, especially built-ins, in "git-foo"

View File

@ -0,0 +1,60 @@
Git v2.30.6 Release Notes
=========================
This release addresses the security issues CVE-2022-39253 and
CVE-2022-39260.
Fixes since v2.30.5
-------------------
* CVE-2022-39253:
When relying on the `--local` clone optimization, Git dereferences
symbolic links in the source repository before creating hardlinks
(or copies) of the dereferenced link in the destination repository.
This can lead to surprising behavior where arbitrary files are
present in a repository's `$GIT_DIR` when cloning from a malicious
repository.
Git will no longer dereference symbolic links via the `--local`
clone mechanism, and will instead refuse to clone repositories that
have symbolic links present in the `$GIT_DIR/objects` directory.
Additionally, the value of `protocol.file.allow` is changed to be
"user" by default.
* CVE-2022-39260:
An overly-long command string given to `git shell` can result in
overflow in `split_cmdline()`, leading to arbitrary heap writes and
remote code execution when `git shell` is exposed and the directory
`$HOME/git-shell-commands` exists.
`git shell` is taught to refuse interactive commands that are
longer than 4MiB in size. `split_cmdline()` is hardened to reject
inputs larger than 2GiB.
Credit for finding CVE-2022-39253 goes to Cory Snider of Mirantis. The
fix was authored by Taylor Blau, with help from Johannes Schindelin.
Credit for finding CVE-2022-39260 goes to Kevin Backhouse of GitHub.
The fix was authored by Kevin Backhouse, Jeff King, and Taylor Blau.
Jeff King (2):
shell: add basic tests
shell: limit size of interactive commands
Kevin Backhouse (1):
alias.c: reject too-long cmdline strings in split_cmdline()
Taylor Blau (11):
builtin/clone.c: disallow `--local` clones with symlinks
t/lib-submodule-update.sh: allow local submodules
t/t1NNN: allow local submodules
t/2NNNN: allow local submodules
t/t3NNN: allow local submodules
t/t4NNN: allow local submodules
t/t5NNN: allow local submodules
t/t6NNN: allow local submodules
t/t7NNN: allow local submodules
t/t9NNN: allow local submodules
transport: make `protocol.file.allow` be "user" by default

View File

@ -0,0 +1,5 @@
Git v2.31.5 Release Notes
=========================
This release merges the security fix that appears in v2.30.6; see
the release notes for that version for details.

View File

@ -0,0 +1,5 @@
Git v2.32.4 Release Notes
=========================
This release merges the security fix that appears in v2.30.6; see
the release notes for that version for details.

View File

@ -1,279 +0,0 @@
Git 2.33 Release Notes
======================
Updates since Git 2.32
----------------------
UI, Workflows & Features
* "git send-email" learned the "--sendmail-cmd" command line option
and the "sendemail.sendmailCmd" configuration variable, which is a
more sensible approach than the current way of repurposing the
"smtp-server" that is meant to name the server to instead name the
command to talk to the server.
* The userdiff pattern for C# learned the token "record".
* "git rev-list" learns to omit the "commit <object-name>" header
lines from the output with the `--no-commit-header` option.
* "git worktree add --lock" learned to record why the worktree is
locked with a custom message.
Performance, Internal Implementation, Development Support etc.
* The code to handle the "--format" option in "for-each-ref" and
friends made too many string comparisons on %(atom)s used in the
format string, which has been corrected by converting them into
enum when the format string is parsed.
* Use the hashfile API in the codepath that writes the index file to
reduce code duplication.
* Repeated rename detections in a sequence of mergy operations have
been optimized out for the 'ort' merge strategy.
* Preliminary clean-up of tests before the main reftable changes
hits the codebase.
* The backend for "diff -G/-S" has been updated to use pcre2 engine
when available.
* Use ".DELETE_ON_ERROR" pseudo target to simplify our Makefile.
* Code cleanup around struct_type_init() functions.
* "git send-email" optimization.
* GitHub Actions / CI update.
(merge 0dc787a9f2 js/ci-windows-update later to maint).
* Object accesses in repositories with many alternate object store
have been optimized.
* "git log" has been optimized not to waste cycles to load ref
decoration data that may not be needed.
* Many "printf"-like helper functions we have have been annotated
with __attribute__() to catch placeholder/parameter mismatches.
* Tests that cover protocol bits have been updated and helpers
used there have been consolidated.
* The CI gained a new job to run "make sparse" check.
* "git status" codepath learned to work with sparsely populated index
without hydrating it fully.
* A guideline for gender neutral documentation has been added.
* Documentation on "git diff -l<n>" and diff.renameLimit have been
updated, and the defaults for these limits have been raised.
* The completion support used to offer alternate spelling of options
that exist only for compatibility, which has been corrected.
* "TEST_OUTPUT_DIRECTORY=there make test" failed to work, which has
been corrected.
* "git bundle" gained more test coverage.
* "git read-tree" had a codepath where blobs are fetched one-by-one
from the promisor remote, which has been corrected to fetch in bulk.
* Rewrite of "git submodule" in C continues.
* "git checkout" and "git commit" learn to work without unnecessarily
expanding sparse indexes.
Fixes since v2.32
-----------------
* We historically rejected a very short string as an author name
while accepting a patch e-mail, which has been loosened.
(merge 72ee47ceeb ef/mailinfo-short-name later to maint).
* The parallel checkout codepath did not initialize object ID field
used to talk to the worker processes in a futureproof way.
* Rewrite code that triggers undefined behaviour warning.
(merge aafa5df0df jn/size-t-casted-to-off-t-fix later to maint).
* The description of "fast-forward" in the glossary has been updated.
(merge e22f2daed0 ry/clarify-fast-forward-in-glossary later to maint).
* Recent "git clone" left a temporary directory behind when the
transport layer returned an failure.
(merge 6aacb7d861 jk/clone-clean-upon-transport-error later to maint).
* "git fetch" over protocol v2 left its side of the socket open after
it finished speaking, which unnecessarily wasted the resource on
the other side.
(merge ae1a7eefff jk/fetch-pack-v2-half-close-early later to maint).
* The command line completion (in contrib/) learned that "git diff"
takes the "--anchored" option.
(merge d1e7c2cac9 tb/complete-diff-anchored later to maint).
* "git-svn" tests assumed that "locale -a", which is used to pick an
available UTF-8 locale, is available everywhere. A knob has been
introduced to allow testers to specify a suitable locale to use.
(merge 482c962de4 dd/svn-test-wo-locale-a later to maint).
* Update "git subtree" to work better on Windows.
(merge 77f37de39f js/subtree-on-windows-fix later to maint).
* Remove multimail from contrib/
(merge f74d11471f js/no-more-multimail later to maint).
* Make the codebase MSAN clean.
(merge 4dbc55e87d ah/uninitialized-reads-fix later to maint).
* Work around inefficient glob substitution in older versions of bash
by rewriting parts of a test.
(merge eb87c6f559 jx/t6020-with-older-bash later to maint).
* Avoid duplicated work while building reachability bitmaps.
(merge aa9ad6fee5 jk/bitmap-tree-optim later to maint).
* We broke "GIT_SKIP_TESTS=t?000" to skip certain tests in recent
update, which got fixed.
* The side-band demultiplexer that is used to display progress output
from the remote end did not clear the line properly when the end of
line hits at a packet boundary, which has been corrected.
* Some test scripts assumed that readlink(1) was universally
installed and available, which is not the case.
(merge 7c0afdf23c jk/test-without-readlink-1 later to maint).
* Recent update to completion script (in contrib/) broke those who
use the __git_complete helper to define completion to their custom
command.
(merge cea232194d fw/complete-cmd-idx-fix later to maint).
* Output from some of our tests were affected by the width of the
terminal that they were run in, which has been corrected by
exporting a fixed value in the COLUMNS environment.
(merge c49a177bec ab/fix-columns-to-80-during-tests later to maint).
* On Windows, mergetool has been taught to find kdiff3.exe just like
it finds winmerge.exe.
(merge 47eb4c6890 ms/mergetools-kdiff3-on-windows later to maint).
* When we cannot figure out how wide the terminal is, we use a
fallback value of 80 ourselves (which cannot be avoided), but when
we run the pager, we export it in COLUMNS, which forces the pager
to use the hardcoded value, even when the pager is perfectly
capable to figure it out itself. Stop exporting COLUMNS when we
fall back on the hardcoded default value for our own use.
(merge 9b6e2c8b98 js/stop-exporting-bogus-columns later to maint).
* "git cat-file --batch-all-objects"" misbehaved when "--batch" is in
use and did not ask for certain object traits.
(merge ee02ac6164 zh/cat-file-batch-fix later to maint).
* Some code and doc clarification around "git push".
* The "union" conflict resultion variant misbehaved when used with
binary merge driver.
(merge 382b601acd jk/union-merge-binary later to maint).
* Prevent "git p4" from failing to submit changes to binary file.
(merge 54662d5958 dc/p4-binary-submit-fix later to maint).
* "git grep --and -e foo" ought to have been diagnosed as an error
but instead segfaulted, which has been corrected.
(merge fe7fe62d8d rs/grep-parser-fix later to maint).
* The merge code had funny interactions between content based rename
detection and directory rename detection.
(merge 3585d0ea23 en/merge-dir-rename-corner-case-fix later to maint).
* When rebuilding the multi-pack index file reusing an existing one,
we used to blindly trust the existing file and ended up carrying
corrupted data into the updated file, which has been corrected.
(merge f89ecf7988 tb/midx-use-checksum later to maint).
* Update the location of system-side configuration file on Windows.
(merge e355307692 js/gfw-system-config-loc-fix later to maint).
* Code recently added to support common ancestry negotiation during
"git push" did not sanity check its arguments carefully enough.
(merge eff40457a4 ab/fetch-negotiate-segv-fix later to maint).
* Update the documentation not to assume users are of certain gender
and adds to guidelines to do so.
(merge 46a237f42f ds/gender-neutral-doc later to maint).
* "git commit --allow-empty-message" won't abort the operation upon
an empty message, but the hint shown in the editor said otherwise.
(merge 6f70f00b4f hj/commit-allow-empty-message later to maint).
* The code that gives an error message in "git multi-pack-index" when
no subcommand is given tried to print a NULL pointer as a strong,
which has been corrected.
(merge 88617d11f9 tb/reverse-midx later to maint).
* CI update.
(merge a066a90db6 js/ci-check-whitespace-updates later to maint).
* Documentation fix for "git pull --rebase=no".
(merge d3236becec fc/pull-no-rebase-merges-theirs-into-ours later to maint).
* A race between repacking and using pack bitmaps has been corrected.
(merge dc1daacdcc jk/check-pack-valid-before-opening-bitmap later to maint).
* The local changes stashed by "git merge --autostash" were lost when
the merge failed in certain ways, which has been corrected.
* Windows rmdir() equivalent behaves differently from POSIX ones in
that when used on a symbolic link that points at a directory, the
target directory gets removed, which has been corrected.
(merge 3e7d4888e5 tb/mingw-rmdir-symlink-to-directory later to maint).
* Other code cleanup, docfix, build fix, etc.
(merge bfe35a6165 ah/doc-describe later to maint).
(merge f302c1e4aa jc/clarify-revision-range later to maint).
(merge 3127ff90ea tl/fix-packfile-uri-doc later to maint).
(merge a84216c684 jk/doc-color-pager later to maint).
(merge 4e0a64a713 ab/trace2-squelch-gcc-warning later to maint).
(merge 225f7fa847 ps/rev-list-object-type-filter later to maint).
(merge 5317dfeaed dd/honor-users-tar-in-tests later to maint).
(merge ace6d8e3d6 tk/partial-clone-repack-doc later to maint).
(merge 7ba68e0cf1 js/trace2-discard-event-docfix later to maint).
(merge 8603c419d3 fc/doc-default-to-upstream-config later to maint).
(merge 1d72b604ef jk/revision-squelch-gcc-warning later to maint).
(merge abcb66c614 ar/typofix later to maint).
(merge 9853830787 ah/graph-typofix later to maint).
(merge aac578492d ab/config-hooks-path-testfix later to maint).
(merge 98c7656a18 ar/more-typofix later to maint).
(merge 6fb9195f6c jk/doc-max-pack-size later to maint).
(merge 4184cbd635 ar/mailinfo-memcmp-to-skip-prefix later to maint).
(merge 91d2347033 ar/doc-libera-chat-in-my-first-contrib later to maint).
(merge 338abb0f04 ab/cmd-foo-should-return later to maint).
(merge 546096a5cb ab/xdiff-bug-cleanup later to maint).
(merge b7b793d1e7 ab/progress-cleanup later to maint).
(merge d94f9b8e90 ba/object-info later to maint).
(merge 52ff891c03 ar/test-code-cleanup later to maint).
(merge a0538e5c8b dd/document-log-decorate-default later to maint).
(merge ce24797d38 mr/cmake later to maint).
(merge 9eb542f2ee ab/pre-auto-gc-hook-test later to maint).
(merge 9fffc38583 bk/doc-commit-typofix later to maint).
(merge 1cf823d8f0 ks/submodule-cleanup later to maint).
(merge ebbf5d2b70 js/config-mak-windows-pcre-fix later to maint).
(merge 617480d75b hn/refs-iterator-peel-returns-boolean later to maint).
(merge 6a24cc71ed ar/submodule-helper-include-cleanup later to maint).
(merge 5632e838f8 rs/khash-alloc-cleanup later to maint).
(merge b1d87fbaf1 jk/typofix later to maint).
(merge e04170697a ab/gitignore-discovery-doc later to maint).
(merge 8232a0ff48 dl/packet-read-response-end-fix later to maint).
(merge eb448631fb dl/diff-merge-base later to maint).
(merge c510928a25 hn/refs-debug-empty-prefix later to maint).
(merge ddcb189d9d tb/bitmap-type-filter-comment-fix later to maint).
(merge 878b399734 pb/submodule-recurse-doc later to maint).
(merge 734283855f jk/config-env-doc later to maint).
(merge 482e1488a9 ab/getcwd-test later to maint).
(merge f0b922473e ar/doc-markup-fix later to maint).

View File

@ -1,138 +0,0 @@
Git 2.33.1 Release Notes
========================
This primarily is to backport various fixes accumulated during the
development towards Git 2.34, the next feature release.
Fixes since v2.33
-----------------
* The unicode character width table (used for output alignment) has
been updated.
* Input validation of "git pack-objects --stdin-packs" has been
corrected.
* Bugfix for common ancestor negotiation recently introduced in "git
push" codepath.
* "git pull" had various corner cases that were not well thought out
around its --rebase backend, e.g. "git pull --ff-only" did not stop
but went ahead and rebased when the history on other side is not a
descendant of our history. The series tries to fix them up.
* "git apply" miscounted the bytes and failed to read to the end of
binary hunks.
* "git range-diff" code clean-up.
* "git commit --fixup" now works with "--edit" again, after it was
broken in v2.32.
* Use upload-artifacts v1 (instead of v2) for 32-bit linux, as the
new version has a blocker bug for that architecture.
* Checking out all the paths from HEAD during the last conflicted
step in "git rebase" and continuing would cause the step to be
skipped (which is expected), but leaves MERGE_MSG file behind in
$GIT_DIR and confuses the next "git commit", which has been
corrected.
* Various bugs in "git rebase -r" have been fixed.
* mmap() imitation used to call xmalloc() that dies upon malloc()
failure, which has been corrected to just return an error to the
caller to be handled.
* "git diff --relative" segfaulted and/or produced incorrect result
when there are unmerged paths.
* The delayed checkout code path in "git checkout" etc. were chatty
even when --quiet and/or --no-progress options were given.
* "git branch -D <branch>" used to refuse to remove a broken branch
ref that points at a missing commit, which has been corrected.
* Build update for Apple clang.
* The parser for the "--nl" option of "git column" has been
corrected.
* "git upload-pack" which runs on the other side of "git fetch"
forgot to take the ref namespaces into account when handling
want-ref requests.
* The sparse-index support can corrupt the index structure by storing
a stale and/or uninitialized data, which has been corrected.
* Buggy tests could damage repositories outside the throw-away test
area we created. We now by default export GIT_CEILING_DIRECTORIES
to limit the damage from such a stray test.
* Even when running "git send-email" without its own threaded
discussion support, a threading related header in one message is
carried over to the subsequent message to result in an unwanted
threading, which has been corrected.
* The output from "git fast-export", when its anonymization feature
is in use, showed an annotated tag incorrectly.
* Recent "diff -m" changes broke "gitk", which has been corrected.
* "git maintenance" scheduler fix for macOS.
* A pathname in an advice message has been made cut-and-paste ready.
* The "git apply -3" code path learned not to bother the lower level
merge machinery when the three-way merge can be trivially resolved
without the content level merge.
* The code that optionally creates the *.rev reverse index file has
been optimized to avoid needless computation when it is not writing
the file out.
* "git range-diff -I... <range> <range>" segfaulted, which has been
corrected.
* The order in which various files that make up a single (conceptual)
packfile has been reevaluated and straightened up. This matters in
correctness, as an incomplete set of files must not be shown to a
running Git.
* The "mode" word is useless in a call to open(2) that does not
create a new file. Such a call in the files backend of the ref
subsystem has been cleaned up.
* "git update-ref --stdin" failed to flush its output as needed,
which potentially led the conversation to a deadlock.
* When "git am --abort" fails to abort correctly, it still exited
with exit status of 0, which has been corrected.
* Correct nr and alloc members of strvec struct to be of type size_t.
* "git stash", where the tentative change involves changing a
directory to a file (or vice versa), was confused, which has been
corrected.
* "git clone" from a repository whose HEAD is unborn into a bare
repository didn't follow the branch name the other side used, which
is corrected.
* "git cvsserver" had a long-standing bug in its authentication code,
which has finally been corrected (it is unclear and is a separate
question if anybody is seriously using it, though).
* "git difftool --dir-diff" mishandled symbolic links.
* Sensitive data in the HTTP trace were supposed to be redacted, but
we failed to do so in HTTP/2 requests.
* "make clean" has been updated to remove leftover .depend/
directories, even when it is not told to use them to compute header
dependencies.
* Protocol v0 clients can get stuck parsing a malformed feature line.
Also contains various documentation updates and code clean-ups.

View File

@ -1,15 +0,0 @@
Git v2.33.2 Release Notes
=========================
This release merges up the fixes that appear in v2.30.3, v2.31.2
and v2.32.1 to address the security issue CVE-2022-24765; see
the release notes for these versions for details.
In addition, it contains the following fixes:
* Squelch over-eager warning message added during this cycle.
* A bug in "git rebase -r" has been fixed.
* One CI task based on Fedora image noticed a not-quite-kosher
construct recently, which has been corrected.

View File

@ -1,4 +0,0 @@
Git v2.33.3 Release Notes
=========================
This release merges up the fixes that appear in v2.33.3.

View File

@ -1,6 +0,0 @@
Git v2.33.4 Release Notes
=========================
This release merges up the fixes that appear in v2.30.5, v2.31.4
and v2.32.3 to address the security issue CVE-2022-29187; see
the release notes for these versions for details.

View File

@ -377,7 +377,7 @@ notes for details).
on that order.
* "git show 'HEAD:Foo[BAR]Baz'" did not interpret the argument as a
rev, i.e. the object named by the pathname with wildcard
rev, i.e. the object named by the the pathname with wildcard
characters in a tree object.
(merge aac4fac nd/dwim-wildcards-as-pathspecs later to maint).

View File

@ -74,9 +74,10 @@ the feature triggers the new behavior when it should, and to show the
feature does not trigger when it shouldn't. After any code change, make
sure that the entire test suite passes.
Pushing to a fork of https://github.com/git/git will use their CI
integration to test your changes on Linux, Mac and Windows. See the
<<GHCI,GitHub CI>> section for details.
If you have an account at GitHub (and you can get one for free to work
on open source projects), you can use their Travis CI integration to
test your changes on Linux, Mac (and hopefully soon Windows). See
GitHub-Travis CI hints section for details.
Do not forget to update the documentation to describe the updated
behavior and make sure that the resulting documentation set formats
@ -166,85 +167,6 @@ or, on an older version of Git without support for --pretty=reference:
git show -s --date=short --pretty='format:%h (%s, %ad)' <commit>
....
[[sign-off]]
=== Certify your work by adding your `Signed-off-by` trailer
To improve tracking of who did what, we ask you to certify that you
wrote the patch or have the right to pass it on under the same license
as ours, by "signing off" your patch. Without sign-off, we cannot
accept your patches.
If (and only if) you certify the below D-C-O:
[[dco]]
.Developer's Certificate of Origin 1.1
____
By making a contribution to this project, I certify that:
a. The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
b. The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
c. The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
d. I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
____
you add a "Signed-off-by" trailer to your commit, that looks like
this:
....
Signed-off-by: Random J Developer <random@developer.example.org>
....
This line can be added by Git if you run the git-commit command with
the -s option.
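For example (a minimal illustration; the commit message shown here is
made up):
....
git commit -s -m "archive: fix a typo in the usage string"

# or add a sign-off to the commit you just created
git commit --amend --signoff --no-edit
....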
Notice that you can place your own `Signed-off-by` trailer when
forwarding somebody else's patch with the above rules for
D-C-O. Indeed you are encouraged to do so. Do not forget to
place an in-body "From: " line at the beginning to properly attribute
the change to its true author (see (2) above).
This procedure originally came from the Linux kernel project, so our
rule is quite similar to theirs, but what exactly it means to sign-off
your patch differs from project to project, so it may be different
from that of the project you are accustomed to.
[[real-name]]
Also notice that a real name is used in the `Signed-off-by` trailer. Please
don't hide your real name.
[[commit-trailers]]
If you like, you can put extra tags at the end:
. `Reported-by:` is used to credit someone who found the bug that
the patch attempts to fix.
. `Acked-by:` says that the person who is more familiar with the area
the patch attempts to modify liked the patch.
. `Reviewed-by:`, unlike the other tags, can only be offered by the
reviewers themselves when they are completely satisfied with the
patch after a detailed analysis.
. `Tested-by:` is used to indicate that the person applied the patch
and found it to have the desired effect.
You can also create your own tag or use one that's in common usage
such as "Thanks-to:", "Based-on-patch-by:", or "Mentored-by:".
[[git-tools]]
=== Generate your patch using Git tools out of your commits.
@ -380,6 +302,86 @@ Do not forget to add trailers such as `Acked-by:`, `Reviewed-by:` and
`Tested-by:` lines as necessary to credit people who helped your
patch, and "cc:" them when sending such a final version for inclusion.
[[sign-off]]
=== Certify your work by adding your `Signed-off-by` trailer
To improve tracking of who did what, we ask you to certify that you
wrote the patch or have the right to pass it on under the same license
as ours, by "signing off" your patch. Without sign-off, we cannot
accept your patches.
If (and only if) you certify the below D-C-O:
[[dco]]
.Developer's Certificate of Origin 1.1
____
By making a contribution to this project, I certify that:
a. The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
b. The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
c. The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
d. I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
____
you add a "Signed-off-by" trailer to your commit, that looks like
this:
....
Signed-off-by: Random J Developer <random@developer.example.org>
....
This line can be added by Git if you run the git-commit command with
the -s option.
Notice that you can place your own `Signed-off-by` trailer when
forwarding somebody else's patch with the above rules for
D-C-O. Indeed you are encouraged to do so. Do not forget to
place an in-body "From: " line at the beginning to properly attribute
the change to its true author (see (2) above).
This procedure originally came from the Linux kernel project, so our
rule is quite similar to theirs, but what exactly it means to sign-off
your patch differs from project to project, so it may be different
from that of the project you are accustomed to.
[[real-name]]
Also notice that a real name is used in the `Signed-off-by` trailer. Please
don't hide your real name.
[[commit-trailers]]
If you like, you can put extra tags at the end:
. `Reported-by:` is used to credit someone who found the bug that
the patch attempts to fix.
. `Acked-by:` says that the person who is more familiar with the area
the patch attempts to modify liked the patch.
. `Reviewed-by:`, unlike the other tags, can only be offered by the
reviewer and means that she is completely satisfied that the patch
is ready for application. It is usually offered only after a
detailed review.
. `Tested-by:` is used to indicate that the person applied the patch
and found it to have the desired effect.
You can also create your own tag or use one that's in common usage
such as "Thanks-to:", "Based-on-patch-by:", or "Mentored-by:".
== Subsystems with dedicated maintainers
Some parts of the system have dedicated maintainers with their own
@ -448,12 +450,13 @@ their trees themselves.
entitled "What's cooking in git.git" and "What's in git.git" giving
the status of various proposed changes.
== GitHub CI[[GHCI]]
[[travis]]
== GitHub-Travis CI hints
With an account at GitHub, you can use GitHub CI to test your changes
on Linux, Mac and Windows. See
https://github.com/git/git/actions/workflows/main.yml for examples of
recent CI runs.
With an account at GitHub (you can get one for free to work on open
source projects), you can use Travis CI to test your changes on Linux,
Mac (and hopefully soon Windows). You can find a successful example
test build here: https://travis-ci.org/git/git/builds/120473209
Follow these steps for the initial setup:
@ -461,18 +464,31 @@ Follow these steps for the initial setup:
You can find detailed instructions how to fork here:
https://help.github.com/articles/fork-a-repo/
After the initial setup, CI will run whenever you push new changes
. Open the Travis CI website: https://travis-ci.org
. Press the "Sign in with GitHub" button.
. Grant Travis CI permissions to access your GitHub account.
You can find more information about the required permissions here:
https://docs.travis-ci.com/user/github-oauth-scopes
. Open your Travis CI profile page: https://travis-ci.org/profile
. Enable Travis CI builds for your Git fork.
After the initial setup, Travis CI will run whenever you push new changes
to your fork of Git on GitHub. You can monitor the test state of all your
branches here: https://github.com/<Your GitHub handle>/git/actions/workflows/main.yml
branches here: https://travis-ci.org/__<Your GitHub handle>__/git/branches
If a branch did not pass all test cases then it is marked with a red
cross. In that case you can click on the failing job and navigate to
"ci/run-build-and-tests.sh" and/or "ci/print-test-failures.sh". You
can also download "Artifacts" which are tarred (or zipped) archives
with test data relevant for debugging.
cross. In that case you can click on the failing Travis CI job and
scroll all the way down in the log. Find the line "<-- Click here to see
detailed test output!" and click on the triangle next to the log line
number to expand the detailed test output. Here is such a failing
example: https://travis-ci.org/git/git/jobs/122676187
Then fix the problem and push your fix to your GitHub fork. This will
trigger a new CI build to ensure all tests pass.
Fix the problem and push your fix to your Git fork. This will trigger
a new Travis CI build to ensure all tests pass.
[[mua]]
== MUA specific hints

View File

@ -27,7 +27,7 @@ blame.ignoreRevsFile::
file names will reset the list of ignored revisions. This option will
be handled before the command line option `--ignore-revs-file`.
blame.markUnblamableLines::
blame.markUnblamables::
Mark lines that were changed by an ignored revision that we could not
attribute to another commit with a '*' in the output of
linkgit:git-blame[1].
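For instance, the two settings can be combined like this (a minimal
sketch; the `.git-blame-ignore-revs` file name is only a common
convention, not required):
------
git config blame.ignoreRevsFile .git-blame-ignore-revs
git config blame.markUnblamableLines true
git blame Documentation/git-blame.txt
------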

View File

@ -127,9 +127,8 @@ color.interactive.<slot>::
interactive commands.
color.pager::
A boolean to specify whether `auto` color modes should colorize
output going to the pager. Defaults to true; set this to false
if your pager does not understand ANSI color codes.
A boolean to enable/disable colored output when the pager is in
use (default is true).
color.push::
A boolean to enable/disable color in push errors. May be set to

View File

@ -118,10 +118,9 @@ diff.orderFile::
relative to the top of the working tree.
diff.renameLimit::
The number of files to consider in the exhaustive portion of
copy/rename detection; equivalent to the 'git diff' option
`-l`. If not set, the default value is currently 1000. This
setting has no effect if rename detection is turned off.
The number of files to consider when performing the copy/rename
detection; equivalent to the 'git diff' option `-l`. This setting
has no effect if rename detection is turned off.
diff.renames::
Whether and how Git detects renames. If set to "false",

View File

@ -69,8 +69,7 @@ fetch.negotiationAlgorithm::
setting defaults to "skipping".
Unknown values will cause 'git fetch' to error out.
+
See also the `--negotiate-only` and `--negotiation-tip` options to
linkgit:git-fetch[1].
See also the `--negotiation-tip` option for linkgit:git-fetch[1].
fetch.showForcedUpdates::
Set to false to enable `--no-show-forced-updates` in

View File

@ -11,7 +11,7 @@ gui.displayUntracked::
in the file list. The default is "true".
gui.encoding::
Specifies the default character encoding to use for displaying of
Specifies the default encoding to use for displaying of
file contents in linkgit:git-gui[1] and linkgit:gitk[1].
It can be overridden by setting the 'encoding' attribute
for relevant files (see linkgit:gitattributes[5]).

View File

@ -14,7 +14,7 @@ merge.defaultToUpstream::
branches at the remote named by `branch.<current branch>.remote`
are consulted, and then they are mapped via `remote.<remote>.fetch`
to their corresponding remote-tracking branches, and the tips of
these tracking branches are merged. Defaults to true.
these tracking branches are merged.
merge.ff::
By default, Git does not create an extra merge commit when merging
@ -33,12 +33,10 @@ merge.verifySignatures::
include::fmt-merge-msg.txt[]
merge.renameLimit::
The number of files to consider in the exhaustive portion of
rename detection during a merge. If not specified, defaults
to the value of diff.renameLimit. If neither
merge.renameLimit nor diff.renameLimit are specified,
currently defaults to 7000. This setting has no effect if
rename detection is turned off.
The number of files to consider when performing rename detection
during a merge; if not specified, defaults to the value of
diff.renameLimit. This setting has no effect if rename detection
is turned off.
merge.renames::
Whether Git detects renames. If set to "false", rename detection

View File

@ -99,23 +99,12 @@ pack.packSizeLimit::
packing to a file when repacking, i.e. the git:// protocol
is unaffected. It can be overridden by the `--max-pack-size`
option of linkgit:git-repack[1]. Reaching this limit results
in the creation of multiple packfiles.
+
Note that this option is rarely useful, and may result in a larger total
on-disk size (because Git will not store deltas between packs), as well
as worse runtime performance (object lookup within multiple packs is
slower than a single pack, and optimizations like reachability bitmaps
cannot cope with multiple packs).
+
If you need to actively run Git using smaller packfiles (e.g., because your
filesystem does not support large files), this option may help. But if
your goal is to transmit a packfile over a medium that supports limited
sizes (e.g., removable media that cannot store the whole repository),
you are likely better off creating a single large packfile and splitting
it using a generic multi-volume archive tool (e.g., Unix `split`).
+
The minimum size allowed is limited to 1 MiB. The default is unlimited.
Common unit suffixes of 'k', 'm', or 'g' are supported.
in the creation of multiple packfiles; which in turn prevents
bitmaps from being created.
The minimum size allowed is limited to 1 MiB.
The default is unlimited.
Common unit suffixes of 'k', 'm', or 'g' are
supported.
pack.useBitmaps::
When true, git will use pack bitmaps (if available) when packing

View File

@ -1,10 +1,10 @@
protocol.allow::
If set, provide a user defined default policy for all protocols which
don't explicitly have a policy (`protocol.<name>.allow`). By default,
if unset, known-safe protocols (http, https, git, ssh, file) have a
if unset, known-safe protocols (http, https, git, ssh) have a
default policy of `always`, known-dangerous protocols (ext) have a
default policy of `never`, and all other protocols have a default
policy of `user`. Supported policies:
default policy of `never`, and all other protocols (including file)
have a default policy of `user`. Supported policies:
+
--
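As an illustration of setting such a policy (a sketch only;
`protocol.file.allow` follows the `protocol.<name>.allow` pattern
described above, and the repository path is hypothetical):
------
# require explicit opt-in before Git follows file:// URLs
git config --global protocol.file.allow user

# allow it for one trusted clone only
git -c protocol.file.allow=always clone file:///srv/git/project.git
------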

View File

@ -24,14 +24,15 @@ push.default::
* `tracking` - This is a deprecated synonym for `upstream`.
* `simple` - pushes the current branch with the same name on the remote.
* `simple` - in centralized workflow, work like `upstream` with an
added safety to refuse to push if the upstream branch's name is
different from the local one.
+
If you are working on a centralized workflow (pushing to the same repository you
pull from, which is typically `origin`), then you need to configure an upstream
branch with the same name.
When pushing to a remote that is different from the remote you normally
pull from, work as `current`. This is the safest option and is suited
for beginners.
+
This mode is the default since Git 2.0, and is the safest option suited for
beginners.
This mode has become the default in Git 2.0.
* `matching` - push all branches having the same name on both ends.
This makes the repository you are pushing to remember the set of

View File

@ -8,6 +8,9 @@ sendemail.smtpEncryption::
See linkgit:git-send-email[1] for description. Note that this
setting is not subject to the 'identity' mechanism.
sendemail.smtpssl (deprecated)::
Deprecated alias for 'sendemail.smtpEncryption = ssl'.
sendemail.smtpsslcertpath::
Path to ca-certificates (either a directory or a single file).
Set it to an empty string to disable certificate verification.

View File

@ -58,9 +58,8 @@ submodule.active::
commands. See linkgit:gitsubmodules[7] for details.
submodule.recurse::
A boolean indicating if commands should enable the `--recurse-submodules`
option by default.
Applies to all commands that support this option
Specifies if commands recurse into submodules by default. This
applies to all commands that have a `--recurse-submodules` option
(`checkout`, `fetch`, `grep`, `pull`, `push`, `read-tree`, `reset`,
`restore` and `switch`) except `clone` and `ls-files`.
Defaults to false.
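A quick illustration (assuming a repository that already has
submodules set up):
------
git config --global submodule.recurse true
git pull    # now behaves as if --recurse-submodules had been given
------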

View File

@ -52,17 +52,13 @@ If you have multiple hideRefs values, later entries override earlier ones
(and entries in more-specific config files override less-specific ones).
+
If a namespace is in use, the namespace prefix is stripped from each
reference before it is matched against `transfer.hiderefs` patterns. In
order to match refs before stripping, add a `^` in front of the ref name. If
you combine `!` and `^`, `!` must be specified first.
+
reference before it is matched against `transfer.hiderefs` patterns.
For example, if `refs/heads/master` is specified in `transfer.hideRefs` and
the current namespace is `foo`, then `refs/namespaces/foo/refs/heads/master`
is omitted from the advertisements. If `uploadpack.allowRefInWant` is set,
`upload-pack` will treat `want-ref refs/heads/master` in a protocol v2
`fetch` command as if `refs/namespaces/foo/refs/heads/master` did not exist.
`receive-pack`, on the other hand, will still advertise the object id the
ref is pointing to without mentioning its name (a so-called ".have" line).
is omitted from the advertisements but `refs/heads/master` and
`refs/namespaces/bar/refs/heads/master` are still advertised as so-called
"have" lines. In order to match refs before stripping, add a `^` in front of
the ref name. If you combine `!` and `^`, `!` must be specified first.
+
Even if you hide refs, a client may still be able to steal the target
objects via the techniques described in the "SECURITY" section of the

View File

@ -588,17 +588,11 @@ When used together with `-B`, omit also the preimage in the deletion part
of a delete/create pair.
-l<num>::
The `-M` and `-C` options involve some preliminary steps that
can detect subsets of renames/copies cheaply, followed by an
exhaustive fallback portion that compares all remaining
unpaired destinations to all relevant sources. (For renames,
only remaining unpaired sources are relevant; for copies, all
original sources are relevant.) For N sources and
destinations, this exhaustive check is O(N^2). This option
prevents the exhaustive portion of rename/copy detection from
running if the number of source/destination files involved
exceeds the specified number. Defaults to diff.renameLimit.
Note that a value of 0 is treated as unlimited.
The `-M` and `-C` options require O(n^2) processing time where n
is the number of potential rename/copy targets. This
option prevents rename/copy detection from running if
the number of rename/copy targets exceeds the specified
number.
ifndef::git-format-patch[]
--diff-filter=[(A|C|D|M|R|T|U|X|B)...[*]]::

View File

@ -62,17 +62,8 @@ The argument to this option may be a glob on ref names, a ref, or the (possibly
abbreviated) SHA-1 of a commit. Specifying a glob is equivalent to specifying
this option multiple times, one for each matching ref name.
+
See also the `fetch.negotiationAlgorithm` and `push.negotiate`
configuration variables documented in linkgit:git-config[1], and the
`--negotiate-only` option below.
--negotiate-only::
Do not fetch anything from the server, and instead print the
ancestors of the provided `--negotiation-tip=*` arguments,
which we have in common with the server.
+
Internally this is used to implement the `push.negotiate` option, see
linkgit:git-config[1].
See also the `fetch.negotiationAlgorithm` configuration variable
documented in linkgit:git-config[1].
--dry-run::
Show what would be done, without making any changes.

View File

@ -178,8 +178,6 @@ default. You can use `--no-utf8` to override this.
--abort::
Restore the original branch and abort the patching operation.
Revert contents of files involved in the am operation to their
pre-am state.
--quit::
Abort the patching operation but keep HEAD and the index

View File

@ -118,8 +118,7 @@ OPTIONS
Reset <branchname> to <startpoint>, even if <branchname> exists
already. Without `-f`, 'git branch' refuses to change an existing branch.
In combination with `-d` (or `--delete`), allow deleting the
branch irrespective of its merged status, or whether it even
points to a valid commit. In combination with
branch irrespective of its merged status. In combination with
`-m` (or `--move`), allow renaming the branch even if the new
branch name already exists, the same applies for `-c` (or `--copy`).

View File

@ -40,8 +40,8 @@ OPTIONS
-------
-o <path>::
--output-directory <path>::
Place the resulting bug report file in `<path>` instead of the current
directory.
Place the resulting bug report file in `<path>` instead of the root of
the Git repository.
-s <format>::
--suffix <format>::

View File

@ -18,48 +18,21 @@ SYNOPSIS
DESCRIPTION
-----------
Create, unpack, and manipulate "bundle" files. Bundles are used for
the "offline" transfer of Git objects without an active "server"
sitting on the other side of the network connection.
Some workflows require that one or more branches of development on one
machine be replicated on another machine, but the two machines cannot
be directly connected, and therefore the interactive Git protocols (git,
ssh, http) cannot be used.
They can be used to create both incremental and full backups of a
repository, and to relay the state of the references in one repository
to another.
The 'git bundle' command packages objects and references in an archive
at the originating machine, which can then be imported into another
repository using 'git fetch', 'git pull', or 'git clone',
after moving the archive by some means (e.g., by sneakernet).
Git commands that fetch or otherwise "read" via protocols such as
`ssh://` and `https://` can also operate on bundle files. It is
possible to clone a new repository from a bundle with
linkgit:git-clone[1], to use linkgit:git-fetch[1] to fetch from one,
and to list the references contained within it with
linkgit:git-ls-remote[1]. There's no corresponding "write" support,
i.e. a 'git push' into a bundle is not supported.
See the "EXAMPLES" section below for examples of how to use bundles.
BUNDLE FORMAT
-------------
Bundles are `.pack` files (see linkgit:git-pack-objects[1]) with a
header indicating what references are contained within the bundle.
Like the packed archive format itself, bundles can either be
self-contained, or be created using exclusions.
See the "OBJECT PREREQUISITES" section below.
Bundles created using revision exclusions are "thin packs" created
using the `--thin` option to linkgit:git-pack-objects[1], and
unbundled using the `--fix-thin` option to linkgit:git-index-pack[1].
There is no option to create a "thick pack" when using revision
exclusions, and users should not be concerned about the difference. By
using "thin packs", bundles created using exclusions are smaller in
size. That they're "thin" under the hood is merely noted here as a
curiosity, and as a reference to other documentation.
See link:technical/bundle-format.html[the `bundle-format`
documentation] for more details and the discussion of "thin pack" in
link:technical/pack-format.html[the pack format documentation] for
further details.
As no
direct connection between the repositories exists, the user must specify a
basis for the bundle that is held by the destination repository: the
bundle assumes that all objects in the basis are already in the
destination repository.
OPTIONS
-------
@ -144,88 +117,28 @@ unbundle <file>::
SPECIFYING REFERENCES
---------------------
Revisions must be accompanied by reference names to be packaged in a
bundle.
More than one reference may be packaged, and more than one set of prerequisite objects can
be specified. The objects packaged are those not contained in the
union of the prerequisites.
The 'git bundle create' command resolves the reference names for you
using the same rules as `git rev-parse --abbrev-ref=loose`. Each
prerequisite can be specified explicitly (e.g. `^master~10`), or implicitly
(e.g. `master~10..master`, `--since=10.days.ago master`).
All of these simple cases are OK (assuming we have a "master" and
"next" branch):
----------------
$ git bundle create master.bundle master
$ echo master | git bundle create master.bundle --stdin
$ git bundle create master-and-next.bundle master next
$ (echo master; echo next) | git bundle create master-and-next.bundle --stdin
----------------
And so are these (and the same but omitted `--stdin` examples):
----------------
$ git bundle create recent-master.bundle master~10..master
$ git bundle create recent-updates.bundle master~10..master next~5..next
----------------
A revision name or a range whose right-hand-side cannot be resolved to
a reference is not accepted:
----------------
$ git bundle create HEAD.bundle $(git rev-parse HEAD)
fatal: Refusing to create empty bundle.
$ git bundle create master-yesterday.bundle master~10..master~5
fatal: Refusing to create empty bundle.
----------------
OBJECT PREREQUISITES
--------------------
When creating bundles it is possible to create a self-contained bundle
that can be unbundled in a repository with no common history, as well
as providing negative revisions to exclude objects needed in the
earlier parts of the history.
Feeding a revision such as `new` to `git bundle create` will create a
bundle file that contains all the objects reachable from the revision
`new`. That bundle can be unbundled in any repository to obtain a full
history that leads to the revision `new`:
----------------
$ git bundle create full.bundle new
----------------
A revision range such as `old..new` will produce a bundle file that
will require the revision `old` (and any objects reachable from it)
to exist for the bundle to be "unbundle"-able:
----------------
$ git bundle create full.bundle old..new
----------------
A self-contained bundle without any prerequisites can be extracted
into anywhere, even into an empty repository, or be cloned from
(i.e., `new`, but not `old..new`).
'git bundle' will only package references that are shown by
'git show-ref': this includes heads, tags, and remote heads. References
such as `master~1` cannot be packaged, but are perfectly suitable for
defining the basis. More than one reference may be packaged, and more
than one basis can be specified. The objects packaged are those not
contained in the union of the given bases. Each basis can be
specified explicitly (e.g. `^master~10`), or implicitly (e.g.
`master~10..master`, `--since=10.days.ago master`).
It is very important that the basis used be held by the destination.
It is okay to err on the side of caution, causing the bundle file
to contain objects already in the destination, as these are ignored
when unpacking at the destination.
`git clone` can use any bundle created without negative refspecs
(e.g., `new`, but not `old..new`).
If you want to match `git clone --mirror`, which would include your
refs such as `refs/remotes/*`, use `--all`.
If you want to provide the same set of refs that a clone directly
from the source repository would get, use `--branches --tags` for
the `<git-rev-list-args>`.
The 'git bundle verify' command can be used to check whether your
recipient repository has the required prerequisite commits for a
bundle.
EXAMPLES
--------
@ -236,7 +149,7 @@ but we can move data from A to B via some mechanism (CD, email, etc.).
We want to update R2 with development made on the branch master in R1.
To bootstrap the process, you can first create a bundle that does not have
any prerequisites. You can use a tag to remember up to what commit you last
any basis. You can use a tag to remember up to what commit you last
processed, in order to make it easy to later update the other repository
with an incremental bundle:
@ -287,7 +200,7 @@ machineB$ git pull
If you know up to what commit the intended recipient repository should
have the necessary objects, you can use that knowledge to specify the
prerequisites, giving a cut-off point to limit the revisions and objects that go
basis, giving a cut-off point to limit the revisions and objects that go
in the resulting bundle. The previous example used the lastR2bundle tag
for this purpose, but you can use any other options that you would give to
the linkgit:git-log[1] command. Here are more examples:
@ -298,7 +211,7 @@ You can use a tag that is present in both:
$ git bundle create mybundle v1.0.0..master
----------------
You can use a prerequisite based on time:
You can use a basis based on time:
----------------
$ git bundle create mybundle --since=10.days master
@ -311,7 +224,7 @@ $ git bundle create mybundle -10 master
----------------
You can run `git-bundle verify` to see if you can extract from a bundle
that was created with a prerequisite:
that was created with a basis:
----------------
$ git bundle verify mybundle

View File

@ -39,7 +39,7 @@ OPTIONS
--indent=<string>::
String to be printed at the beginning of each line.
--nl=<string>::
--nl=<N>::
String to be printed at the end of each line,
including newline character.
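For example (a rough sketch; the input words are arbitrary):
------
$ printf '%s\n' alpha beta gamma delta | git column --mode=column --indent='  '
------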

View File

@ -72,7 +72,7 @@ OPTIONS
-p::
--patch::
Use the interactive patch selection interface to choose
Use the interactive patch selection interface to chose
which changes to commit. See linkgit:git-add[1] for
details.

View File

@ -71,10 +71,6 @@ codes are:
On success, the command returns the exit code 0.
A list of all available configuration variables can be obtained using the
`git help --config` command.
[[OPTIONS]]
OPTIONS
-------
@ -147,13 +143,7 @@ See also <<FILES>>.
-f config-file::
--file config-file::
For writing options: write to the specified file rather than the
repository `.git/config`.
+
For reading options: read only from the specified file rather than from all
available files.
+
See also <<FILES>>.
Use the given config file instead of the one specified by GIT_CONFIG.
--blob blob::
Similar to `--file` but use the given blob instead of a file. E.g.
@ -335,14 +325,21 @@ All writing options will per default write to the repository specific
configuration file. Note that this also affects options like `--replace-all`
and `--unset`. *'git config' will only ever change one file at a time*.
You can override these rules using the `--global`, `--system`,
`--local`, `--worktree`, and `--file` command-line options; see
<<OPTIONS>> above.
You can override these rules either by command-line options or by environment
variables. The `--global`, `--system` and `--worktree` options will limit
the file used to the global, system-wide or per-worktree file respectively.
The `GIT_CONFIG` environment variable has a similar effect, but you
can specify any filename you want.
ENVIRONMENT
-----------
GIT_CONFIG::
Take the configuration from the given file instead of .git/config.
Using the "--global" option forces this to ~/.gitconfig. Using the
"--system" option forces this to $(prefix)/etc/gitconfig.
GIT_CONFIG_GLOBAL::
GIT_CONFIG_SYSTEM::
Take the configuration from the given files instead from global or
@ -370,12 +367,6 @@ This is useful for cases where you want to spawn multiple git commands
with a common configuration but cannot depend on a configuration file,
for example when writing scripts.
GIT_CONFIG::
If no `--file` option is provided to `git config`, use the file
given by `GIT_CONFIG` as if it were provided via `--file`. This
variable has no effect on other Git commands, and is mostly for
historical compatibility; there is generally no reason to use it
instead of the `--file` option.
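Before turning to the broader examples below, here is how `--file` and
`GIT_CONFIG` interact in practice (a sketch; `custom.config` is a
hypothetical file):
------
$ git config --file=custom.config core.editor vim
$ git config --file=custom.config --list
$ GIT_CONFIG=custom.config git config --list    # equivalent, for `git config` only
------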
[[EXAMPLES]]
EXAMPLES

View File

@ -99,7 +99,7 @@ looks like
------
Only anonymous access is provided by pserver by default. To commit you
Only anonymous access is provided by pserve by default. To commit you
will have to create pserver accounts, simply add a gitcvs.authdb
setting in the config file of the repositories you want the cvsserver
to allow writes to, for example:
@ -114,20 +114,21 @@ The format of these files is username followed by the encrypted password,
for example:
------
myuser:sqkNi8zPf01HI
myuser:$1$9K7FzU28$VfF6EoPYCJEYcVQwATgOP/
myuser:$5$.NqmNH1vwfzGpV8B$znZIcumu1tNLATgV2l6e1/mY8RzhUDHMOaVOeL1cxV3
myuser:$1Oyx5r9mdGZ2
myuser:$1$BA)@$vbnMJMDym7tA32AamXrm./
------
You can use the 'htpasswd' facility that comes with Apache to make these
files, but only with the -d option (or -B if your system supports it).
files, but Apache's MD5 crypt method differs from the one used by most C
library's crypt() function, so don't use the -m option.
Preferably use the system specific utility that manages password hash
creation in your platform (e.g. mkpasswd in Linux, encrypt in OpenBSD or
pwhash in NetBSD) and paste it in the right location.
Alternatively you can produce the password with perl's crypt() operator:
-----
perl -e 'my ($user, $pass) = @ARGV; printf "%s:%s\n", $user, crypt($user, $pass)' $USER password
-----
Then provide your password via the pserver method, for example:
------
cvs -d:pserver:someuser:somepassword@server:/path/repo.git co <HEAD_name>
cvs -d:pserver:someuser:somepassword <at> server/path/repo.git co <HEAD_name>
------
No special setup is needed for SSH access, other than having Git tools
in the PATH. If you have clients that do not accept the CVS_SERVER
@ -137,7 +138,7 @@ Note: Newer CVS versions (>= 1.12.11) also support specifying
CVS_SERVER directly in CVSROOT like
------
cvs -d ":ext;CVS_SERVER=git cvsserver:user@server/path/repo.git" co <HEAD_name>
cvs -d ":ext;CVS_SERVER=git cvsserver:user@server/path/repo.git" co <HEAD_name>
------
This has the advantage that it will be saved in your 'CVS/Root' files and
you don't need to worry about always setting the correct environment
@ -185,8 +186,8 @@ allowing access over SSH.
+
--
------
export CVSROOT=:ext:user@server:/var/git/project.git
export CVS_SERVER="git cvsserver"
export CVSROOT=:ext:user@server:/var/git/project.git
export CVS_SERVER="git cvsserver"
------
--
4. For SSH clients that will make commits, make sure their server-side
@ -202,7 +203,7 @@ allowing access over SSH.
`project-master` directory:
+
------
cvs co -d project-master master
cvs co -d project-master master
------
[[dbbackend]]

View File

@ -63,10 +63,9 @@ OPTIONS
Automatically implies --tags.
--abbrev=<n>::
Instead of using the default number of hexadecimal digits (which
will vary according to the number of objects in the repository with
a default of 7) of the abbreviated object name, use <n> digits, or
as many digits as needed to form a unique object name. An <n> of 0
Instead of using the default 7 hexadecimal digits as the
abbreviated object name, use <n> digits, or as many digits
as needed to form a unique object name. An <n> of 0
will suppress long format, only showing the closest tag.
--candidates=<n>::
@ -140,11 +139,8 @@ at the end.
The number of additional commits is the number
of commits which would be displayed by "git log v1.0.4..parent".
The hash suffix is "-g" + an unambigous abbreviation for the tip commit
of parent (which was `2414721b194453f058079d897d13c4e377f92dc6`). The
length of the abbreviation scales as the repository grows, using the
approximate number of objects in the repository and a bit of math
around the birthday paradox, and defaults to a minimum of 7.
The hash suffix is "-g" + unambiguous abbreviation for the tip commit
of parent (which was `2414721b194453f058079d897d13c4e377f92dc6`).
The "g" prefix stands for "git" and is used to allow describing the version of
a software depending on the SCM the software is managed with. This is useful
in an environment where people may use different SCMs.
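As a concrete illustration of the format discussed above (using the
same example values as this section):
------
$ git describe parent
v1.0.4-14-g2414721
$ git describe --abbrev=0 parent
v1.0.4
------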

View File

@ -51,20 +51,16 @@ files on disk.
--staged is a synonym of --cached.
+
If --merge-base is given, instead of using <commit>, use the merge base
of <commit> and HEAD. `git diff --cached --merge-base A` is equivalent to
`git diff --cached $(git merge-base A HEAD)`.
of <commit> and HEAD. `git diff --merge-base A` is equivalent to
`git diff $(git merge-base A HEAD)`.
'git diff' [<options>] [--merge-base] <commit> [--] [<path>...]::
'git diff' [<options>] <commit> [--] [<path>...]::
This form is to view the changes you have in your
working tree relative to the named <commit>. You can
use HEAD to compare it with the latest commit, or a
branch name to compare with the tip of a different
branch.
+
If --merge-base is given, instead of using <commit>, use the merge base
of <commit> and HEAD. `git diff --merge-base A` is equivalent to
`git diff $(git merge-base A HEAD)`.
'git diff' [<options>] [--merge-base] <commit> <commit> [--] [<path>...]::

View File

@ -133,7 +133,7 @@ remember to run that, set `fetch.prune` globally, or
linkgit:git-config[1].
Here's where things get tricky and more specific. The pruning feature
doesn't actually care about branches, instead it'll prune local <-->
doesn't actually care about branches, instead it'll prune local <->
remote-references as a function of the refspec of the remote (see
`<refspec>` and <<CRTB,CONFIGURED REMOTE-TRACKING BRANCHES>> above).

View File

@ -39,9 +39,7 @@ OPTIONS
full ref name (including prefix) will be printed. If 'auto' is
specified, then if the output is going to a terminal, the ref names
are shown as if 'short' were given, otherwise no ref names are
shown. The option `--decorate` is short-hand for `--decorate=short`.
Default to configuration value of `log.decorate` if configured,
otherwise, `auto`.
shown. The default option is 'short'.
--decorate-refs=<pattern>::
--decorate-refs-exclude=<pattern>::

View File

@ -61,8 +61,6 @@ merge has resulted in conflicts.
OPTIONS
-------
:git-merge: 1
include::merge-options.txt[]
-m <msg>::

View File

@ -128,10 +128,10 @@ depth is 4095.
into multiple independent packfiles, each not larger than the
given size. The size can be suffixed with
"k", "m", or "g". The minimum size allowed is limited to 1 MiB.
This option
prevents the creation of a bitmap index.
The default is unlimited, unless the config variable
`pack.packSizeLimit` is set. Note that this option may result in
a larger and slower repository; see the discussion in
`pack.packSizeLimit`.
`pack.packSizeLimit` is set.
--honor-pack-keep::
This flag causes an object already in a local pack that

View File

@ -15,17 +15,14 @@ SYNOPSIS
DESCRIPTION
-----------
Incorporates changes from a remote repository into the current branch.
If the current branch is behind the remote, then by default it will
fast-forward the current branch to match the remote. If the current
branch and the remote have diverged, the user needs to specify how to
reconcile the divergent branches with `--rebase` or `--no-rebase` (or
the corresponding configuration option in `pull.rebase`).
Incorporates changes from a remote repository into the current
branch. In its default mode, `git pull` is shorthand for
`git fetch` followed by `git merge FETCH_HEAD`.
More precisely, `git pull` runs `git fetch` with the given parameters
and then depending on configuration options or command line flags,
will call either `git rebase` or `git merge` to reconcile diverging
branches.
More precisely, 'git pull' runs 'git fetch' with the given
parameters and calls 'git merge' to merge the retrieved branch
heads into the current branch.
With `--rebase`, it runs 'git rebase' instead of 'git merge'.
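For example, the choice can be made explicit per invocation or via
configuration (a sketch; `origin` and `main` are placeholder remote
and branch names):
------
$ git pull --rebase origin main
$ git config --global pull.rebase true    # make rebasing the default reconciliation
------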
<repository> should be the name of a remote repository as
passed to linkgit:git-fetch[1]. <refspec> can name an
@ -120,7 +117,7 @@ When set to `preserve` (deprecated in favor of `merges`), rebase with the
`--preserve-merges` option passed to `git rebase` so that locally created
merge commits will not be flattened.
+
When false, merge the upstream branch into the current branch.
When false, merge the current branch into the upstream branch.
+
When `interactive`, enable the interactive mode of rebase.
+
@ -135,7 +132,7 @@ published that history already. Do *not* use this option
unless you have read linkgit:git-rebase[1] carefully.
--no-rebase::
This is shorthand for --rebase=false.
Override earlier --rebase.
Options related to fetching
~~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -244,8 +244,8 @@ Imagine that you have to rebase what you have already published.
You will have to bypass the "must fast-forward" rule in order to
replace the history you originally published with the rebased history.
If somebody else built on top of your original history while you are
rebasing, the tip of the branch at the remote may advance with their
commit, and blindly pushing with `--force` will lose their work.
rebasing, the tip of the branch at the remote may advance with her
commit, and blindly pushing with `--force` will lose her work.
+
This option allows you to say that you expect the history you are
updating is what you rebased and want to replace. If the remote ref

View File

@ -340,7 +340,9 @@ See also INCOMPATIBLE OPTIONS below.
-m::
--merge::
Using merging strategies to rebase (default).
Use merging strategies to rebase. When the recursive (default) merge
strategy is used, this allows rebase to be aware of renames on the
upstream side. This is the default.
+
Note that a rebase merge works by replaying each commit from the working
branch on top of the <upstream> branch. Because of this, when a merge
@ -352,8 +354,9 @@ See also INCOMPATIBLE OPTIONS below.
-s <strategy>::
--strategy=<strategy>::
Use the given merge strategy, instead of the default
`recursive`. This implies `--merge`.
Use the given merge strategy.
If there is no `-s` option 'git merge-recursive' is used
instead. This implies --merge.
+
Because 'git rebase' replays each commit from the working branch
on top of the <upstream> branch using the given strategy, using
@ -527,7 +530,7 @@ The `--rebase-merges` mode is similar in spirit to the deprecated
where commits can be reordered, inserted and dropped at will.
+
It is currently only possible to recreate the merge commits using the
`recursive` merge strategy; different merge strategies can be used only via
`recursive` merge strategy; Different merge strategies can be used only via
explicit `exec git merge -s <strategy> [...]` commands.
+
See also REBASING MERGES and INCOMPATIBLE OPTIONS below.
@ -1216,16 +1219,12 @@ successful merge so that the user can edit the message.
If a `merge` command fails for any reason other than merge conflicts (i.e.
when the merge operation did not even start), it is rescheduled immediately.
By default, the `merge` command will use the `recursive` merge
strategy for regular merges, and `octopus` for octopus merges. One
can specify a default strategy for all merges using the `--strategy`
argument when invoking rebase, or can override specific merges in the
interactive list of commands by using an `exec` command to call `git
merge` explicitly with a `--strategy` argument. Note that when
calling `git merge` explicitly like this, you can make use of the fact
that the labels are worktree-local refs (the ref `refs/rewritten/onto`
would correspond to the label `onto`, for example) in order to refer
to the branches you want to merge.
At this time, the `merge` command will *always* use the `recursive`
merge strategy for regular merges, and `octopus` for octopus merges,
with no way to choose a different one. To work around
this, an `exec` command can be used to call `git merge` explicitly,
using the fact that the labels are worktree-local refs (the ref
`refs/rewritten/onto` would correspond to the label `onto`, for example).
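A rough sketch of what such a todo-list fragment could look like (the
commit hash and the `my-topic` label are hypothetical):
------
label onto

# rebuild the side branch, then recreate its merge with an explicit strategy
reset onto
pick 0123abc Add feature
label my-topic

reset onto
exec git merge -s resolve --no-edit refs/rewritten/my-topic
------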
Note: the first command (`label onto`) labels the revision onto which
the commits are rebased; The name `onto` is just a convention, as a nod

View File

@ -121,9 +121,7 @@ depth is 4095.
If specified, multiple packfiles may be created, which also
prevents the creation of a bitmap index.
The default is unlimited, unless the config variable
`pack.packSizeLimit` is set. Note that this option may result in
a larger and slower repository; see the discussion in
`pack.packSizeLimit`.
`pack.packSizeLimit` is set.
-b::
--write-bitmap-index::

View File

@ -167,14 +167,6 @@ Sending
`sendemail.envelopeSender` configuration variable; if that is
unspecified, choosing the envelope sender is left to your MTA.
--sendmail-cmd=<command>::
Specify a command to run to send the email. The command should
be sendmail-like; specifically, it must support the `-i` option.
The command will be executed in the shell if necessary. Default
is the value of `sendemail.sendmailcmd`. If unspecified, and if
--smtp-server is also unspecified, git-send-email will search
for `sendmail` in `/usr/sbin`, `/usr/lib` and $PATH.
--smtp-encryption=<encryption>::
Specify the encryption to use, either 'ssl' or 'tls'. Any other
value reverts to plain SMTP. Default is the value of
@ -219,16 +211,13 @@ a password is obtained using 'git-credential'.
--smtp-server=<host>::
If set, specifies the outgoing SMTP server to use (e.g.
`smtp.example.com` or a raw IP address). If unspecified, and if
`--sendmail-cmd` is also unspecified, the default is to search
for `sendmail` in `/usr/sbin`, `/usr/lib` and $PATH if such a
program is available, falling back to `localhost` otherwise.
+
For backward compatibility, this option can also specify a full pathname
of a sendmail-like program instead; the program must support the `-i`
option. This method does not support passing arguments or using plain
command names. For those use cases, consider using `--sendmail-cmd`
instead.
`smtp.example.com` or a raw IP address). Alternatively it can
specify a full pathname of a sendmail-like program instead;
the program must support the `-i` option. Default value can
be specified by the `sendemail.smtpServer` configuration
option; the built-in default is to search for `sendmail` in
`/usr/sbin`, `/usr/lib` and $PATH if such program is
available, falling back to `localhost` otherwise.
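For instance (a sketch; the patch file name is made up and
`smtp.example.com` is the placeholder host used above):
------
$ git send-email --smtp-server=smtp.example.com \
      --smtp-server-port=587 --smtp-encryption=tls \
      0001-my-change.patch
------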
--smtp-server-port=<port>::
Specifies a port different from the default port (SMTP

View File

@ -1,28 +0,0 @@
git-version(1)
==============
NAME
----
git-version - Display version information about Git
SYNOPSIS
--------
[verse]
'git version' [--build-options]
DESCRIPTION
-----------
With no options given, the version of 'git' is printed on the standard output.
Note that `git --version` is identical to `git version` because the
former is internally converted into the latter.
OPTIONS
-------
--build-options::
Include additional information about how git was built for diagnostic
purposes.
GIT
---
Part of the linkgit:git[1] suite

View File

@ -9,7 +9,7 @@ git-worktree - Manage multiple working trees
SYNOPSIS
--------
[verse]
'git worktree add' [-f] [--detach] [--checkout] [--lock [--reason <string>]] [-b <new-branch>] <path> [<commit-ish>]
'git worktree add' [-f] [--detach] [--checkout] [--lock] [-b <new-branch>] <path> [<commit-ish>]
'git worktree list' [--porcelain]
'git worktree lock' [--reason <string>] <worktree>
'git worktree move' <worktree> <new-path>
@ -242,7 +242,7 @@ With `list`, annotate missing working trees as prunable if they are
older than `<time>`.
--reason <string>::
With `lock` or with `add --lock`, an explanation why the working tree is locked.
With `lock`, an explanation why the working tree is locked.
<worktree>::
Working trees can be identified by path, either relative or
@ -387,7 +387,7 @@ These annotations are:
------------
$ git worktree list
/path/to/linked-worktree abcd1234 [master]
/path/to/locked-worktree acbd5678 (brancha) locked
/path/to/locked-worktreee acbd5678 (brancha) locked
/path/to/prunable-worktree 5678abc (detached HEAD) prunable
------------

View File

@ -41,10 +41,6 @@ OPTIONS
-------
--version::
Prints the Git suite version that the 'git' program came from.
+
This option is internally converted to `git version ...` and accepts
the same options as the linkgit:git-version[1] command. If `--help` is
also given, it takes precedence over `--version`.
--help::
Prints the synopsis and a list of the most commonly used

View File

@ -27,11 +27,12 @@ precedence, the last matching pattern decides the outcome):
them.
* Patterns read from a `.gitignore` file in the same directory
as the path, or in any parent directory (up to the top-level of the working
tree), with patterns in the higher level files being overridden by those in
lower level files down to the directory containing the file. These patterns
match relative to the location of the `.gitignore` file. A project normally
includes such `.gitignore` files in its repository, containing patterns for
as the path, or in any parent directory, with patterns in the
higher level files (up to the toplevel of the work tree) being overridden
by those in lower level files down to the directory containing the file.
These patterns match relative to the location of the
`.gitignore` file. A project normally includes such
`.gitignore` files in its repository, containing patterns for
files generated as part of the project build.
* Patterns read from `$GIT_DIR/info/exclude`.

View File

@ -322,7 +322,7 @@ initiating this "pull". If Bob's work conflicts with what Alice did since
their histories forked, Alice will use her working tree and the index to
resolve conflicts, and existing local changes will interfere with the
conflict resolution process (Git will still perform the fetch but will
refuse to merge -- Alice will have to get rid of her local changes in
refuse to merge --- Alice will have to get rid of her local changes in
some way and pull again when this happens).
Alice can peek at what Bob did without merging first, using the "fetch"

View File

@ -146,8 +146,8 @@ current branch integrates with) obviously do not work, as there is no
<<def_revision,revision>> and you are "merging" another
<<def_branch,branch>>'s changes that happen to be a descendant of what
you have. In such a case, you do not make a new <<def_merge,merge>>
<<def_commit,commit>> but instead just update your branch to point at the same
revision as the branch you are merging. This will happen frequently on a
<<def_commit,commit>> but instead just update to his
revision. This will happen frequently on a
<<def_remote_tracking_branch,remote-tracking branch>> of a remote
<<def_repository,repository>>.

View File

@ -2,9 +2,6 @@
--no-commit::
Perform the merge and commit the result. This option can
be used to override --no-commit.
ifdef::git-pull[]
Only useful when merging.
endif::git-pull[]
+
With --no-commit perform the merge and stop just before creating
a merge commit, to give the user a chance to inspect and further
@ -42,7 +39,6 @@ set to `no` at the beginning of them.
to `MERGE_MSG` before being passed on to the commit machinery in the
case of a merge conflict.
ifdef::git-merge[]
--ff::
--no-ff::
--ff-only::
@ -51,22 +47,6 @@ ifdef::git-merge[]
default unless merging an annotated (and possibly signed) tag
that is not stored in its natural place in the `refs/tags/`
hierarchy, in which case `--no-ff` is assumed.
endif::git-merge[]
ifdef::git-pull[]
--ff-only::
Only update to the new history if there is no divergent local
history. This is the default when no method for reconciling
divergent histories is provided (via the --rebase=* flags).
--ff::
--no-ff::
When merging rather than rebasing, specifies how a merge is
handled when the merged-in history is already a descendant of
the current history. If merging is requested, `--ff` is the
default unless merging an annotated (and possibly signed) tag
that is not stored in its natural place in the `refs/tags/`
hierarchy, in which case `--no-ff` is assumed.
endif::git-pull[]
+
With `--ff`, when possible resolve the merge as a fast-forward (only
update the branch pointer to match the merged branch; do not create a
@ -75,11 +55,9 @@ descendant of the current history), create a merge commit.
+
With `--no-ff`, create a merge commit in all cases, even when the merge
could instead be resolved as a fast-forward.
ifdef::git-merge[]
+
With `--ff-only`, resolve the merge as a fast-forward when possible.
When not possible, refuse to merge and exit with a non-zero status.
endif::git-merge[]
-S[<keyid>]::
--gpg-sign[=<keyid>]::
@ -95,9 +73,6 @@ endif::git-merge[]
In addition to branch names, populate the log message with
one-line descriptions from at most <n> actual commits that are being
merged. See also linkgit:git-fmt-merge-msg[1].
ifdef::git-pull[]
Only useful when merging.
endif::git-pull[]
+
With --no-log do not list one-line descriptions from the
actual commits being merged.
@ -127,25 +102,18 @@ With --no-squash perform the merge and commit the result. This
option can be used to override --squash.
+
With --squash, --commit is not allowed, and will fail.
ifdef::git-pull[]
+
Only useful when merging.
endif::git-pull[]
--no-verify::
This option bypasses the pre-merge and commit-msg hooks.
See also linkgit:githooks[5].
ifdef::git-pull[]
Only useful when merging.
endif::git-pull[]
-s <strategy>::
--strategy=<strategy>::
Use the given merge strategy; can be supplied more than
once to specify them in the order they should be tried.
If there is no `-s` option, a built-in list of strategies
is used instead (`recursive` when merging a single head,
`octopus` otherwise).
is used instead ('git merge-recursive' when merging a single
head, 'git merge-octopus' otherwise).
-X <option>::
--strategy-option=<option>::
@ -159,10 +127,6 @@ endif::git-pull[]
default trust model, this means the signing key has been signed by
a trusted key. If the tip commit of the side branch is not signed
with a valid key, the merge is aborted.
ifdef::git-pull[]
+
Only useful when merging.
endif::git-pull[]
--summary::
--no-summary::
@ -190,8 +154,7 @@ endif::git-pull[]
--autostash::
--no-autostash::
Automatically create a temporary stash entry before the operation
begins, record it in the special ref `MERGE_AUTOSTASH`
and apply it after the operation ends. This means
begins, and apply it after the operation ends. This means
that you can run the operation on a dirty worktree. However, use
with care: the final stash application after a successful
merge might result in non-trivial conflicts.
@ -203,7 +166,3 @@ endif::git-pull[]
projects that started their lives independently. As that is
a very rare occasion, no configuration variable to enable
this by default exists and will not be added.
ifdef::git-pull[]
+
Only useful when merging.
endif::git-pull[]


@ -6,6 +6,13 @@ backend 'merge strategies' to be chosen with `-s` option. Some strategies
can also take their own options, which can be passed by giving `-X<option>`
arguments to `git merge` and/or `git pull`.
resolve::
This can only resolve two heads (i.e. the current branch
and another branch you pulled from) using a 3-way merge
algorithm. It tries to carefully detect criss-cross
merge ambiguities and is considered generally safe and
fast.
recursive::
This can only resolve two heads using a 3-way merge
algorithm. When there is more than one common
@ -16,9 +23,9 @@ recursive::
causing mismerges by tests done on actual merge commits
taken from Linux 2.6 kernel development history.
Additionally this can detect and handle merges involving
renames. It does not make use of detected copies. This
is the default merge strategy when pulling or merging one
branch.
renames, but currently cannot make use of detected
copies. This is the default merge strategy when pulling
or merging one branch.
+
The 'recursive' strategy can take the following options:
@ -37,14 +44,17 @@ theirs;;
no 'theirs' merge strategy to confuse this merge option with.
patience;;
Deprecated synonym for `diff-algorithm=patience`.
With this option, 'merge-recursive' spends a little extra time
to avoid mismerges that sometimes occur due to unimportant
matching lines (e.g., braces from distinct functions). Use
this when the branches to be merged have diverged wildly.
See also linkgit:git-diff[1] `--patience`.
diff-algorithm=[patience|minimal|histogram|myers];;
Use a different diff algorithm while merging, which can help
avoid mismerges that occur due to unimportant matching lines
(such as braces from distinct functions). See also
linkgit:git-diff[1] `--diff-algorithm`. Defaults to the
`diff.algorithm` config setting.
Tells 'merge-recursive' to use a different diff algorithm, which
can help avoid mismerges that occur due to unimportant matching
lines (such as braces from distinct functions). See also
linkgit:git-diff[1] `--diff-algorithm`.
ignore-space-change;;
ignore-all-space;;
@ -95,26 +105,6 @@ subtree[=<path>];;
is prefixed (or stripped from the beginning) to make the shape of
two trees to match.
ort::
This is meant as a drop-in replacement for the `recursive`
algorithm (as reflected in its acronym -- "Ostensibly
Recursive's Twin"), and will likely replace it in the future.
It fixes corner cases that the `recursive` strategy handles
suboptimally, and is significantly faster in large
repositories -- especially when many renames are involved.
+
The `ort` strategy takes all the same options as `recursive`.
However, it ignores three of those options: `no-renames`,
`patience` and `diff-algorithm`. It always runs with rename
detection (it handles it much faster than `recursive` does), and
it specifically uses `diff-algorithm=histogram`.
resolve::
This can only resolve two heads (i.e. the current branch
and another branch you pulled from) using a 3-way merge
algorithm. It tries to carefully detect criss-cross
merge ambiguities. It does not handle renames.
octopus::
This resolves cases with more than two heads, but refuses to do
a complex merge that needs manual resolution. It is


@ -33,16 +33,14 @@ people using 80-column terminals.
used together.
--encoding=<encoding>::
Commit objects record the character encoding used for the log message
The commit objects record the encoding used for the log message
in their encoding header; this option can be used to tell the
command to re-code the commit log message in the encoding
preferred by the user. For non plumbing commands this
defaults to UTF-8. Note that if an object claims to be encoded
in `X` and we are outputting in `X`, we will output the object
verbatim; this means that invalid sequences in the original
commit may be copied to the output. Likewise, if iconv(3) fails
to convert the commit, we will quietly output the original
object verbatim.
commit may be copied to the output.
--expand-tabs=<n>::
--expand-tabs::


@ -897,7 +897,7 @@ which are not of the requested type.
+
The form '--filter=sparse:oid=<blob-ish>' uses a sparse-checkout
specification contained in the blob (or blob-expression) '<blob-ish>'
to omit blobs that would not be required for a sparse checkout on
to omit blobs that would not be not required for a sparse checkout on
the requested refs.
+
The form '--filter=tree:<depth>' omits all blobs and trees whose depth
@ -1064,14 +1064,6 @@ ifdef::git-rev-list[]
--header::
Print the contents of the commit in raw-format; each record is
separated with a NUL character.
--no-commit-header::
Suppress the header line containing "commit" and the object ID printed before
the specified format. This has no effect on the built-in formats; only custom
formats are affected.
--commit-header::
Overrides a previous `--no-commit-header`.
endif::git-rev-list[]
--parents::


@ -260,9 +260,6 @@ any of the given commits.
A commit's reachable set is the commit itself and the commits in
its ancestry chain.
There are several notations to specify a set of connected commits
(called a "revision range"), illustrated below.
Commit Exclusions
~~~~~~~~~~~~~~~~~
@ -297,26 +294,6 @@ is a shorthand for 'HEAD..origin' and asks "What did the origin do since
I forked from them?" Note that '..' would mean 'HEAD..HEAD' which is an
empty range that is both reachable and unreachable from HEAD.
Commands that are specifically designed to take two distinct ranges
(e.g. "git range-diff R1 R2" to compare two ranges) do exist, but
they are exceptions. Unless otherwise noted, all "git" commands
that operate on a set of commits work on a single revision range.
In other words, writing two "two-dot range notation" next to each
other, e.g.
$ git log A..B C..D
does *not* specify two revision ranges for most commands. Instead
it will name a single connected set of commits, i.e. those that are
reachable from either B or D but are reachable from neither A or C.
In a linear history like this:
---A---B---o---o---C---D
because A and B are reachable from C, the revision range specified
by these two dotted ranges is a single commit D.
Other <rev>{caret} Parent Shorthand Notations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Three other shorthands exist, particularly useful for merge commits,


@ -396,14 +396,14 @@ only present on the "start" and "atexit" events.
}
------------
`"too_many_files"`::
`"discard"`::
This event is written to the git-trace2-discard sentinel file if there
are too many files in the target trace directory (see the
trace2.maxFiles config option).
+
------------
{
"event":"too_many_files",
"event":"discard",
...
}
------------


@ -2,9 +2,9 @@ Directory rename detection
==========================
Rename detection logic in diffcore-rename that checks for renames of
individual files is also aggregated there and then analyzed in either
merge-ort or merge-recursive for cases where combinations of renames
indicate that a full directory has been renamed.
individual files is aggregated and analyzed in merge-recursive for cases
where combinations of renames indicate that a full directory has been
renamed.
Scope of abilities
------------------
@ -88,11 +88,9 @@ directory rename detection support in:
Folks have requested in the past that `git diff` detect directory
renames and somehow simplify its output. It is not clear whether this
would be desirable or how the output should be simplified, so this was
simply not implemented. Also, while diffcore-rename has most of the
logic for detecting directory renames, some of the logic is still found
within merge-ort and merge-recursive. Fully supporting directory
rename detection in diffs would require copying or moving the remaining
bits of logic to the diff machinery.
simply not implemented. Further, to implement this, directory rename
detection logic would need to move from merge-recursive to
diffcore-rename.
* am


@ -599,7 +599,7 @@ supports four different modes of operation:
convert any object names written to output to SHA-1, but store
objects using SHA-256. This allows users to test the code with no
visible behavior change except for performance. This allows
running even tests that assume the SHA-1 hash function, to
allows running even tests that assume the SHA-1 hash function, to
sanity-check the behavior of the new mode.
2. ("early transition") Allow both SHA-1 and SHA-256 object names in


@ -35,14 +35,13 @@ include some sort of non-trivial implementation in the Minimum Viable Product,
at least so that we can test the client.
This is the implementation: a feature, marked experimental, that allows the
server to be configured by one or more `uploadpack.blobPackfileUri=
<object-hash> <pack-hash> <uri>` entries. Whenever the list of objects to be
sent is assembled, all such blobs are excluded, replaced with URIs. As noted
in "Future work" below, the server can evolve in the future to support
excluding other objects (or other implementations of servers could be made
that support excluding other objects) without needing a protocol change, so
clients should not expect that packfiles downloaded in this way only contain
single blobs.
server to be configured by one or more `uploadpack.blobPackfileUri=<sha1>
<uri>` entries. Whenever the list of objects to be sent is assembled, all such
blobs are excluded, replaced with URIs. As noted in "Future work" below, the
server can evolve in the future to support excluding other objects (or other
implementations of servers could be made that support excluding other objects)
without needing a protocol change, so clients should not expect that packfiles
downloaded in this way only contain single blobs.
Client design
-------------


@ -242,7 +242,8 @@ remote in a specific order.
repository and can satisfy all such requests.
- Repack essentially treats promisor and non-promisor packfiles as 2
distinct partitions and does not mix them.
distinct partitions and does not mix them. Repack currently only works
on non-promisor packfiles and loose objects.
- Dynamic object fetching invokes fetch-pack once *for each item*
because most algorithms stumble upon a missing object and need to have
@ -272,6 +273,9 @@ to use those promisor remotes in that order."
The user might want to work in a triangular work flow with multiple
promisor remotes that each have an incomplete view of the repository.
- Allow repack to work on promisor packfiles (while keeping them distinct
from non-promisor packfiles).
- Allow non-pathname-based filters to make use of packfile bitmaps (when
present). This was just an omission during the initial implementation.


@ -540,7 +540,7 @@ An `object-info` request takes the following arguments:
Indicates to the server an object which the client wants to obtain
information for.
The response of `object-info` is a list of the requested object ids
The response of `object-info` is a list of the the requested object ids
and associated requested information, each separated by a single space.
output = info flush-pkt


@ -1,671 +0,0 @@
Rebases and cherry-picks involve a sequence of merges whose results are
recorded as new single-parent commits. The first parent side of those
merges represents the "upstream" side, and often includes a far larger set of
changes than the second parent side. Traditionally, the renames on the
first-parent side of that sequence of merges were repeatedly re-detected
for every merge. This file explains why it is safe and effective during
rebases and cherry-picks to remember renames on the upstream side of
history as an optimization, assuming all merges are automatic and clean
(i.e. no conflicts and not interrupted for user input or editing).
Outline:
0. Assumptions
1. How rebasing and cherry-picking work
2. Why the renames on MERGE_SIDE1 in any given pick are *always* a
superset of the renames on MERGE_SIDE1 for the next pick.
3. Why any rename on MERGE_SIDE1 in any given pick is _almost_ always also
a rename on MERGE_SIDE1 for the next pick
4. A detailed description of the counter-examples to #3.
5. Why the special cases in #4 are still fully reasonable to use to pair
up files for three-way content merging in the merge machinery, and why
they do not affect the correctness of the merge.
6. Interaction with skipping of "irrelevant" renames
7. Additional items that need to be cached
8. How directory rename detection interacts with the above and why this
optimization is still safe even if merge.directoryRenames is set to
"true".
=== 0. Assumptions ===
There are two assumptions that will hold throughout this document:
* The upstream side where commits are transplanted to is treated as the
first parent side when rebase/cherry-pick call the merge machinery
* All merges are fully automatic
and a third that will hold in sections 2-5 for simplicity, that I'll later
address in section 8:
* No directory renames occur
Let me explain more about each assumption and why I include it:
The first assumption is merely for the purposes of making this document
clearer; the optimization implementation does not actually depend upon it.
However, the assumption does hold in all cases because it reflects the way
that both rebase and cherry-pick were implemented; and the implementation
of cherry-pick and rebase are not readily changeable for backwards
compatibility reasons (see for example the discussion of the --ours and
--theirs flag in the documentation of `git checkout`, particularly the
comments about how they behave with rebase). The optimization avoids
checking first-parent-ness, though. It checks the conditions that make the
optimization valid instead, so it would still continue working if someone
changed the parent ordering that cherry-pick and rebase use. But making
this assumption does make this document much clearer and prevents me from
having to repeat every example twice.
If the second assumption is violated, then the optimization simply is
turned off and thus isn't relevant to consider. The second assumption can
also be stated as "there is no interruption for a user to resolve conflicts
or to just further edit or tweak files". While real rebases and
cherry-picks are often interrupted (either because it's an interactive
rebase where the user requested to stop and edit, or because there were
conflicts that the user needs to resolve), the cache of renames is not
stored on disk, and thus is thrown away as soon as the rebase or cherry
pick stops for the user to resolve the operation.
The third assumption makes sections 2-5 simpler, and allows people to
understand the basics of why this optimization is safe and effective, and
then I can go back and address the specifics in section 8. It is probably
also worth noting that if directory renames do occur, then the default of
merge.directoryRenames being set to "conflict" means that the operation
will stop for users to resolve the conflicts and the cache will be thrown
away, and thus that there won't be an optimization to apply. So, the only
reason we need to address directory renames specifically, is that some
users will have set merge.directoryRenames to "true" to allow the merges to
continue to proceed automatically. The optimization is still safe with
this config setting, but we have to discuss a few more cases to show why;
this discussion is deferred until section 8.
=== 1. How rebasing and cherry-picking work ===
Consider the following setup (from the git-rebase manpage):
A---B---C topic
/
D---E---F---G main
After rebasing or cherry-picking topic onto main, this will appear as:
A'--B'--C' topic
/
D---E---F---G main
The way the commits A', B', and C' are created is through a series of
merges, where rebase or cherry-pick sequentially uses each of the three
A-B-C commits in a special merge operation. Let's label the three commits
in the merge operation as MERGE_BASE, MERGE_SIDE1, and MERGE_SIDE2. For
this picture, the three commits for each of the three merges would be:
To create A':
MERGE_BASE: E
MERGE_SIDE1: G
MERGE_SIDE2: A
To create B':
MERGE_BASE: A
MERGE_SIDE1: A'
MERGE_SIDE2: B
To create C':
MERGE_BASE: B
MERGE_SIDE1: B'
MERGE_SIDE2: C
Sometimes, folks are surprised that these three-way merges are done. It
can be useful in understanding these three-way merges to view them in a
slightly different light. For example, in creating C', you can view it as
either:
* Apply the changes between B & C to B'
* Apply the changes between B & B' to C
Conceptually the two statements above are the same as a three-way merge of
B, B', and C, at least the parts before you decide to record a commit.
=== 2. Why the renames on MERGE_SIDE1 in any given pick are always a ===
=== superset of the renames on MERGE_SIDE1 for the next pick. ===
The merge machinery uses the filenames it is fed from MERGE_BASE,
MERGE_SIDE1, and MERGE_SIDE2. It will only move content to a different
filename under one of three conditions:
* To make both pieces of a conflict available to a user during conflict
resolution (examples: directory/file conflict, add/add type conflict
such as symlink vs. regular file)
* When MERGE_SIDE1 renames the file.
* When MERGE_SIDE2 renames the file.
First, let's remember what commits are involved in the first and second
picks of the cherry-pick or rebase sequence:
To create A':
MERGE_BASE: E
MERGE_SIDE1: G
MERGE_SIDE2: A
To create B':
MERGE_BASE: A
MERGE_SIDE1: A'
MERGE_SIDE2: B
So, in particular, we need to show that the renames between E and G are a
superset of those between A and A'.
A' is created by the first merge. A' will only have renames for one of the
three reasons listed above. The first case, a conflict, results in a
situation where the cache is dropped and thus this optimization doesn't
take effect, so we need not consider that case. The third case, a rename
on MERGE_SIDE2 (i.e. from G to A), will show up in A' but it also shows up
in A -- therefore when diffing A and A' that path does not show up as a
rename. The only remaining way for renames to show up in A' is for the
rename to come from MERGE_SIDE1. Therefore, all renames between A and A'
are a subset of those between E and G. Equivalently, all renames between E
and G are a superset of those between A and A'.
=== 3. Why any rename on MERGE_SIDE1 in any given pick is _almost_ ===
=== always also a rename on MERGE_SIDE1 for the next pick. ===
Let's again look at the first two picks:
To create A':
MERGE_BASE: E
MERGE_SIDE1: G
MERGE_SIDE2: A
To create B':
MERGE_BASE: A
MERGE_SIDE1: A'
MERGE_SIDE2: B
Now let's look at any given rename from MERGE_SIDE1 of the first pick, i.e.
any given rename from E to G. Let's use the filenames 'oldfile' and
'newfile' for demonstration purposes. That first pick will function as
follows; when the rename is detected, the merge machinery will do a
three-way content merge of the following:
E:oldfile
G:newfile
A:oldfile
and produce a new result:
A':newfile
Note above that I've assumed that E->A did not rename oldfile. If that
side did rename, then we most likely have a rename/rename(1to2) conflict
that will cause the rebase or cherry-pick operation to halt and drop the
in-memory cache of renames and thus doesn't need to be considered further.
In the special case that E->A does rename the file but also renames it to
newfile, then there is no conflict from the renaming and the merge can
succeed. In this special case, the rename is not valid to cache because
the second merge will find A:newfile in the MERGE_BASE (see also the new
testcases in t6429 with "rename same file identically" in their
description). So a rename/rename(1to1) needs to be specially handled by
pruning renames from the cache and decrementing the dir_rename_counts in
the current and leading directories associated with those renames. Or,
since these are really rare, one could just take the easy way out and
disable the remembering renames optimization when a rename/rename(1to1)
happens.
The previous paragraph handled the cases for E->A renaming oldfile, let's
continue assuming that oldfile is not renamed in A.
As per the diagram for creating B', MERGE_SIDE1 involves the changes from A
to A'. So, we are curious whether A:oldfile and A':newfile will be viewed
as renames. Note that:
* There will be no A':oldfile (because there could not have been a
G:oldfile as we do not do break detection in the merge machinery and
G:newfile was detected as a rename, and by the construction of the
rename above that merged cleanly, the merge machinery will ensure there
is no 'oldfile' in the result).
* There will be no A:newfile (if there had been, we would have had a
rename/add conflict).
* Clearly A:oldfile and A':newfile are "related" (A':newfile came from a
clean three-way content merge involving A:oldfile).
We can also expound on the third point above, by noting that three-way
content merges can also be viewed as applying the differences between the
base and one side to the other side. Thus we can view A':newfile as
having been created by applying the changes between E:oldfile and G:newfile
(which were detected as being related, i.e. <50% changed) to A:oldfile.
Thus A:oldfile and A':newfile are just as related as E:oldfile and
G:newfile are -- they have exactly identical differences. Since the latter
were detected as renames, A:oldfile and A':newfile should also be
detectable as renames almost always.
=== 4. A detailed description of the counter-examples to #3. ===
We already noted in section 3 that rename/rename(1to1) (i.e. both sides
renaming a file the same way) was one counter-example. The more
interesting bit, though, is why did we need to use the "almost" qualifier
when stating that A:oldfile and A':newfile are "almost" always detectable
as renames?
Let's repeat an earlier point that section 3 made:
A':newfile was created by applying the changes between E:oldfile and
G:newfile to A:oldfile. The changes between E:oldfile and G:newfile were
<50% of the size of E:oldfile.
If those changes that were <50% of the size of E:oldfile are also <50% of
the size of A:oldfile, then A:oldfile and A':newfile will be detectable as
renames. However, if there is a dramatic size reduction between E:oldfile
and A:oldfile (but the changes between E:oldfile, G:newfile, and A:oldfile
still somehow merge cleanly), then traditional rename detection would not
detect A:oldfile and A':newfile as renames.
Here's an example where that can happen:
* E:oldfile had 20 lines
* G:newfile added 10 new lines at the beginning of the file
* A:oldfile kept the first 3 lines of the file, and deleted all the rest
then
=> A':newfile would have 13 lines, 3 of which match those in A:oldfile.
E:oldfile -> G:newfile would be detected as a rename, but A:oldfile and
A':newfile would not be.
=== 5. Why the special cases in #4 are still fully reasonable to use to ===
=== pair up files for three-way content merging in the merge machinery, ===
=== and why they do not affect the correctness of the merge. ===
In the rename/rename(1to1) case, A:newfile and A':newfile are not renames
since they use the *same* filename. However, files with the same filename
are obviously fine to pair up for three-way content merging (the merge
machinery has never employed break detection). The interesting
counter-example case is thus not the rename/rename(1to1) case, but the case
where A did not rename oldfile. That was the case that we spent most of
the time discussing in sections 3 and 4. The remainder of this section
will be devoted to that case as well.
So, even if A:oldfile and A':newfile aren't detectable as renames, why is
it still reasonable to pair them up for three-way content merging in the
merge machinery? There are multiple reasons:
* As noted in sections 3 and 4, the diff between A:oldfile and A':newfile
is *exactly* the same as the diff between E:oldfile and G:newfile. The
latter pair were detected as renames, so it seems unlikely to surprise
users for us to treat A:oldfile and A':newfile as renames.
* In fact, "oldfile" and "newfile" were at one point detected as renames
due to how they were constructed in the E..G chain. And we used that
information once already in this rebase/cherry-pick. I think users
would be unlikely to be surprised at us continuing to treat the files
as renames and would quickly understand why we had done so.
* Marking or declaring files as renames is *not* the end goal for merges.
Merges use renames to determine which files make sense to be paired up
for three-way content merges.
* A:oldfile and A':newfile were _already_ paired up in a three-way
content merge; that is how A':newfile was created. In fact, that
three-way content merge was clean. So using them again in a later
three-way content merge seems very reasonable.
However, the above is focusing on the common scenarios. Let's try to look
at all possible unusual scenarios and compare without the optimization to
with the optimization. Consider the following theoretical cases; we will
then dive into each to determine which of them are possible,
and if so, what they mean:
1. Without the optimization, the second merge results in a conflict.
With the optimization, the second merge also results in a conflict.
Questions: Are the conflicts confusingly different? Better in one case?
2. Without the optimization, the second merge results in NO conflict.
With the optimization, the second merge also results in NO conflict.
Questions: Are the merges the same?
3. Without the optimization, the second merge results in a conflict.
With the optimization, the second merge results in NO conflict.
Questions: Possible? Bug, bugfix, or something else?
4. Without the optimization, the second merge results in NO conflict.
With the optimization, the second merge results in a conflict.
Questions: Possible? Bug, bugfix, or something else?
I'll consider all four cases, but out of order.
The fourth case is impossible. For the code without the remembering
renames optimization to not get a conflict, B:oldfile would need to exactly
match A:oldfile -- if it doesn't, there would be a modify/delete conflict.
If A:oldfile matches B:oldfile exactly, then a three-way content merge
between A:oldfile, A':newfile, and B:oldfile would have no conflict and
just give us the version of newfile from A' as the result.
From the same logic as the above paragraph, the second case would indeed
result in identical merges. When A:oldfile exactly matches B:oldfile, an
undetected rename would say, "Oh, I see one side didn't modify 'oldfile'
and the other side deleted it. I'll delete it. And I see you have this
brand new file named 'newfile' in A', so I'll keep it." That gives the
same results as three-way content merging A:oldfile, A':newfile, and
B:oldfile -- a removal of oldfile with the version of newfile from A'
showing up in the result.
The third case is interesting. It means that A:oldfile and A':newfile were
not just similar enough, but that the changes between them did not conflict
with the changes between A:oldfile and B:oldfile. This would validate our
hunch that the files were similar enough to be used in a three-way content
merge, and thus seems entirely correct for us to have used them that way.
(Sidenote: One particular example here may be enlightening. Let's say that
B was an immediate revert of A. B clearly would have been a clean revert
of A, since A was B's immediate parent. One would assume that if you can
pick a commit, you should also be able to cherry-pick its immediate revert.
However, this is one of those funny corner cases; without this
optimization, we just successfully picked a commit cleanly, but we are
unable to cherry-pick its immediate revert due to the size differences
between E:oldfile and A:oldfile.)
That leaves only the first case to consider -- when we get conflicts both
with or without the optimization. Without the optimization, we'll have a
modify/delete conflict, where both A':newfile and B:oldfile are left in the
tree for the user to deal with and no hints about the potential similarity
between the two. With the optimization, we'll have a three-way content
merged A:oldfile, A':newfile, and B:oldfile with conflict markers
suggesting we thought the files were related but giving the user the chance
to resolve. As noted above, I don't think users will find us treating
'oldfile' and 'newfile' as related as a surprise since they were between E
and G. In any event, though, this case shouldn't be concerning since we
hit a conflict in both cases, told the user what we know, and asked them to
resolve it.
So, in summary, case 4 is impossible, case 2 yields the same behavior, and
cases 1 and 3 seem to provide as good or better behavior with the
optimization than without.
=== 6. Interaction with skipping of "irrelevant" renames ===
Previous optimizations involved skipping rename detection for paths
considered to be "irrelevant". See for example the following commits:
* 32a56dfb99 ("merge-ort: precompute subset of sources for which we
need rename detection", 2021-03-11)
* 2fd9eda462 ("merge-ort: precompute whether directory rename
detection is needed", 2021-03-11)
* 9bd342137e ("diffcore-rename: determine which relevant_sources are
no longer relevant", 2021-03-13)
Relevance is always determined by what the _other_ side of history has
done, in terms of modifying a file that our side renamed, or adding a
file to a directory which our side renamed. This means that a path
that is "irrelevant" when picking the first commit of a series in a
rebase or cherry-pick, may suddenly become "relevant" when picking the
next commit.
The upshot of this is that we can only cache rename detection results
for relevant paths, and need to re-check relevance in subsequent
commits. If those subsequent commits have additional paths that are
relevant for rename detection, then we will need to redo rename
detection -- though we can limit it to the paths for which we have not
already detected renames.
=== 7. Additional items that need to be cached ===
It turns out we have to cache more than just renames; we also cache:
A) non-renames (i.e. unpaired deletes)
B) counts of renames within directories
C) sources that were marked as RELEVANT_LOCATION, but which were
downgraded to RELEVANT_NO_MORE
D) the toplevel trees involved in the merge
These are all stored in struct rename_info (a simplified sketch follows
this list), and respectively appear in
* cached_pairs (alongside actual renames, just with a value of NULL)
* dir_rename_counts
* cached_irrelevant
* merge_trees
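For illustration only, here is a self-contained sketch of that cached
state; the field names mirror the list above, but the container and id
types are stand-ins rather than merge-ort's actual definitions:

	#include <stddef.h>

	/* Stand-in types for the sketch; Git uses its strmap/strintmap
	 * containers and struct object_id here. */
	struct sketch_map { void *entries; size_t nr; };
	struct sketch_oid { unsigned char hash[32]; };

	struct rename_cache_sketch {
		struct sketch_map cached_pairs;      /* (A) old path -> new path, or NULL for unpaired deletes */
		struct sketch_map dir_rename_counts; /* (B) {old dir -> {new dir -> count}} */
		struct sketch_map cached_irrelevant; /* (C) sources downgraded to RELEVANT_NO_MORE */
		struct sketch_oid merge_trees[3];    /* (D) toplevel trees from the previous merge */
	};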
The reason for (A) comes from the irrelevant renames skipping
optimization discussed in section 6. The fact that irrelevant renames
are skipped means we only get a subset of the potential renames
detected and subsequent commits may need to run rename detection on
the upstream side on a subset of the remaining renames (to get the
renames that are relevant for that later commit). Since unpaired
deletes are involved in rename detection too, we don't want to
repeatedly check that those paths remain unpaired on the upstream side
with every commit we are transplanting.
The reason for (B) is that diffcore_rename_extended() is what
generates the counts of renames by directory which is needed in
directory rename detection, and if we don't run
diffcore_rename_extended() again then we need to have the output from
it, including dir_rename_counts, from the previous run.
The reason for (C) is that merge-ort's tree traversal will again think
those paths are relevant (marking them as RELEVANT_LOCATION), but the
fact that they were downgraded to RELEVANT_NO_MORE means that
dir_rename_counts already has the information we need for directory
rename detection. (A path which becomes RELEVANT_CONTENT in a
subsequent commit will be removed from cached_irrelevant.)
The reason for (D) is that this is how we determine whether the remembering
renames optimization can be used. In particular, remembering that our
sequence of merges looks like:
Merge 1:
MERGE_BASE: E
MERGE_SIDE1: G
MERGE_SIDE2: A
=> Creates A'
Merge 2:
MERGE_BASE: A
MERGE_SIDE1: A'
MERGE_SIDE2: B
=> Creates B'
It is the fact that the trees A and A' appear both in Merge 1 and in
Merge 2, with A as a parent of A' that allows this optimization. So
we store the trees to compare with what we are asked to merge next
time.
=== 8. How directory rename detection interacts with the above and ===
=== why this optimization is still safe even if ===
=== merge.directoryRenames is set to "true". ===
As noted in the assumptions section:
"""
...if directory renames do occur, then the default of
merge.directoryRenames being set to "conflict" means that the operation
will stop for users to resolve the conflicts and the cache will be
thrown away, and thus that there won't be an optimization to apply.
So, the only reason we need to address directory renames specifically,
is that some users will have set merge.directoryRenames to "true" to
allow the merges to continue to proceed automatically.
"""
Let's remember that we need to look at how any given pick affects the next
one. So let's again use the first two picks from the diagram in section
one:
First pick does this three-way merge:
MERGE_BASE: E
MERGE_SIDE1: G
MERGE_SIDE2: A
=> creates A'
Second pick does this three-way merge:
MERGE_BASE: A
MERGE_SIDE1: A'
MERGE_SIDE2: B
=> creates B'
Now, directory rename detection exists so that if one side of history
renames a directory, and the other side adds a new file to the old
directory, then the merge (with merge.directoryRenames=true) can move the
file into the new directory. There are two qualitatively different ways to
add a new file to an old directory: create a new file, or rename a file
into that directory. Also, directory renames can be done on either side of
history, so there are four cases to consider:
* MERGE_SIDE1 renames old dir, MERGE_SIDE2 adds new file to old dir
* MERGE_SIDE1 renames old dir, MERGE_SIDE2 renames file into old dir
* MERGE_SIDE1 adds new file to old dir, MERGE_SIDE2 renames old dir
* MERGE_SIDE1 renames file into old dir, MERGE_SIDE2 renames old dir
One last note before we consider these four cases: There are some
important properties about how we implement this optimization with
respect to directory rename detection that we need to bear in mind
while considering all of these cases:
* rename caching occurs *after* applying directory renames
* a rename created by directory rename detection is recorded for the side
of history that did the directory rename.
* dir_rename_counts, the nested map of
{oldname => {newname => count}},
is cached between runs as well. This basically means that directory
rename detection is also cached, though only on the side of history
that we cache renames for (MERGE_SIDE1 as far as this document is
concerned; see the assumptions section). Two interesting sub-notes
about these counts:
* If we need to perform rename-detection again on the given side (e.g.
some paths are relevant for rename detection that weren't before),
then we clear dir_rename_counts and recompute it, making use of
cached_pairs. The reason it is important to do this is optimizations
around RELEVANT_LOCATION exist to prevent us from computing
unnecessary renames for directory rename detection and from computing
dir_rename_counts for irrelevant directories; but those same renames
or directories may become necessary for subsequent merges. The
easiest way to "fix up" dir_rename_counts in such cases is to just
recompute it.
* If we prune rename/rename(1to1) entries from the cache, then we also
need to update dir_rename_counts to decrement the counts for the
involved directory and any relevant parent directories (to undo what
update_dir_rename_counts() in diffcore-rename.c incremented when the
rename was initially found). If we instead just disable the
remembering renames optimization when the exceedingly rare
rename/rename(1to1) cases occur, then dir_rename_counts will get
re-computed the next time rename detection occurs, as noted above.
* the side with multiple commits to pick, is the side of history that we
do NOT cache renames for. Thus, there are no additional commits to
change the number of renames in a directory, except for those done by
directory rename detection (which always pad the majority).
* the "renames" we cache are modified slightly by any directory rename,
as noted below.
Now, with those notes out of the way, let's go through the four cases
in order:
Case 1: MERGE_SIDE1 renames old dir, MERGE_SIDE2 adds new file to old dir
This case looks like this:
MERGE_BASE: E, Has olddir/
MERGE_SIDE1: G, Renames olddir/ -> newdir/
MERGE_SIDE2: A, Adds olddir/newfile
=> creates A', With newdir/newfile
MERGE_BASE: A, Has olddir/newfile
MERGE_SIDE1: A', Has newdir/newfile
MERGE_SIDE2: B, Modifies olddir/newfile
=> expected B', with threeway-merged newdir/newfile from above
In this case, with the optimization, note that after the first commit:
* MERGE_SIDE1 remembers olddir/ -> newdir/
* MERGE_SIDE1 has cached olddir/newfile -> newdir/newfile
Given the cached rename noted above, the second merge can proceed as
expected without needing to perform rename detection from A -> A'.
Case 2: MERGE_SIDE1 renames old dir, MERGE_SIDE2 renames file into old dir
This case looks like this:
MERGE_BASE: E oldfile, olddir/
MERGE_SIDE1: G oldfile, olddir/ -> newdir/
MERGE_SIDE2: A oldfile -> olddir/newfile
=> creates A', With newdir/newfile representing original oldfile
MERGE_BASE: A olddir/newfile
MERGE_SIDE1: A' newdir/newfile
MERGE_SIDE2: B modify olddir/newfile
=> expected B', with threeway-merged newdir/newfile from above
In this case, with the optimization, note that after the first commit:
* MERGE_SIDE1 remembers olddir/ -> newdir/
* MERGE_SIDE1 has cached olddir/newfile -> newdir/newfile
(NOT oldfile -> newdir/newfile; compare to case with
(p->status == 'R' && new_path) in possibly_cache_new_pair())
Given the cached rename noted above, the second merge can proceed as
expected without needing to perform rename detection from A -> A'.
Case 3: MERGE_SIDE1 adds new file to old dir, MERGE_SIDE2 renames old dir
This case looks like this:
MERGE_BASE: E, Has olddir/
MERGE_SIDE1: G, Adds olddir/newfile
MERGE_SIDE2: A, Renames olddir/ -> newdir/
=> creates A', With newdir/newfile
MERGE_BASE: A, Has newdir/, but no notion of newdir/newfile
MERGE_SIDE1: A', Has newdir/newfile
MERGE_SIDE2: B, Has newdir/, but no notion of newdir/newfile
=> expected B', with newdir/newfile from A'
In this case, with the optimization, note that after the first commit there
were no renames on MERGE_SIDE1, and any renames on MERGE_SIDE2 are tossed.
But the second merge didn't need any renames so this is fine.
Case 4: MERGE_SIDE1 renames file into old dir, MERGE_SIDE2 renames old dir
This case looks like this:
MERGE_BASE: E, Has olddir/
MERGE_SIDE1: G, Renames oldfile -> olddir/newfile
MERGE_SIDE2: A, Renames olddir/ -> newdir/
=> creates A', With newdir/newfile representing original oldfile
MERGE_BASE: A, Has oldfile
MERGE_SIDE1: A', Has newdir/newfile
MERGE_SIDE2: B, Modifies oldfile
=> expected B', with threeway-merged newdir/newfile from above
In this case, with the optimization, note that after the first commit:
* MERGE_SIDE1 remembers oldfile -> newdir/newfile
(NOT oldfile -> olddir/newfile; compare to case of second
block under p->status == 'R' in possibly_cache_new_pair())
* MERGE_SIDE2 renames are tossed because only MERGE_SIDE1 is remembered
Given the cached rename noted above, the second merge can proceed as
expected without needing to perform rename detection from A -> A'.
Finally, I'll just note here that interactions with the
skip-irrelevant-renames optimization mean we sometimes don't detect
renames for any files within a directory that was renamed, in which
case we will not have been able to detect any rename for the directory
itself. In such a case, we do not know whether the directory was
renamed; we want to be careful to avoid caching some kind of "this
directory was not renamed" statement. If we did, then a subsequent
commit being rebased could add a file to the old directory, and the
user would expect it to end up in the correct directory -- something
our erroneous "this directory was not renamed" cache would preclude.


@ -2792,7 +2792,7 @@ A fast-forward looks something like this:
In some cases it is possible that the new head will *not* actually be
a descendant of the old head. For example, the developer may have
realized a serious mistake was made and decided to backtrack,
realized she made a serious mistake, and decided to backtrack,
resulting in a situation like:
................................................


@ -1,7 +1,7 @@
#!/bin/sh
GVF=GIT-VERSION-FILE
DEF_VER=v2.33.4
DEF_VER=v2.32.4
LF='
'


@ -398,10 +398,6 @@ all::
# with a different indexfile format version. If it isn't set the index
# file format used is index-v[23].
#
# Define GIT_TEST_UTF8_LOCALE to preferred utf-8 locale for testing.
# If it isn't set, fallback to $LC_ALL, $LANG or use the first utf-8
# locale returned by "locale -a".
#
# Define HAVE_CLOCK_GETTIME if your platform has clock_gettime.
#
# Define HAVE_CLOCK_MONOTONIC if your platform has CLOCK_MONOTONIC.
@ -715,7 +711,6 @@ TEST_BUILTINS_OBJS += test-example-decorate.o
TEST_BUILTINS_OBJS += test-fast-rebase.o
TEST_BUILTINS_OBJS += test-genrandom.o
TEST_BUILTINS_OBJS += test-genzeros.o
TEST_BUILTINS_OBJS += test-getcwd.o
TEST_BUILTINS_OBJS += test-hash-speed.o
TEST_BUILTINS_OBJS += test-hash.o
TEST_BUILTINS_OBJS += test-hashmap.o
@ -727,11 +722,9 @@ TEST_BUILTINS_OBJS += test-mergesort.o
TEST_BUILTINS_OBJS += test-mktemp.o
TEST_BUILTINS_OBJS += test-oid-array.o
TEST_BUILTINS_OBJS += test-oidmap.o
TEST_BUILTINS_OBJS += test-oidtree.o
TEST_BUILTINS_OBJS += test-online-cpus.o
TEST_BUILTINS_OBJS += test-parse-options.o
TEST_BUILTINS_OBJS += test-parse-pathspec-file.o
TEST_BUILTINS_OBJS += test-partial-clone.o
TEST_BUILTINS_OBJS += test-path-utils.o
TEST_BUILTINS_OBJS += test-pcre2-config.o
TEST_BUILTINS_OBJS += test-pkt-line.o
@ -852,7 +845,6 @@ LIB_OBJS += branch.o
LIB_OBJS += bulk-checkin.o
LIB_OBJS += bundle.o
LIB_OBJS += cache-tree.o
LIB_OBJS += cbtree.o
LIB_OBJS += chdir-notify.o
LIB_OBJS += checkout.o
LIB_OBJS += chunk-format.o
@ -948,7 +940,6 @@ LIB_OBJS += object.o
LIB_OBJS += oid-array.o
LIB_OBJS += oidmap.o
LIB_OBJS += oidset.o
LIB_OBJS += oidtree.o
LIB_OBJS += pack-bitmap-write.o
LIB_OBJS += pack-bitmap.o
LIB_OBJS += pack-check.o
@ -2169,16 +2160,6 @@ shell_compatibility_test: please_set_SHELL_PATH_to_a_more_modern_shell
strip: $(PROGRAMS) git$X
$(STRIP) $(STRIP_OPTS) $^
### Flags affecting all rules
# A GNU make extension since gmake 3.72 (released in late 1994) to
# remove the target of rules if commands in those rules fail. The
# default is to only do that if make itself receives a signal. Affects
# all targets, see:
#
# info make --index-search=.DELETE_ON_ERROR
.DELETE_ON_ERROR:
### Target-specific flags and dependencies
# The generic compilation pattern rule and automatically
@ -2262,6 +2243,7 @@ SCRIPT_DEFINES = $(SHELL_PATH_SQ):$(DIFF_SQ):$(GIT_VERSION):\
$(gitwebdir_SQ):$(PERL_PATH_SQ):$(SANE_TEXT_GREP):$(PAGER_ENV):\
$(perllibdir_SQ)
define cmd_munge_script
$(RM) $@ $@+ && \
sed -e '1s|#!.*/sh|#!$(SHELL_PATH_SQ)|' \
-e 's|@SHELL_PATH@|$(SHELL_PATH_SQ)|' \
-e 's|@@DIFF@@|$(DIFF_SQ)|' \
@ -2331,7 +2313,7 @@ endif
PERL_DEFINES += $(gitexecdir) $(perllibdir) $(localedir)
$(SCRIPT_PERL_GEN): % : %.perl GIT-PERL-DEFINES GIT-PERL-HEADER GIT-VERSION-FILE
$(QUIET_GEN) \
$(QUIET_GEN)$(RM) $@ $@+ && \
sed -e '1{' \
-e ' s|#!.*perl|#!$(PERL_PATH_SQ)|' \
-e ' r GIT-PERL-HEADER' \
@ -2351,7 +2333,7 @@ GIT-PERL-DEFINES: FORCE
fi
GIT-PERL-HEADER: $(PERL_HEADER_TEMPLATE) GIT-PERL-DEFINES Makefile
$(QUIET_GEN) \
$(QUIET_GEN)$(RM) $@ && \
INSTLIBDIR='$(perllibdir_SQ)' && \
INSTLIBDIR_EXTRA='$(PERLLIB_EXTRA_SQ)' && \
INSTLIBDIR="$$INSTLIBDIR$${INSTLIBDIR_EXTRA:+:$$INSTLIBDIR_EXTRA}" && \
@ -2377,7 +2359,7 @@ git-instaweb: git-instaweb.sh GIT-SCRIPT-DEFINES
mv $@+ $@
else # NO_PERL
$(SCRIPT_PERL_GEN) git-instaweb: % : unimplemented.sh
$(QUIET_GEN) \
$(QUIET_GEN)$(RM) $@ $@+ && \
sed -e '1s|#!.*/sh|#!$(SHELL_PATH_SQ)|' \
-e 's|@@REASON@@|NO_PERL=$(NO_PERL)|g' \
unimplemented.sh >$@+ && \
@ -2391,14 +2373,14 @@ $(SCRIPT_PYTHON_GEN): GIT-BUILD-OPTIONS
ifndef NO_PYTHON
$(SCRIPT_PYTHON_GEN): GIT-CFLAGS GIT-PREFIX GIT-PYTHON-VARS
$(SCRIPT_PYTHON_GEN): % : %.py
$(QUIET_GEN) \
$(QUIET_GEN)$(RM) $@ $@+ && \
sed -e '1s|#!.*python|#!$(PYTHON_PATH_SQ)|' \
$< >$@+ && \
chmod +x $@+ && \
mv $@+ $@
else # NO_PYTHON
$(SCRIPT_PYTHON_GEN): % : unimplemented.sh
$(QUIET_GEN) \
$(QUIET_GEN)$(RM) $@ $@+ && \
sed -e '1s|#!.*/sh|#!$(SHELL_PATH_SQ)|' \
-e 's|@@REASON@@|NO_PYTHON=$(NO_PYTHON)|g' \
unimplemented.sh >$@+ && \
@ -2406,7 +2388,8 @@ $(SCRIPT_PYTHON_GEN): % : unimplemented.sh
mv $@+ $@
endif # NO_PYTHON
CONFIGURE_RECIPE = sed -e 's/@@GIT_VERSION@@/$(GIT_VERSION)/g' \
CONFIGURE_RECIPE = $(RM) configure configure.ac+ && \
sed -e 's/@@GIT_VERSION@@/$(GIT_VERSION)/g' \
configure.ac >configure.ac+ && \
autoconf -o configure configure.ac+ && \
$(RM) configure.ac+
@ -2477,6 +2460,7 @@ dep_args = -MF $(dep_file) -MQ $@ -MMD -MP
endif
ifneq ($(COMPUTE_HEADER_DEPENDENCIES),yes)
dep_dirs =
missing_dep_dirs =
dep_args =
endif
@ -2530,6 +2514,7 @@ endif
ifeq ($(GENERATE_COMPILATION_DATABASE),yes)
all:: compile_commands.json
compile_commands.json:
@$(RM) $@
$(QUIET_GEN)sed -e '1s/^/[/' -e '$$s/,$$/]/' $(compdb_dir)/*.o.json > $@+
@if test -s $@+; then mv $@+ $@; else $(RM) $@+; fi
endif
@ -2690,13 +2675,10 @@ po/git.pot: $(GENERATED_H) FORCE
.PHONY: pot
pot: po/git.pot
ifdef NO_GETTEXT
POFILES :=
MOFILES :=
else
POFILES := $(wildcard po/*.po)
MOFILES := $(patsubst po/%.po,po/build/locale/%/LC_MESSAGES/git.mo,$(POFILES))
ifndef NO_GETTEXT
all:: $(MOFILES)
endif
@ -2819,9 +2801,6 @@ ifdef GIT_TEST_CMP
endif
ifdef GIT_TEST_CMP_USE_COPIED_CONTEXT
@echo GIT_TEST_CMP_USE_COPIED_CONTEXT=YesPlease >>$@+
endif
ifdef GIT_TEST_UTF8_LOCALE
@echo GIT_TEST_UTF8_LOCALE=\''$(subst ','\'',$(subst ','\'',$(GIT_TEST_UTF8_LOCALE)))'\' >>$@+
endif
@echo NO_GETTEXT=\''$(subst ','\'',$(subst ','\'',$(NO_GETTEXT)))'\' >>$@+
ifdef GIT_PERF_REPEAT_COUNT
@ -3089,7 +3068,8 @@ endif
ln "$$execdir/git-remote-http$X" "$$execdir/$$p" 2>/dev/null || \
ln -s "git-remote-http$X" "$$execdir/$$p" 2>/dev/null || \
cp "$$execdir/git-remote-http$X" "$$execdir/$$p" || exit; } \
done
done && \
./check_bindir "z$$bindir" "z$$execdir" "$$bindir/git-add$X"
.PHONY: install-gitweb install-doc install-man install-man-perl install-html install-info install-pdf
.PHONY: quick-install-doc quick-install-man quick-install-html


@ -1 +1 @@
Documentation/RelNotes/2.33.4.txt
Documentation/RelNotes/2.32.4.txt


@ -280,7 +280,6 @@ static void add_p_state_clear(struct add_p_state *s)
clear_add_i_state(&s->s);
}
__attribute__((format (printf, 2, 3)))
static void err(struct add_p_state *s, const char *fmt, ...)
{
va_list args;


@ -286,11 +286,6 @@ void NORETURN die_conclude_merge(void)
die(_("Exiting because of unfinished merge."));
}
void NORETURN die_ff_impossible(void)
{
die(_("Not possible to fast-forward, aborting."));
}
void advise_on_updating_sparse_paths(struct string_list *pathspec_list)
{
struct string_list_item *item;


@ -90,13 +90,11 @@ int advice_enabled(enum advice_type type);
/**
* Checks the visibility of the advice before printing.
*/
__attribute__((format (printf, 2, 3)))
void advise_if_enabled(enum advice_type type, const char *advice, ...);
int error_resolve_conflict(const char *me);
void NORETURN die_resolve_conflict(const char *me);
void NORETURN die_conclude_merge(void);
void NORETURN die_ff_impossible(void);
void advise_on_updating_sparse_paths(struct string_list *pathspec_list);
void detach_advice(const char *new_name);
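A number of the hunks around here differ by the presence of the
printf-format attribute on variadic helpers (err(), say(),
advise_if_enabled(), write_to_file() and friends). As a minimal,
self-contained illustration (a hypothetical function, not one of Git's),
the attribute tells the compiler which argument is the format string and
where the variadic arguments begin, so mismatched format specifiers
become compile-time warnings:

	#include <stdarg.h>
	#include <stdio.h>

	/* Argument 1 is a printf-style format; variadic args start at 2. */
	__attribute__((format (printf, 1, 2)))
	static void warnf(const char *fmt, ...)
	{
		va_list ap;

		va_start(ap, fmt);
		vfprintf(stderr, fmt, ap);
		va_end(ap);
		fputc('\n', stderr);
	}

	/* warnf("%s: %d", path);  -- with the attribute, the compiler flags
	 * the missing integer argument instead of leaving it to run time. */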

alias.c

@ -46,14 +46,16 @@ void list_aliases(struct string_list *list)
#define SPLIT_CMDLINE_BAD_ENDING 1
#define SPLIT_CMDLINE_UNCLOSED_QUOTE 2
#define SPLIT_CMDLINE_ARGC_OVERFLOW 3
static const char *split_cmdline_errors[] = {
N_("cmdline ends with \\"),
N_("unclosed quote")
N_("unclosed quote"),
N_("too many arguments"),
};
int split_cmdline(char *cmdline, const char ***argv)
{
int src, dst, count = 0, size = 16;
size_t src, dst, count = 0, size = 16;
char quoted = 0;
ALLOC_ARRAY(*argv, size);
@ -96,6 +98,11 @@ int split_cmdline(char *cmdline, const char ***argv)
return -SPLIT_CMDLINE_UNCLOSED_QUOTE;
}
if (count >= INT_MAX) {
FREE_AND_NULL(*argv);
return -SPLIT_CMDLINE_ARGC_OVERFLOW;
}
ALLOC_GROW(*argv, count + 1, size);
(*argv)[count] = NULL;
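To make the intent of that guard concrete, here is a minimal,
self-contained sketch (a hypothetical helper, not Git's actual
split_cmdline()): the count is kept in a size_t, but growth stops at
INT_MAX so an int-typed return value can never wrap around to a negative,
attacker-influenced number:

	#include <limits.h>
	#include <stdlib.h>

	static int push_arg(const char ***argv, size_t *count, size_t *alloc,
			    const char *arg)
	{
		if (*count >= INT_MAX)
			return -1;	/* too many arguments; caller frees and errors out */
		if (*count + 2 > *alloc) {	/* room for arg plus the NULL terminator */
			size_t new_alloc = *alloc ? *alloc * 2 : 16;
			const char **tmp = realloc(*argv, new_alloc * sizeof(*tmp));
			if (!tmp)
				return -1;
			*argv = tmp;
			*alloc = new_alloc;
		}
		(*argv)[(*count)++] = arg;
		(*argv)[*count] = NULL;
		return 0;
	}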

apply.c

@ -101,9 +101,9 @@ int init_apply_state(struct apply_state *state,
state->ws_error_action = warn_on_ws_error;
state->ws_ignore_action = ignore_ws_none;
state->linenr = 1;
string_list_init_nodup(&state->fn_table);
string_list_init_nodup(&state->limit_by_name);
string_list_init_nodup(&state->symlink_changes);
string_list_init(&state->fn_table, 0);
string_list_init(&state->limit_by_name, 0);
string_list_init(&state->symlink_changes, 0);
strbuf_init(&state->root, 0);
git_apply_config();
@ -1917,7 +1917,6 @@ static struct fragment *parse_binary_hunk(struct apply_state *state,
state->linenr++;
buffer += llen;
size -= llen;
while (1) {
int byte_length, max_byte_length, newsize;
llen = linelen(buffer, size);
@ -3468,21 +3467,6 @@ static int load_preimage(struct apply_state *state,
return 0;
}
static int resolve_to(struct image *image, const struct object_id *result_id)
{
unsigned long size;
enum object_type type;
clear_image(image);
image->buf = read_object_file(result_id, &type, &size);
if (!image->buf || type != OBJ_BLOB)
die("unable to read blob object %s", oid_to_hex(result_id));
image->len = size;
return 0;
}
static int three_way_merge(struct apply_state *state,
struct image *image,
char *path,
@ -3494,12 +3478,6 @@ static int three_way_merge(struct apply_state *state,
mmbuffer_t result = { NULL };
int status;
/* resolve trivial cases first */
if (oideq(base, ours))
return resolve_to(image, theirs);
else if (oideq(base, theirs) || oideq(ours, theirs))
return resolve_to(image, ours);
read_mmblob(&base_file, base);
read_mmblob(&our_file, ours);
read_mmblob(&their_file, theirs);
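One side of this hunk short-circuits the three-way merge when the blob
ids already make the result obvious. A self-contained sketch of that
decision, using plain strings as stand-ins for blob contents:

	#include <string.h>

	/* Returns the trivially-resolved content, or NULL if a real
	 * content-level merge is still needed. */
	static const char *trivial_three_way(const char *base,
					     const char *ours,
					     const char *theirs)
	{
		if (!strcmp(base, ours))
			return theirs;	/* only their side changed */
		if (!strcmp(base, theirs) || !strcmp(ours, theirs))
			return ours;	/* only our side changed, or both made the same change */
		return NULL;
	}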


@ -191,7 +191,7 @@ static int write_archive_entry(const struct object_id *oid, const char *base,
return err;
}
static void queue_directory(const struct object_id *oid,
static void queue_directory(const unsigned char *sha1,
struct strbuf *base, const char *filename,
unsigned mode, struct archiver_context *c)
{
@ -203,7 +203,7 @@ static void queue_directory(const struct object_id *oid,
d->mode = mode;
c->bottom = d;
d->len = xsnprintf(d->path, len, "%.*s%s/", (int)base->len, base->buf, filename);
oidcpy(&d->oid, oid);
oidread(&d->oid, sha1);
}
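The queue_directory() change above is only about how the hash is handed
around: as a struct object_id (copied with oidcpy()) versus as a raw byte
pointer (read into an object_id with oidread()). A simplified,
self-contained sketch of the two copy styles, with a stand-in struct:

	#include <string.h>

	#define SKETCH_HASH_LEN 32	/* large enough for SHA-1 or SHA-256 */

	struct sketch_oid { unsigned char hash[SKETCH_HASH_LEN]; };

	/* oidcpy()-style: copy one object id into another. */
	static void sketch_oidcpy(struct sketch_oid *dst, const struct sketch_oid *src)
	{
		memcpy(dst->hash, src->hash, SKETCH_HASH_LEN);
	}

	/* oidread()-style: fill an object id from a raw hash buffer. */
	static void sketch_oidread(struct sketch_oid *dst, const unsigned char *raw)
	{
		memcpy(dst->hash, raw, SKETCH_HASH_LEN);
	}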
static int write_directory(struct archiver_context *c)
@ -250,7 +250,8 @@ static int queue_or_write_archive_entry(const struct object_id *oid,
if (check_attr_export_ignore(check))
return 0;
queue_directory(oid, base, filename, mode, c);
queue_directory(oid->hash, base, filename,
mode, c);
return READ_TREE_RECURSIVE;
}
@ -644,7 +645,7 @@ int write_archive(int argc, const char **argv, const char *prefix,
args.pretty_ctx = &ctx;
args.repo = repo;
args.prefix = prefix;
string_list_init_dup(&args.extra_files);
string_list_init(&args.extra_files, 1);
argc = parse_archive_args(argc, argv, &ar, &args, name_hint, remote);
if (!startup_info->have_repository) {
/*

attr.c

@ -685,7 +685,7 @@ static struct attr_stack *read_attr_from_array(const char **list)
* Callers into the attribute system assume there is a single, system-wide
* global state where attributes are read from and when the state is flipped by
* calling git_attr_set_direction(), the stack frames that have been
* constructed need to be discarded so that subsequent calls into the
* constructed need to be discarded so so that subsequent calls into the
* attribute system will lazily read from the right place. Since changing
* direction causes a global paradigm shift, it should not ever be called while
* another thread could potentially be calling into the attribute system.


@ -313,7 +313,9 @@ static int edit_patch(int argc, const char **argv, const char *prefix)
rev.diffopt.output_format = DIFF_FORMAT_PATCH;
rev.diffopt.use_color = 0;
rev.diffopt.flags.ignore_dirty_submodules = 1;
out = xopen(file, O_CREAT | O_WRONLY | O_TRUNC, 0666);
out = open(file, O_CREAT | O_WRONLY | O_TRUNC, 0666);
if (out < 0)
die(_("Could not open '%s' for writing."), file);
rev.diffopt.file = xfdopen(out, "w");
rev.diffopt.close_file = 1;
if (run_diff_files(&rev, 0))
@ -468,7 +470,7 @@ int cmd_add(int argc, const char **argv, const char *prefix)
{
int exit_status = 0;
struct pathspec pathspec;
struct dir_struct dir = DIR_INIT;
struct dir_struct dir;
int flags;
int add_new_files;
int require_pathspec;
@ -575,6 +577,7 @@ int cmd_add(int argc, const char **argv, const char *prefix)
die_in_unpopulated_submodule(&the_index, prefix);
die_path_inside_submodule(&the_index, &pathspec);
dir_init(&dir);
if (add_new_files) {
int baselen;
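The edit_patch() hunk above differs only in whether the error handling
lives at each call site or inside an xopen()-style wrapper that dies on
failure. A minimal sketch of such a wrapper (hypothetical, not Git's
implementation):

	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/stat.h>
	#include <sys/types.h>

	/* Open a file or terminate the process with a diagnostic. */
	static int open_or_die(const char *path, int flags, mode_t mode)
	{
		int fd = open(path, flags, mode);

		if (fd < 0) {
			perror(path);
			exit(128);
		}
		return fd;
	}

With such a helper, `out = open_or_die(file, O_CREAT | O_WRONLY | O_TRUNC, 0666);`
replaces the open/check/die sequence at every call site.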


@ -210,7 +210,6 @@ static void write_state_bool(const struct am_state *state,
* If state->quiet is false, calls fprintf(fp, fmt, ...), and appends a newline
* at the end.
*/
__attribute__((format (printf, 3, 4)))
static void say(const struct am_state *state, FILE *fp, const char *fmt, ...)
{
va_list ap;
@ -2106,8 +2105,7 @@ static void am_abort(struct am_state *state)
if (!has_orig_head)
oidcpy(&orig_head, the_hash_algo->empty_tree);
if (clean_index(&curr_head, &orig_head))
die(_("failed to clean index"));
clean_index(&curr_head, &orig_head);
if (has_orig_head)
update_ref("am --abort", "HEAD", &orig_head,


@ -12,7 +12,9 @@
static void create_output_file(const char *output_file)
{
int output_fd = xopen(output_file, O_CREAT | O_WRONLY | O_TRUNC, 0666);
int output_fd = open(output_file, O_CREAT | O_WRONLY | O_TRUNC, 0666);
if (output_fd < 0)
die_errno(_("could not create archive file '%s'"), output_file);
if (output_fd != 1) {
if (dup2(output_fd, 1) < 0)
die_errno(_("could not redirect output"));


@ -117,7 +117,6 @@ static int write_in_file(const char *path, const char *mode, const char *format,
return fclose(fp);
}
__attribute__((format (printf, 2, 3)))
static int write_to_file(const char *path, const char *format, ...)
{
int res;
@ -130,7 +129,6 @@ static int write_to_file(const char *path, const char *format, ...)
return res;
}
__attribute__((format (printf, 2, 3)))
static int append_to_file(const char *path, const char *format, ...)
{
int res;


@ -168,7 +168,7 @@ static int check_branch_commit(const char *branchname, const char *refname,
int kinds, int force)
{
struct commit *rev = lookup_commit_reference(the_repository, oid);
if (!force && !rev) {
if (!rev) {
error(_("Couldn't look up commit object for '%s'"), refname);
return -1;
}


@ -171,7 +171,10 @@ int cmd_bugreport(int argc, const char **argv, const char *prefix)
get_populated_hooks(&buffer, !startup_info->have_repository);
/* fopen doesn't offer us an O_EXCL alternative, except with glibc. */
report = xopen(report_path.buf, O_CREAT | O_EXCL | O_WRONLY, 0666);
report = open(report_path.buf, O_CREAT | O_EXCL | O_WRONLY, 0666);
if (report < 0)
die(_("couldn't create a new file at '%s'"), report_path.buf);
if (write_in_full(report, buffer.buf, buffer.len) < 0)
die_errno(_("unable to write to %s"), report_path.buf);


@ -46,7 +46,7 @@ static int parse_options_cmd_bundle(int argc,
const char* prefix,
const char * const usagestr[],
const struct option options[],
char **bundle_file) {
const char **bundle_file) {
int newargc;
newargc = parse_options(argc, argv, NULL, options, usagestr,
PARSE_OPT_STOP_AT_NON_OPTION);
@ -61,7 +61,7 @@ static int cmd_bundle_create(int argc, const char **argv, const char *prefix) {
int progress = isatty(STDERR_FILENO);
struct strvec pack_opts;
int version = -1;
int ret;
struct option options[] = {
OPT_SET_INT('q', "quiet", &progress,
N_("do not show progress meter"), 0),
@ -76,7 +76,7 @@ static int cmd_bundle_create(int argc, const char **argv, const char *prefix) {
N_("specify bundle format version")),
OPT_END()
};
char *bundle_file;
const char* bundle_file;
argc = parse_options_cmd_bundle(argc, argv, prefix,
builtin_bundle_create_usage, options, &bundle_file);
@ -94,95 +94,75 @@ static int cmd_bundle_create(int argc, const char **argv, const char *prefix) {
if (!startup_info->have_repository)
die(_("Need a repository to create a bundle."));
ret = !!create_bundle(the_repository, bundle_file, argc, argv, &pack_opts, version);
free(bundle_file);
return ret;
return !!create_bundle(the_repository, bundle_file, argc, argv, &pack_opts, version);
}
static int cmd_bundle_verify(int argc, const char **argv, const char *prefix) {
struct bundle_header header = BUNDLE_HEADER_INIT;
struct bundle_header header;
int bundle_fd = -1;
int quiet = 0;
int ret;
struct option options[] = {
OPT_BOOL('q', "quiet", &quiet,
N_("do not show bundle details")),
OPT_END()
};
char *bundle_file;
const char* bundle_file;
argc = parse_options_cmd_bundle(argc, argv, prefix,
builtin_bundle_verify_usage, options, &bundle_file);
/* bundle internals use argv[1] as further parameters */
if ((bundle_fd = read_bundle_header(bundle_file, &header)) < 0) {
ret = 1;
goto cleanup;
}
memset(&header, 0, sizeof(header));
if ((bundle_fd = read_bundle_header(bundle_file, &header)) < 0)
return 1;
close(bundle_fd);
if (verify_bundle(the_repository, &header, !quiet)) {
ret = 1;
goto cleanup;
}
if (verify_bundle(the_repository, &header, !quiet))
return 1;
fprintf(stderr, _("%s is okay\n"), bundle_file);
ret = 0;
cleanup:
free(bundle_file);
bundle_header_release(&header);
return ret;
return 0;
}
static int cmd_bundle_list_heads(int argc, const char **argv, const char *prefix) {
struct bundle_header header = BUNDLE_HEADER_INIT;
struct bundle_header header;
int bundle_fd = -1;
int ret;
struct option options[] = {
OPT_END()
};
char *bundle_file;
const char* bundle_file;
argc = parse_options_cmd_bundle(argc, argv, prefix,
builtin_bundle_list_heads_usage, options, &bundle_file);
/* bundle internals use argv[1] as further parameters */
if ((bundle_fd = read_bundle_header(bundle_file, &header)) < 0) {
ret = 1;
goto cleanup;
}
memset(&header, 0, sizeof(header));
if ((bundle_fd = read_bundle_header(bundle_file, &header)) < 0)
return 1;
close(bundle_fd);
ret = !!list_bundle_refs(&header, argc, argv);
cleanup:
free(bundle_file);
bundle_header_release(&header);
return ret;
return !!list_bundle_refs(&header, argc, argv);
}
static int cmd_bundle_unbundle(int argc, const char **argv, const char *prefix) {
struct bundle_header header = BUNDLE_HEADER_INIT;
struct bundle_header header;
int bundle_fd = -1;
int ret;
struct option options[] = {
OPT_END()
};
char *bundle_file;
const char* bundle_file;
argc = parse_options_cmd_bundle(argc, argv, prefix,
builtin_bundle_unbundle_usage, options, &bundle_file);
/* bundle internals use argv[1] as further parameters */
if ((bundle_fd = read_bundle_header(bundle_file, &header)) < 0) {
ret = 1;
goto cleanup;
}
memset(&header, 0, sizeof(header));
if ((bundle_fd = read_bundle_header(bundle_file, &header)) < 0)
return 1;
if (!startup_info->have_repository)
die(_("Need a repository to unbundle."));
ret = !!unbundle(the_repository, &header, bundle_fd, 0) ||
return !!unbundle(the_repository, &header, bundle_fd, 0) ||
list_bundle_refs(&header, argc, argv);
bundle_header_release(&header);
cleanup:
free(bundle_file);
return ret;
}
int cmd_bundle(int argc, const char **argv, const char *prefix)
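The recurring difference in the bundle subcommands above is how struct bundle_header reaches a sane starting state: one side initializes it at the point of declaration with BUNDLE_HEADER_INIT (and releases it under a shared cleanup label), the other declares it uninitialized and memset()s it just before use. A generic sketch of the two styles, using a made-up struct rather than Git's:

#include <string.h>

/* Made-up struct, only to contrast the two initialization styles. */
struct demo_header {
        int fd;
        const char *name;
};
#define DEMO_HEADER_INIT { .fd = -1 }

static void demo(void)
{
        /* Initializer macro: the struct is valid from the moment it is declared. */
        struct demo_header a = DEMO_HEADER_INIT;

        /* Deferred memset(): any code path that runs before this line sees garbage. */
        struct demo_header b;
        memset(&b, 0, sizeof(b));

        (void)a;
        (void)b;
}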

View File

@@ -512,6 +512,12 @@ static int batch_objects(struct batch_options *opt)
if (opt->cmdmode)
data.split_on_whitespace = 1;
if (opt->all_objects) {
struct object_info empty = OBJECT_INFO_INIT;
if (!memcmp(&data.info, &empty, sizeof(empty)))
data.skip_object_info = 1;
}
/*
* If we are printing out the object, then always fill in the type,
* since we will want to decide whether or not to stream.
@@ -521,10 +527,6 @@ static int batch_objects(struct batch_options *opt)
if (opt->all_objects) {
struct object_cb_data cb;
struct object_info empty = OBJECT_INFO_INIT;
if (!memcmp(&data.info, &empty, sizeof(empty)))
data.skip_object_info = 1;
if (has_promisor_remote())
warning("This repository uses promisor remotes. Some objects may not be loaded.");

View File

@@ -153,7 +153,7 @@ static int check_ignore_stdin_paths(struct dir_struct *dir, const char *prefix)
int cmd_check_ignore(int argc, const char **argv, const char *prefix)
{
int num_ignored;
struct dir_struct dir = DIR_INIT;
struct dir_struct dir;
git_config(git_default_config, NULL);
@@ -182,6 +182,7 @@ int cmd_check_ignore(int argc, const char **argv, const char *prefix)
if (!no_index && read_cache() < 0)
die(_("index file corrupt"));
dir_init(&dir);
setup_standard_excludes(&dir);
if (stdin_paths) {

View File

@@ -53,7 +53,7 @@ static void packet_to_pc_item(const char *buffer, int len,
static void report_result(struct parallel_checkout_item *pc_item)
{
struct pc_item_result res = { 0 };
struct pc_item_result res;
size_t size;
res.id = pc_item->id;

View File

@@ -378,6 +378,9 @@ static int checkout_worktree(const struct checkout_opts *opts,
if (pc_workers > 1)
init_parallel_checkout();
/* TODO: audit for interaction with sparse-index. */
ensure_full_index(&the_index);
for (pos = 0; pos < active_nr; pos++) {
struct cache_entry *ce = active_cache[pos];
if (ce->ce_flags & CE_MATCHED) {
@@ -404,7 +407,7 @@ static int checkout_worktree(const struct checkout_opts *opts,
mem_pool_discard(&ce_mem_pool, should_validate_cache_entries());
remove_marked_cache_entries(&the_index, 1);
remove_scheduled_dirs();
errs |= finish_delayed_checkout(&state, &nr_checkouts, opts->show_progress);
errs |= finish_delayed_checkout(&state, &nr_checkouts);
if (opts->count_checkout_paths) {
if (nr_unmerged)
@@ -527,6 +530,8 @@ static int checkout_paths(const struct checkout_opts *opts,
* Make sure all pathspecs participated in locating the paths
* to be checked out.
*/
/* TODO: audit for interaction with sparse-index. */
ensure_full_index(&the_index);
for (pos = 0; pos < active_nr; pos++)
if (opts->overlay_mode)
mark_ce_for_checkout_overlay(active_cache[pos],
@@ -1588,9 +1593,6 @@ static int checkout_main(int argc, const char **argv, const char *prefix,
git_config(git_checkout_config, opts);
prepare_repo_settings(the_repository);
the_repository->settings.command_requires_full_index = 0;
opts->track = BRANCH_TRACK_UNSPECIFIED;
if (!opts->accept_pathspec && !opts->accept_ref)

View File

@@ -641,7 +641,7 @@ static int clean_cmd(void)
static int filter_by_patterns_cmd(void)
{
struct dir_struct dir = DIR_INIT;
struct dir_struct dir;
struct strbuf confirm = STRBUF_INIT;
struct strbuf **ignore_list;
struct string_list_item *item;
@@ -665,6 +665,7 @@ static int filter_by_patterns_cmd(void)
if (!confirm.len)
break;
dir_init(&dir);
pl = add_pattern_list(&dir, EXC_CMDL, "manual exclude");
ignore_list = strbuf_split_max(&confirm, ' ', 0);
@@ -889,7 +890,7 @@ int cmd_clean(int argc, const char **argv, const char *prefix)
int ignored_only = 0, config_set = 0, errors = 0, gone = 1;
int rm_flags = REMOVE_DIR_KEEP_NESTED_GIT;
struct strbuf abs_path = STRBUF_INIT;
struct dir_struct dir = DIR_INIT;
struct dir_struct dir;
struct pathspec pathspec;
struct strbuf buf = STRBUF_INIT;
struct string_list exclude_list = STRING_LIST_INIT_NODUP;
@@ -920,6 +921,7 @@ int cmd_clean(int argc, const char **argv, const char *prefix)
argc = parse_options(argc, argv, prefix, options, builtin_clean_usage,
0);
dir_init(&dir);
if (!interactive && !dry_run && !force) {
if (config_set)
die(_("clean.requireForce set to true and neither -i, -n, nor -f given; "

View File

@@ -424,13 +424,11 @@ static void copy_or_link_directory(struct strbuf *src, struct strbuf *dest,
int src_len, dest_len;
struct dir_iterator *iter;
int iter_status;
unsigned int flags;
struct strbuf realpath = STRBUF_INIT;
mkdir_if_missing(dest->buf, 0777);
flags = DIR_ITERATOR_PEDANTIC | DIR_ITERATOR_FOLLOW_SYMLINKS;
iter = dir_iterator_begin(src->buf, flags);
iter = dir_iterator_begin(src->buf, DIR_ITERATOR_PEDANTIC);
if (!iter)
die_errno(_("failed to start iterator over '%s'"), src->buf);
@@ -446,6 +444,10 @@ static void copy_or_link_directory(struct strbuf *src, struct strbuf *dest,
strbuf_setlen(dest, dest_len);
strbuf_addstr(dest, iter->relative_path);
if (S_ISLNK(iter->st.st_mode))
die(_("symlink '%s' exists, refusing to clone with --local"),
iter->relative_path);
if (S_ISDIR(iter->st.st_mode)) {
mkdir_if_missing(dest->buf, 0777);
continue;
@@ -1320,8 +1322,9 @@ int cmd_clone(int argc, const char **argv, const char *prefix)
}
if (!is_local && !complete_refs_before_fetch) {
if (transport_fetch_refs(transport, mapped_refs))
die(_("remote transport reported error"));
err = transport_fetch_refs(transport, mapped_refs);
if (err)
goto cleanup;
}
remote_head = find_ref_by_name(refs, "HEAD");
@@ -1340,9 +1343,6 @@ int cmd_clone(int argc, const char **argv, const char *prefix)
our_head_points_at = remote_head_points_at;
}
else {
const char *branch;
char *ref;
if (option_branch)
die(_("Remote branch %s not found in upstream %s"),
option_branch, remote_name);
@@ -1353,22 +1353,24 @@ int cmd_clone(int argc, const char **argv, const char *prefix)
remote_head_points_at = NULL;
remote_head = NULL;
option_no_checkout = 1;
if (!option_bare) {
const char *branch;
char *ref;
if (transport_ls_refs_options.unborn_head_target &&
skip_prefix(transport_ls_refs_options.unborn_head_target,
"refs/heads/", &branch)) {
ref = transport_ls_refs_options.unborn_head_target;
transport_ls_refs_options.unborn_head_target = NULL;
create_symref("HEAD", ref, reflog_msg.buf);
} else {
branch = git_default_branch_name(0);
ref = xstrfmt("refs/heads/%s", branch);
}
if (transport_ls_refs_options.unborn_head_target &&
skip_prefix(transport_ls_refs_options.unborn_head_target,
"refs/heads/", &branch)) {
ref = transport_ls_refs_options.unborn_head_target;
transport_ls_refs_options.unborn_head_target = NULL;
create_symref("HEAD", ref, reflog_msg.buf);
} else {
branch = git_default_branch_name(0);
ref = xstrfmt("refs/heads/%s", branch);
}
if (!option_bare)
install_branch_config(0, branch, remote_name, ref);
free(ref);
free(ref);
}
}
write_refspec_config(src_ref_prefix, our_head_points_at,
@@ -1380,8 +1382,9 @@ int cmd_clone(int argc, const char **argv, const char *prefix)
if (is_local)
clone_local(path, git_dir);
else if (refs && complete_refs_before_fetch) {
if (transport_fetch_refs(transport, mapped_refs))
die(_("remote transport reported error"));
err = transport_fetch_refs(transport, mapped_refs);
if (err)
goto cleanup;
}
update_remote_refs(refs, mapped_refs, remote_head_points_at,
@@ -1409,6 +1412,7 @@ int cmd_clone(int argc, const char **argv, const char *prefix)
junk_mode = JUNK_LEAVE_REPO;
err = checkout(submodule_progress);
cleanup:
free(remote_name);
strbuf_release(&reflog_msg);
strbuf_release(&branch_top);
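The clone hunks above trade an immediate die(_("remote transport reported error")) for recording the error and jumping to the shared cleanup label, so remote_name and the strbufs are still released on the failure path. A generic sketch of that goto-cleanup pattern, with hypothetical steps rather than clone.c's real ones:

#include <stdlib.h>

/* Hypothetical fallible step. */
static int fetch_step(void)
{
        return -1;      /* pretend the transport failed */
}

static int run(void)
{
        int err = 0;
        char *scratch = malloc(64);

        if (!scratch)
                return -1;

        err = fetch_step();
        if (err)
                goto cleanup;   /* skip the remaining work, keep the teardown */

        /* ... further steps, each jumping to cleanup on error ... */

cleanup:
        free(scratch);          /* shared teardown runs on success and failure alike */
        return err;
}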

View File

@@ -29,7 +29,7 @@ int cmd_column(int argc, const char **argv, const char *prefix)
OPT_INTEGER(0, "raw-mode", &colopts, N_("layout to use")),
OPT_INTEGER(0, "width", &copts.width, N_("maximum width")),
OPT_STRING(0, "indent", &copts.indent, N_("string"), N_("padding space on left border")),
OPT_STRING(0, "nl", &copts.nl, N_("string"), N_("padding space on right border")),
OPT_INTEGER(0, "nl", &copts.nl, N_("padding space on right border")),
OPT_INTEGER(0, "padding", &copts.padding, N_("padding space between columns")),
OPT_END()
};

View File

@@ -88,7 +88,9 @@ static int parse_file_arg_callback(const struct option *opt,
if (!strcmp(arg, "-"))
fd = 0;
else {
fd = xopen(arg, O_RDONLY);
fd = open(arg, O_RDONLY);
if (fd < 0)
die_errno(_("git commit-tree: failed to open '%s'"), arg);
}
if (strbuf_read(buf, fd, 0) < 0)
die_errno(_("git commit-tree: failed to read '%s'"), arg);

View File

@@ -889,22 +889,7 @@ static int prepare_to_commit(const char *index_file, const char *prefix,
int ident_shown = 0;
int saved_color_setting;
struct ident_split ci, ai;
const char *hint_cleanup_all = allow_empty_message ?
_("Please enter the commit message for your changes."
" Lines starting\nwith '%c' will be ignored.\n") :
_("Please enter the commit message for your changes."
" Lines starting\nwith '%c' will be ignored, and an empty"
" message aborts the commit.\n");
const char *hint_cleanup_space = allow_empty_message ?
_("Please enter the commit message for your changes."
" Lines starting\n"
"with '%c' will be kept; you may remove them"
" yourself if you want to.\n") :
_("Please enter the commit message for your changes."
" Lines starting\n"
"with '%c' will be kept; you may remove them"
" yourself if you want to.\n"
"An empty message aborts the commit.\n");
if (whence != FROM_COMMIT) {
if (cleanup_mode == COMMIT_MSG_CLEANUP_SCISSORS &&
!merge_contains_scissors)
@@ -926,12 +911,20 @@ static int prepare_to_commit(const char *index_file, const char *prefix,
fprintf(s->fp, "\n");
if (cleanup_mode == COMMIT_MSG_CLEANUP_ALL)
status_printf(s, GIT_COLOR_NORMAL, hint_cleanup_all, comment_line_char);
status_printf(s, GIT_COLOR_NORMAL,
_("Please enter the commit message for your changes."
" Lines starting\nwith '%c' will be ignored, and an empty"
" message aborts the commit.\n"), comment_line_char);
else if (cleanup_mode == COMMIT_MSG_CLEANUP_SCISSORS) {
if (whence == FROM_COMMIT && !merge_contains_scissors)
wt_status_add_cut_line(s->fp);
} else /* COMMIT_MSG_CLEANUP_SPACE, that is. */
status_printf(s, GIT_COLOR_NORMAL, hint_cleanup_space, comment_line_char);
status_printf(s, GIT_COLOR_NORMAL,
_("Please enter the commit message for your changes."
" Lines starting\n"
"with '%c' will be kept; you may remove them"
" yourself if you want to.\n"
"An empty message aborts the commit.\n"), comment_line_char);
/*
* These should never fail because they come from our own
@@ -1253,6 +1246,8 @@ static int parse_and_validate_options(int argc, const char *argv[],
if (logfile || have_option_m || use_message)
use_editor = 0;
if (0 <= edit_flag)
use_editor = edit_flag;
/* Sanity check options */
if (amend && !current_head)
@@ -1342,9 +1337,6 @@ static int parse_and_validate_options(int argc, const char *argv[],
}
}
if (0 <= edit_flag)
use_editor = edit_flag;
cleanup_mode = get_cleanup_mode(cleanup_arg, use_editor);
handle_untracked_files_arg(s);
@@ -1518,9 +1510,6 @@ int cmd_status(int argc, const char **argv, const char *prefix)
if (argc == 2 && !strcmp(argv[1], "-h"))
usage_with_options(builtin_status_usage, builtin_status_options);
prepare_repo_settings(the_repository);
the_repository->settings.command_requires_full_index = 0;
status_init_config(&s, git_status_config);
argc = parse_options(argc, argv, prefix,
builtin_status_options,
@@ -1690,9 +1679,6 @@ int cmd_commit(int argc, const char **argv, const char *prefix)
if (argc == 2 && !strcmp(argv[1], "-h"))
usage_with_options(builtin_commit_usage, builtin_commit_options);
prepare_repo_settings(the_repository);
the_repository->settings.command_requires_full_index = 0;
status_init_config(&s, git_commit_config);
s.commit_template = 1;
status_format = STATUS_FORMAT_NONE; /* Ignore status.short */

View File

@@ -2,7 +2,6 @@
#include "cache.h"
#include "config.h"
#include "diff.h"
#include "diff-merges.h"
#include "commit.h"
#include "revision.h"
#include "builtin.h"
@@ -28,12 +27,6 @@ int cmd_diff_index(int argc, const char **argv, const char *prefix)
rev.abbrev = 0;
prefix = precompose_argv_prefix(argc, argv, prefix);
/*
* We need (some of) diff for merges options (e.g., --cc), and we need
* to avoid conflict with our own meaning of "-m".
*/
diff_merges_suppress_m_parsing();
argc = setup_revisions(argc, argv, &rev, NULL);
for (i = 1; i < argc; i++) {
const char *arg = argv[i];
@@ -42,8 +35,6 @@ int cmd_diff_index(int argc, const char **argv, const char *prefix)
option |= DIFF_INDEX_CACHED;
else if (!strcmp(arg, "--merge-base"))
option |= DIFF_INDEX_MERGE_BASE;
else if (!strcmp(arg, "-m"))
rev.match_missing = 1;
else
usage(diff_cache_usage);
}

View File

@@ -26,8 +26,8 @@
static const char builtin_diff_usage[] =
"git diff [<options>] [<commit>] [--] [<path>...]\n"
" or: git diff [<options>] --cached [--merge-base] [<commit>] [--] [<path>...]\n"
" or: git diff [<options>] [--merge-base] <commit> [<commit>...] <commit> [--] [<path>...]\n"
" or: git diff [<options>] --cached [<commit>] [--] [<path>...]\n"
" or: git diff [<options>] <commit> [--merge-base] [<commit>...] <commit> [--] [<path>...]\n"
" or: git diff [<options>] <commit>...<commit>] [--] [<path>...]\n"
" or: git diff [<options>] <blob> <blob>]\n"
" or: git diff [<options>] --no-index [--] <path> <path>]\n"

Some files were not shown because too many files have changed in this diff.