Compare commits


22 Commits

Author SHA1 Message Date
5b1c746c35 Git 2.31.4
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2022-06-23 12:35:25 +02:00
2f8809f9a1 Sync with 2.30.5
* maint-2.30:
  Git 2.30.5
  setup: tighten ownership checks post CVE-2022-24765
  git-compat-util: allow root to access both SUDO_UID and root owned
  t0034: add negative tests and allow git init to mostly work under sudo
  git-compat-util: avoid failing dir ownership checks if running privileged
  t: regression git needs safe.directory when using sudo
2022-06-23 12:35:23 +02:00
88b7be68a4 Git 2.30.5
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2022-06-23 12:31:05 +02:00
3b0bf27049 setup: tighten ownership checks post CVE-2022-24765
8959555cee (setup_git_directory(): add an owner check for the top-level
directory, 2022-03-02) added a function to check for ownership of
repositories using a directory that is representative of them, and a
way to exempt a specific repository from said check if needed, but that
check didn't account for ownership of the gitdir, or (when used) the
gitfile that points to that gitdir.

An attacker could create a git repository in a directory that they can
write into but that is owned by the victim to work around the fix that
was introduced with CVE-2022-24765 to potentially run code as the
victim.

An example that could result in privilege escalation to root on *NIX
would be to set up a repository in a shared tmp directory by running,
for example:

  $ git -C /tmp init

To avoid that, extend the ensure_valid_ownership function so that it
checks all three paths: the worktree, the gitdir and, if used, the
gitfile.

This will have the side effect of tripling the number of stat() calls
when a repository is detected, but the effect is expected to be
minimal, as it is done only once during the directory walk in which Git
looks for a repository.

Additionally make sure to resolve the gitfile (if one was used) to find
the relevant gitdir for checking.
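
As an illustration (the paths here are made up), a gitfile is a plain
text `.git` file in the worktree whose single line names the real
gitdir, which now also gets its ownership checked:

  $ cat /path/to/worktree/.git
  gitdir: /path/to/real/gitdir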

While at it, change the message printed on failure so it is clear we
are referring to the repository by its worktree (or gitdir if it is
bare) and not to a specific directory.

Helped-by: Junio C Hamano <junio@pobox.com>
Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
2022-06-23 12:31:05 +02:00
b779214eaf Merge branch 'cb/path-owner-check-with-sudo'
With a recent update to refuse access to repositories of other
people by default, "sudo make install" and "sudo git describe"
stopped working.  This series intends to loosen the restriction while
keeping it safe.

* cb/path-owner-check-with-sudo:
  t0034: add negative tests and allow git init to mostly work under sudo
  git-compat-util: avoid failing dir ownership checks if running privileged
  t: regression git needs safe.directory when using sudo

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2022-06-23 12:31:04 +02:00
6b11e3d52e git-compat-util: allow root to access both SUDO_UID and root owned
Previous changes introduced a regression that prevents root from
accessing repositories owned by itself when using sudo, because
SUDO_UID takes precedence.

Loosen that restriction by allowing root to access repositories owned
by either uid by default, without having to add a safe.directory
exception.

A previous workaround that was documented in the tests is no longer
needed so it has been removed together with its specially crafted
prerequisite.

Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-06-17 14:03:08 -07:00
b9063afda1 t0034: add negative tests and allow git init to mostly work under sudo
Add a support library that provides one function that can be used to
run a "scriptlet" of commands through sudo.  It helps invoke sudo in
the slightly awkward way that is required to ensure it doesn't block
the call (if a shell is allowed, as tested in the prerequisite) and
that it doesn't run the command through a different shell than the one
we intended.
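
A minimal sketch of what such a helper could look like (illustrative
only; the real library's names and details may differ, and
$TEST_SHELL_PATH here stands for whatever shell the tests intend to
use):

  run_with_sudo () {
    # write the scriptlet read from stdin to a file first, so sudo
    # doesn't consume stdin, then run it explicitly through the
    # intended shell rather than whatever shell sudo would pick
    RUN="$$.sh" &&
    cat >"$RUN" &&
    sudo /bin/sh -c "\"$TEST_SHELL_PATH\" \"$RUN\""
    ret=$?
    rm -f "$RUN"
    return $ret
  }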

Add additional negative tests, as suggested by Junio, that use a new
workspace owned by root.

Document a regression introduced by the previous commits: root can no
longer access directories it owns unless SUDO_UID is removed from the
environment.
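
For example (illustrative, and assuming an `env` that supports `-u`),
the workaround amounts to dropping the variable that sudo exported
before invoking git:

  $ sudo env -u SUDO_UID git -C /root/some-repo status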

The tests document additional ways this new restriction could be worked
around, and the documentation explains why it might instead be
considered a feature; a "fix" is planned for a future change.

Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-05-12 18:12:23 -07:00
ae9abbb63e git-compat-util: avoid failing dir ownership checks if running privileged
bdc77d1d68 (Add a function to determine whether a path is owned by the
current user, 2022-03-02) checks the effective uid of the running
process using geteuid(), but didn't account for cases where that user
was root (because git was invoked through sudo or a compatible tool)
and the original uid that the repository trusted for its config was no
longer known, therefore failing the following otherwise safe call:

  guy@renard ~/Software/uncrustify $ sudo git describe --always --dirty
  [sudo] password for guy:
  fatal: unsafe repository ('/home/guy/Software/uncrustify' is owned by someone else)

Attempt to detect those cases by using the environment variables that
those tools create to keep track of the original user id, and do the
ownership check using that instead.
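
For instance, sudo records the invoking user in SUDO_UID, SUDO_GID and
SUDO_USER, which can be inspected with (the values shown here are just
an example):

  $ sudo env | grep '^SUDO_'
  SUDO_COMMAND=/usr/bin/env
  SUDO_USER=guy
  SUDO_UID=1000
  SUDO_GID=1000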

This assumes the environment the user runs in after becoming privileged
can't be tampered with, and it also restricts the new behavior so that
it only applies when running as root, keeping the most common,
unprivileged case unchanged.  Because of that, it will miss cases where
sudo (or an equivalent) was used to change to another unprivileged
user, or where the tool used to raise privileges didn't track the
original id in a sudo-compatible way.

For compatibility with sudo, the code assumes that uid_t is an unsigned
integer type (which is not required by the standard), because sudo's
codebase uses it that way to generate SUDO_UID.  On systems where uid_t
is signed, sudo might also be patched to treat it as signed, which
could trigger an edge case and a bug (as described in the code), but
that is considered unlikely to happen, and even if it does, the code
would mostly fail safely.  No attempt was therefore made to detect or
prevent it in the code; that might change in the future, based on user
feedback.

Reported-by: Guy Maurel <guy.j@maurel.de>
Helped-by: SZEDER Gábor <szeder.dev@gmail.com>
Helped-by: Randall Becker <rsbecker@nexbridge.com>
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Suggested-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-05-12 18:12:23 -07:00
5f1a3fec8c t: regression git needs safe.directory when using sudo
Originally reported after the release of v2.35.2 (and other maint
branches) for CVE-2022-24765: otherwise harmless commands run through
sudo in a repository owned by the user were being blocked.

Add a new test script with very basic support for running git commands
through sudo, so a reproduction could be implemented; it uses only
`git status` as a proxy for the reported issue.

Note that because of the way sudo interacts with the system, a much
more complete integration with the test framework will require a lot
more work, so that was intentionally punted for now.

The current implementation requires the execution of a special cleanup
function, which should always be kept as the last "test"; otherwise the
standard cleanup functions will fail because they can't remove the
root-owned directories that are used.  This also means that if failures
occur while running, the specifics of the failure might not be kept for
further debugging, and if the test was interrupted, the working
directory will need to be cleaned manually before restarting by
running:

  $ sudo rm -rf trash\ directory.t0034-root-safe-directory/

The test file also uses at least one initial "setup" test that creates
a parallel execution directory under the "root" subdirectory, which
should be used as the top-level directory for all repositories used in
this test file.  Unlike in all other tests, the repository provided by
the test framework should go unused.

Special care should be taken when invoking commands through sudo, since
the environment is otherwise independent of what the test framework set
up: the values of HOME and SHELL may have changed, and several
environment variables relevant to your test may have been dropped.
Indeed, `git status` was used as a proxy because it doesn't even
require commits in the repository and usually doesn't need much from
the environment to run, but a future patch will add calls to `git init`
and those will fail to honor the default branch name unless that
setting is NOT provided through an environment variable (which means
even a CI run could fail that test if enabled incorrectly).

A new SUDO prerequisite is provided that does some sanity checking to
make sure the sudo command that will be used allows passwordless
execution as root without restrictions and doesn't mess with git's
execution path.  This matches what is provided by the macOS agents that
are used as part of GitHub Actions, and probably nowhere else.

These characteristics make this test mostly suitable only for CI, but
it can be executed locally if special care is taken to provide for all
of them in the local configuration, perhaps making use of the sudo
credential cache by first invoking sudo, entering your password if
needed, and then invoking the test with:

  $ GIT_TEST_ALLOW_SUDO=YES ./t0034-root-safe-directory.sh

If it fails to run, it means your local setup doesn't work for the test
because of sudo's configuration or other system settings.  Things that
might help are commenting out sudo's secure_path config and making sure
the account you are using has no restrictions on the commands it can
run through sudo, just like the user in the CI.

For example (assuming your username is marta), you would need something
similar to the following entry in your /etc/sudoers (or equivalent)
file:

  marta	ALL=(ALL:ALL) NOPASSWD: ALL

Reported-by: SZEDER Gábor <szeder.dev@gmail.com>
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-05-12 18:12:23 -07:00
09f66d65f8 Git 2.31.3
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-04-13 15:21:08 -07:00
17083c79ae Git 2.30.4
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-04-13 13:31:29 -07:00
0f85c4a30b setup: opt-out of check with safe.directory=*
With the addition of safe.directory in 8959555ce
(setup_git_directory(): add an owner check for the top-level directory,
2022-03-02), released in v2.35.2, we are receiving feedback from a
variety of users about the feature.

Some users have a very large list of shared repositories and find it
cumbersome to add this config for every one of them.

In a more difficult case, certain workflows involve running Git commands
within containers. The container boundary prevents any global or system
config from communicating `safe.directory` values from the host into the
container. Further, the container almost always runs as a different user
than the owner of the directory in the host.

To simplify the reactions necessary for these users, extend the
definition of the safe.directory config value to include a possible '*'
value.  This value implies that all directories are safe, providing a
single setting to opt out of this protection.
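
For example, a workflow that cannot know the directory owner in advance
could opt out entirely (here via the global config):

  $ git config --global safe.directory '*'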

Note that an empty assignment of safe.directory clears all previous
values, and this is already the case with the "if (!value || !*value)"
condition.

Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-04-13 12:42:51 -07:00
bb50ec3cc3 setup: fix safe.directory key not being checked
Nothing was checking that the configured safe directories actually came
from the safe.directory key, so an unrelated config entry whose value
happened to name a certain directory would also make it a safe
directory.
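
As an illustration (made-up key and path): before this fix, even an
unrelated entry such as

  $ git config --global foo.bar /home/user/repo

would have been treated as if /home/user/repo had been listed under
safe.directory.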

Signed-off-by: Matheus Valadares <me@m28.io>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-04-13 12:42:51 -07:00
e47363e5a8 t0033: add tests for safe.directory
It is difficult to change the ownership of a directory in our test
suite, so introduce a new GIT_TEST_ASSUME_DIFFERENT_OWNER environment
variable to trick Git into thinking we are in a differently-owned
directory.  This allows us to test that the config is parsed correctly.
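
A test can then exercise the ownership check without chown, roughly
like this (the exact error text may differ):

  $ GIT_TEST_ASSUME_DIFFERENT_OWNER=1 git status
  fatal: unsafe repository (...)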

Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-04-13 12:42:49 -07:00
44de39c45c Git 2.31.2
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2022-03-24 00:24:29 +01:00
6a2381a3e5 Sync with 2.30.3
* maint-2.30:
  Git 2.30.3
  setup_git_directory(): add an owner check for the top-level directory
  Add a function to determine whether a path is owned by the current user
2022-03-24 00:24:29 +01:00
cb95038137 Git 2.30.3
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2022-03-24 00:22:17 +01:00
fdcad5a53e Fix GIT_CEILING_DIRECTORIES with C:\ and the likes
When determining the length of the longest ancestor of a given path
with respect to e.g. `GIT_CEILING_DIRECTORIES`, we special-case the
root directory by returning 0 (i.e. we pretend that the path `/` does
not end in a slash by virtually stripping it).

That is the correct behavior because when normalizing paths, the root
directory is special: all other directory paths have their trailing
slash stripped, but not the root directory's path (because it would
become the empty string, which is not a legal path).

However, this special-casing of the root directory in
`longest_ancestor_length()` completely forgets about Windows-style root
directories, e.g. `C:\`. These _also_ get normalized with a trailing
slash (because `C:` would actually refer to the current directory on
that drive, not necessarily to its root directory).

In fc56c7b34b (mingw: accomodate t0060-path-utils for MSYS2,
2016-01-27), we almost got it right. We noticed that
`longest_ancestor_length()` expects a slash _after_ the matched prefix,
and if the prefix already ends in a slash, the normalized path won't
ever match and -1 is returned.

But then that commit went astray: The correct fix is not to adjust the
_tests_ to expect an incorrect -1 when that function is fed a prefix
that ends in a slash, but instead to treat such a prefix as if the
trailing slash had been removed.

Likewise, that function needs to handle the case where it is fed a path
that ends in a slash (not only a prefix that ends in a slash): if it
matches the prefix (plus trailing slash), we still need to verify that
the path does not end there, otherwise the prefix is not actually an
ancestor of the path but identical to it (and we need to return -1 in
that case).

With these two adjustments, we no longer need to play games in t0060
where we only add `$rootoff` if the passed prefix is different from the
MSYS2 pseudo root; instead we also add it for the MSYS2 pseudo root
itself.  We do have to be careful to skip that logic entirely for
Windows paths, though, because they are not subject to that MSYS2
pseudo root treatment.

This patch fixes the scenario where a user has set
`GIT_CEILING_DIRECTORIES=C:\`, which would be ignored otherwise.
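
For example (in Git for Windows' bash, with the value quoted so the
backslash stays literal):

  $ export GIT_CEILING_DIRECTORIES='C:\'

after which repository discovery will no longer walk up into `C:\`.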

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2022-03-24 00:21:08 +01:00
8959555cee setup_git_directory(): add an owner check for the top-level directory
It poses a security risk to search for a git directory outside of the
directories owned by the current user.

For example, it is common in computer pools of educational institutes
to have a "scratch" space: a mounted disk with plenty of space that is
regularly wiped, where any authenticated user can create a directory to
do their work.  Merely navigating to such a space with a Git-enabled
`PS1` when there is a maliciously-crafted `/scratch/.git/` can lead to
a compromised account.

The same holds true in multi-user setups running Windows, as `C:\` is
writable to every authenticated user by default.

To plug this vulnerability, we stop Git from accepting top-level
directories owned by someone other than the current user.  We avoid
looking at the ownership of each and every directory between the
current and the top-level one (if there are any in between) to avoid
introducing a performance bottleneck.

This new default behavior is obviously incompatible with the concept of
shared repositories, where we expect the top-level directory to be owned
by only one of its legitimate users. To re-enable that use case, we add
support for adding exceptions from the new default behavior via the
config setting `safe.directory`.

The `safe.directory` config setting is only respected in the system and
global configs, not from repository configs or via the command-line, and
can have multiple values to allow for multiple shared repositories.
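
For example, a shared repository (the path here is made up) can be
exempted with:

  $ git config --global --add safe.directory /srv/shared/project.git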

We are particularly careful to provide a helpful message to any user
trying to use a shared repository.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2022-03-21 13:16:26 +01:00
bdc77d1d68 Add a function to determine whether a path is owned by the current user
This function will be used in the next commit to prevent
`setup_git_directory()` from discovering a repository in a directory
that is owned by someone other than the current user.

Note: We cannot simply use `st.st_uid` on Windows just like we do on
Linux and other Unix-like platforms: according to
https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/stat-functions
this field is always zero on Windows (because Windows' idea of a user ID
does not fit into a single numerical value). Therefore, we have to do
something a little involved to replicate the same functionality there.

Also note: On Windows, a user's home directory is not actually owned by
said user, but by the administrator.  For all practical purposes it is
under the user's control, though, so we pretend that it is owned by the
user.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2022-03-21 13:16:26 +01:00
2a9a5862e5 Merge branch 'cb/mingw-gmtime-r'
Build fix on Windows.

* cb/mingw-gmtime-r:
  mingw: avoid fallback for {local,gm}time_r()

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2022-03-17 12:52:12 +01:00
6e7ad1e4c2 mingw: avoid fallback for {local,gm}time_r()
mingw-w64's pthread_unistd.h had a bug that mistakenly defined
_POSIX_THREAD_SAFE_FUNCTIONS (even though the *lockfile() functions it
requires[1] are not supported), and that had been worked around since
3ecd153a3b (compat/mingw: support MSys2-based MinGW build, 2016-01-14).

The bug was fixed in winpthreads, but as a side effect the fix leaves
the reentrant functions from time.h no longer visible and therefore
breaks the build.

Since the intention all along was to avoid using the fallback functions,
formalize the use of POSIX by setting the corresponding feature flag and
compile out the implementation for the fallback functions.

[1] https://unix.org/whitepapers/reentrant.html

Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Acked-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-03-17 12:52:12 +01:00
451 changed files with 4577 additions and 20054 deletions


@ -186,11 +186,6 @@ jobs:
## Unzip and remove the artifact
unzip artifacts.zip
rm artifacts.zip
- name: initialize vcpkg
uses: actions/checkout@v2
with:
repository: 'microsoft/vcpkg'
path: 'compat/vcbuild/vcpkg'
- name: download vcpkg artifacts
shell: powershell
run: |

.gitignore

@ -33,7 +33,6 @@
/git-check-mailmap
/git-check-ref-format
/git-checkout
/git-checkout--worker
/git-checkout-index
/git-cherry
/git-cherry-pick
@ -163,7 +162,6 @@
/git-stripspace
/git-submodule
/git-submodule--helper
/git-subtree
/git-svn
/git-switch
/git-symbolic-ref


@ -220,7 +220,6 @@ Philipp A. Hartmann <pah@qo.cx> <ph@sorgh.de>
Philippe Bruhat <book@cpan.org>
Ralf Thielow <ralf.thielow@gmail.com> <ralf.thielow@googlemail.com>
Ramsay Jones <ramsay@ramsayjones.plus.com> <ramsay@ramsay1.demon.co.uk>
Ramkumar Ramachandra <r@artagnon.com> <artagnon@gmail.com>
Randall S. Becker <randall.becker@nexbridge.ca> <rsbecker@nexbridge.com>
René Scharfe <l.s.r@web.de> <rene.scharfe@lsrfire.ath.cx>
René Scharfe <l.s.r@web.de> Rene Scharfe


@ -175,11 +175,6 @@ For shell scripts specifically (not exhaustive):
does not have such a problem.
- Even though "local" is not part of POSIX, we make heavy use of it
in our test suite. We do not use it in scripted Porcelains, and
hopefully nobody starts using "local" before they are reimplemented
in C ;-)
For C programs:
@ -503,12 +498,7 @@ Error Messages
- Do not end error messages with a full stop.
- Do not capitalize the first word, only because it is the first word
in the message ("unable to open %s", not "Unable to open %s"). But
"SHA-3 not supported" is fine, because the reason the first word is
capitalized is not because it is at the beginning of the sentence,
but because the word would be spelled in capital letters even when
it appeared in the middle of the sentence.
- Do not capitalize ("unable to open %s", not "Unable to open %s")
- Say what the error is first ("cannot open %s", not "%s: cannot open")


@ -2,8 +2,6 @@
MAN1_TXT =
MAN5_TXT =
MAN7_TXT =
HOWTO_TXT =
DOC_DEP_TXT =
TECH_DOCS =
ARTICLES =
SP_ARTICLES =
@ -44,11 +42,6 @@ MAN7_TXT += gittutorial-2.txt
MAN7_TXT += gittutorial.txt
MAN7_TXT += gitworkflows.txt
HOWTO_TXT += $(wildcard howto/*.txt)
DOC_DEP_TXT += $(wildcard *.txt)
DOC_DEP_TXT += $(wildcard config/*.txt)
ifdef MAN_FILTER
MAN_TXT = $(filter $(MAN_FILTER),$(MAN1_TXT) $(MAN5_TXT) $(MAN7_TXT))
else
@ -83,7 +76,6 @@ SP_ARTICLES += howto/rebuild-from-update-hook
SP_ARTICLES += howto/rebase-from-internal-branch
SP_ARTICLES += howto/keep-canonical-history-correct
SP_ARTICLES += howto/maintain-git
SP_ARTICLES += howto/coordinate-embargoed-releases
API_DOCS = $(patsubst %.txt,%,$(filter-out technical/api-index-skel.txt technical/api-index.txt, $(wildcard technical/api-*.txt)))
SP_ARTICLES += $(API_DOCS)
@ -98,7 +90,6 @@ TECH_DOCS += technical/multi-pack-index
TECH_DOCS += technical/pack-format
TECH_DOCS += technical/pack-heuristics
TECH_DOCS += technical/pack-protocol
TECH_DOCS += technical/parallel-checkout
TECH_DOCS += technical/partial-clone
TECH_DOCS += technical/protocol-capabilities
TECH_DOCS += technical/protocol-common
@ -293,7 +284,7 @@ docdep_prereqs = \
mergetools-list.made $(mergetools_txt) \
cmd-list.made $(cmds_txt)
doc.dep : $(docdep_prereqs) $(DOC_DEP_TXT) build-docdep.perl
doc.dep : $(docdep_prereqs) $(wildcard *.txt) $(wildcard config/*.txt) build-docdep.perl
$(QUIET_GEN)$(RM) $@+ $@ && \
$(PERL_PATH) ./build-docdep.perl >$@+ $(QUIET_STDERR) && \
mv $@+ $@
@ -436,9 +427,9 @@ $(patsubst %.txt,%.texi,$(MAN_TXT)): %.texi : %.xml
$(DOCBOOK2X_TEXI) --to-stdout $*.xml >$@+ && \
mv $@+ $@
howto-index.txt: howto-index.sh $(HOWTO_TXT)
howto-index.txt: howto-index.sh $(wildcard howto/*.txt)
$(QUIET_GEN)$(RM) $@+ $@ && \
'$(SHELL_PATH_SQ)' ./howto-index.sh $(sort $(HOWTO_TXT)) >$@+ && \
'$(SHELL_PATH_SQ)' ./howto-index.sh $(sort $(wildcard howto/*.txt)) >$@+ && \
mv $@+ $@
$(patsubst %,%.html,$(ARTICLES)) : %.html : %.txt
@ -447,7 +438,7 @@ $(patsubst %,%.html,$(ARTICLES)) : %.html : %.txt
WEBDOC_DEST = /pub/software/scm/git/docs
howto/%.html: ASCIIDOC_EXTRA += -a git-relative-html-prefix=../
$(patsubst %.txt,%.html,$(HOWTO_TXT)): %.html : %.txt GIT-ASCIIDOCFLAGS
$(patsubst %.txt,%.html,$(wildcard howto/*.txt)): %.html : %.txt GIT-ASCIIDOCFLAGS
$(QUIET_ASCIIDOC)$(RM) $@+ $@ && \
sed -e '1,/^$$/d' $< | \
$(TXT_TO_HTML) - >$@+ && \
@ -479,13 +470,7 @@ print-man1:
@for i in $(MAN1_TXT); do echo $$i; done
lint-docs::
$(QUIET_LINT)$(PERL_PATH) lint-gitlink.perl \
$(HOWTO_TXT) $(DOC_DEP_TXT) \
--section=1 $(MAN1_TXT) \
--section=5 $(MAN5_TXT) \
--section=7 $(MAN7_TXT); \
$(PERL_PATH) lint-man-end-blurb.perl $(MAN_TXT); \
$(PERL_PATH) lint-man-section-order.perl $(MAN_TXT);
$(QUIET_LINT)$(PERL_PATH) lint-gitlink.perl
ifeq ($(wildcard po/Makefile),po/Makefile)
doc-l10n install-l10n::


@ -0,0 +1,24 @@
Git v2.30.3 Release Notes
=========================
This release addresses the security issue CVE-2022-24765.
Fixes since v2.30.2
-------------------
* Build fix on Windows.
* Fix `GIT_CEILING_DIRECTORIES` with Windows-style root directories.
* CVE-2022-24765:
On multi-user machines, Git users might find themselves
unexpectedly in a Git worktree, e.g. when another user created a
repository in `C:\.git`, in a mounted network drive or in a
scratch space. Merely having a Git-aware prompt that runs `git
status` (or `git diff`) and navigating to a directory which is
supposedly not a Git worktree, or opening such a directory in an
editor or IDE such as VS Code or Atom, will potentially run
commands defined by that other user.
Credit for finding this vulnerability goes to 俞晨东; The fix was
authored by Johannes Schindelin.


@ -0,0 +1,21 @@
Git v2.30.4 Release Notes
=========================
This release contains minor fix-ups for the changes that went into
Git 2.30.3, which was made to address CVE-2022-24765.
* The code that was meant to parse the new `safe.directory`
configuration variable was not checking what configuration
variable was being fed to it, which has been corrected.
* '*' can be used as the value for the `safe.directory` variable to
signal that the user considers that any directory is safe.
Derrick Stolee (2):
t0033: add tests for safe.directory
setup: opt-out of check with safe.directory=*
Matheus Valadares (1):
setup: fix safe.directory key not being checked


@ -0,0 +1,12 @@
Git v2.30.5 Release Notes
=========================
This release contains minor fix-ups for the changes that went into
Git 2.30.3 and 2.30.4, addressing CVE-2022-29187.
* The safety check that verifies a safe ownership of the Git
worktree is now extended to also cover the ownership of the Git
directory (and the `.git` file, if there is any).
Carlo Marcelo Arenas Belón (1):
setup: tighten ownership checks post CVE-2022-24765


@ -0,0 +1,6 @@
Git v2.31.2 Release Notes
=========================
This release merges up the fixes that appear in v2.30.3 to address
the security issue CVE-2022-24765; see the release notes for that
version for details.


@ -0,0 +1,4 @@
Git v2.31.3 Release Notes
=========================
This release merges up the fixes that appear in v2.31.3.


@ -0,0 +1,6 @@
Git v2.31.4 Release Notes
=========================
This release merges up the fixes that appear in v2.30.5 to address
the security issue CVE-2022-29187; see the release notes for that
version for details.


@ -1,397 +0,0 @@
Git 2.32 Release Notes
======================
Backward compatibility notes
----------------------------
* ".gitattributes", ".gitignore", and ".mailmap" files that are
symbolic links are ignored.
* "git apply --3way" used to first attempt a straight application,
and only fell back to the 3-way merge algorithm when the straight
application failed. Starting with this version, the command will
first try the 3-way merge algorithm and only when it fails (either
resulting with conflict or the base versions of blobs are missing),
falls back to the usual patch application.
Updates since v2.31
-------------------
UI, Workflows & Features
* It does not make sense to make ".gitattributes", ".gitignore" and
".mailmap" symlinks, as they are supposed to be usable from the
object store (think: bare repositories where HEAD:.mailmap etc. are
used). When these files are symbolic links, we used to read the
contents of the files pointed by them by mistake, which has been
corrected.
* "git stash show" learned to optionally show untracked part of the
stash.
* "git log --format='...'" learned "%(describe)" placeholder.
* "git repack" so far has been only capable of repacking everything
under the sun into a single pack (or split by size). A cleverer
strategy to reduce the cost of repacking a repository has been
introduced.
* The http codepath learned to let the credential layer to cache the
password used to unlock a certificate that has successfully been
used.
* "git commit --fixup=<commit>", which was to tweak the changes made
to the contents while keeping the original log message intact,
learned "--fixup=(amend|reword):<commit>", that can be used to
tweak both the message and the contents, and only the message,
respectively.
* When accessing a server with a URL like https://user:pass@site/, we
did not fall back to the basic authentication with the
credential material embedded in the URL after the "Negotiate"
authentication failed. Now we do.
* "git send-email" learned to honor the core.hooksPath configuration.
* "git format-patch -v<n>" learned to allow a reroll count that is
not an integer.
* "git commit" learned "--trailer <key>[=<value>]" option; together
with the interpret-trailers command, this will make it easier to
support custom trailers.
* "git clone --reject-shallow" option fails the clone as soon as we
notice that we are cloning from a shallow repository.
* A configuration variable has been added to force tips of certain
refs to be given a reachability bitmap.
* "gitweb" learned "e-mail privacy" feature to redact strings that
look like e-mail addresses on various pages.
* "git apply --3way" has always been "to fall back to 3-way merge
only when straight application fails". Swap the order of falling
back so that 3-way is always attempted first (only when the option
is given, of course) and then straight patch application is used as
a fallback when it fails.
* "git apply" now takes "--3way" and "--cached" at the same time, and
work and record results only in the index.
* The command line completion (in contrib/) has learned that
CHERRY_PICK_HEAD is a possible pseudo-ref.
* Userdiff patterns for "Scheme" has been added.
* "git log" learned "--diff-merges=<style>" option, with an
associated configuration variable log.diffMerges.
* "git log --format=..." placeholders learned %ah/%ch placeholders to
request the --date=human output.
* Replace GIT_CONFIG_NOSYSTEM mechanism to decline from reading the
system-wide configuration file with GIT_CONFIG_SYSTEM that lets
users specify from which file to read the system-wide configuration
(setting it to an empty file would essentially be the same as
setting NOSYSTEM), and introduce GIT_CONFIG_GLOBAL to override the
per-user configuration in $HOME/.gitconfig.
* "git add" and "git rm" learned not to touch those paths that are
outside of sparse checkout.
* "git rev-list" learns the "--filter=object:type=<type>" option,
which can be used to exclude objects of the given kind from the
packfile generated by pack-objects.
* The command line completion (in contrib/) for "git stash" has been
updated.
* "git subtree" updates.
* It is now documented that "format-patch" skips merges.
* Options to "git pack-objects" that take numeric values like
--window and --depth should not accept negative values; the input
validation has been tightened.
* The way the command line specified by the trailer.<token>.command
configuration variable receives the end-user supplied value was
both error prone and misleading. An alternative to achieve the
same goal in a safer and more intuitive way has been added, as
the trailer.<token>.cmd configuration variable, to replace it.
* "git add -i --dry-run" does not dry-run, which was surprising. The
combination of options has taught to error out.
* "git push" learns to discover common ancestor with the receiving
end over protocol v2. This will hopefully make "git push" as
efficient as "git fetch" in avoiding objects from getting
transferred unnecessarily.
* "git mailinfo" (hence "git am") learned the "--quoted-cr" option to
control how lines ending with CRLF wrapped in base64 or qp are
handled.
Performance, Internal Implementation, Development Support etc.
* Rename detection rework continues.
* GIT_TEST_FAIL_PREREQS is a mechanism to skip test pieces with
prerequisites to catch broken tests that depend on the side effects
of optional pieces, but did not work at all when negative
prerequisites were involved.
(merge 27d578d904 jk/fail-prereq-testfix later to maint).
* "git diff-index" codepath has been taught to trust fsmonitor status
to reduce number of lstat() calls.
(merge 7e5aa13d2c nk/diff-index-fsmonitor later to maint).
* Reorganize Makefile to allow building git.o and other essential
objects without extra stuff needed only for testing.
* Preparatory API changes for parallel checkout.
* A simple IPC interface gets introduced to build services like
fsmonitor on top.
* Fsck API clean-up.
* SECURITY.md that is facing individual contributors and end users
has been introduced. Also a procedure to follow when preparing
embargoed releases has been spelled out.
(merge 09420b7648 js/security-md later to maint).
* Optimize "rev-list --use-bitmap-index --objects" corner case that
uses negative tags as the stopping points.
* CMake update for vsbuild.
* An on-disk reverse-index to map the in-pack location of an object
back to its object name across multiple packfiles is introduced.
* Generate [ec]tags under $(QUIET_GEN).
* Clean-up codepaths that implements "git send-email --validate"
option and improves the message from it.
* The last remnant of gettext-poison has been removed.
* The test framework has been taught to optionally turn the default
merge strategy to "ort" throughout the system where we use
three-way merges internally, like cherry-pick, rebase etc.,
primarily to enhance its test coverage (the strategy has been
available as an explicit "-s ort" choice).
* A bit of code clean-up and a lot of test clean-up around userdiff
area.
* Handling of "promisor packs" that allows certain objects to be
missing and lazily retrievable has been optimized (a bit).
* When packet_write() fails, we gave an extra error message
unnecessarily, which has been corrected.
* The checkout machinery has been taught to perform the actual
write-out of the files in parallel when able.
* Show errno in the trace output in the error codepath that calls
read_raw_ref method.
* Effort to make the command line completion (in contrib/) safe with
"set -u" continues.
* Tweak a few tests for "log --format=..." that show timestamps in
various formats.
* The reflog expiry machinery has been taught to emit trace events.
* Over-the-wire protocol learns a new request type to ask for object
sizes given a list of object names.
Fixes since v2.31
-----------------
* The fsmonitor interface read from its input without making sure
there is something to read from. This bug is new in 2.31
timeframe.
* The data structure used by fsmonitor interface was not properly
duplicated during an in-core merge, leading to use-after-free etc.
* "git bisect" reimplemented more in C during 2.30 timeframe did not
take an annotated tag as a good/bad endpoint well. This regression
has been corrected.
* Fix macros that can silently inject unintended null-statements.
* CALLOC_ARRAY() macro replaces many uses of xcalloc().
* Update insn in Makefile comments to run fuzz-all target.
* Fix a corner case bug in "git mv" on case insensitive systems,
which was introduced in 2.29 timeframe.
* We had a code to diagnose and die cleanly when a required
clean/smudge filter is missing, but an assert before that
unnecessarily fired, hiding the end-user facing die() message.
(merge 6fab35f748 mt/cleanly-die-upon-missing-required-filter later to maint).
* Update C code that sets a few configuration variables when a remote
is configured so that it spells configuration variable names in the
canonical camelCase.
(merge 0f1da600e6 ab/remote-write-config-in-camel-case later to maint).
* A new configuration variable has been introduced to allow choosing
which version of the generation number gets used in the
commit-graph file.
(merge 702110aac6 ds/commit-graph-generation-config later to maint).
* Perf test update to work better in secondary worktrees.
(merge 36e834abc1 jk/perf-in-worktrees later to maint).
* Updates to memory allocation code around the use of pcre2 library.
(merge c1760352e0 ab/grep-pcre2-allocfix later to maint).
* "git -c core.bare=false clone --bare ..." would have segfaulted,
which has been corrected.
(merge 75555676ad bc/clone-bare-with-conflicting-config later to maint).
* When "git checkout" removes a path that does not exist in the
commit it is checking out, it wasn't careful enough not to follow
symbolic links, which has been corrected.
(merge fab78a0c3d mt/checkout-remove-nofollow later to maint).
* A few option description strings started with capital letters,
which were corrected.
(merge 5ee90326dc cc/downcase-opt-help later to maint).
* Plug or annotate remaining leaks that trigger while running the
very basic set of tests.
(merge 68ffe095a2 ah/plugleaks later to maint).
* The hashwrite() API uses a buffering mechanism to avoid calling
write(2) too frequently. This logic has been refactored to be
easier to understand.
(merge ddaf1f62e3 ds/clarify-hashwrite later to maint).
* "git cherry-pick/revert" with or without "--[no-]edit" did not spawn
the editor as expected (e.g. "revert --no-edit" after a conflict
still asked to edit the message), which has been corrected.
(merge 39edfd5cbc en/sequencer-edit-upon-conflict-fix later to maint).
* "git daemon" has been tightened against systems that take backslash
as directory separator.
(merge 9a7f1ce8b7 rs/daemon-sanitize-dir-sep later to maint).
* A NULL-dereference bug has been corrected in an error codepath in
"git for-each-ref", "git branch --list" etc.
(merge c685450880 jk/ref-filter-segfault-fix later to maint).
* Streamline the codepath to fix the UTF-8 encoding issues in the
argv[] and the prefix on macOS.
(merge c7d0e61016 tb/precompose-prefix-simplify later to maint).
* The command-line completion script (in contrib/) had a couple of
references that would have given a warning under the "-u" (nounset)
option.
(merge c5c0548d79 vs/completion-with-set-u later to maint).
* When "git pack-objects" makes a literal copy of a part of existing
packfile using the reachability bitmaps, its update to the progress
meter was broken.
(merge 8e118e8490 jk/pack-objects-bitmap-progress-fix later to maint).
* The dependencies for config-list.h and command-list.h were broken
when the former was split out of the latter, which has been
corrected.
(merge 56550ea718 sg/bugreport-fixes later to maint).
* "git push --quiet --set-upstream" was not quiet when setting the
upstream branch configuration, which has been corrected.
(merge f3cce896a8 ow/push-quiet-set-upstream later to maint).
* The prefetch task in "git maintenance" assumed that "git fetch"
from any remote would fetch all its local branches, which would
fetch too much if the user is interested in only a subset of
branches there.
(merge 32f67888d8 ds/maintenance-prefetch-fix later to maint).
* Clarify that pathnames recorded in Git trees are most often (but
not necessarily) encoded in UTF-8.
(merge 9364bf465d ab/pathname-encoding-doc later to maint).
* "git --config-env var=val cmd" weren't accepted (only
--config-env=var=val was).
(merge c331551ccf ps/config-env-option-with-separate-value later to maint).
* When the reachability bitmap is in effect, the "do not lose
recently created objects and those that are reachable from them"
safety to protect us from races were disabled by mistake, which has
been corrected.
(merge 2ba582ba4c jk/prune-with-bitmap-fix later to maint).
* Cygwin pathname handling fix.
(merge bccc37fdc7 ad/cygwin-no-backslashes-in-paths later to maint).
* "git rebase --[no-]reschedule-failed-exec" did not work well with
its configuration variable, which has been corrected.
(merge e5b32bffd1 ab/rebase-no-reschedule-failed-exec later to maint).
* Portability fix for command line completion script (in contrib/).
(merge f2acf763e2 si/zsh-complete-comment-fix later to maint).
* "git repack -A -d" in a partial clone unnecessarily loosened
objects in promisor pack.
* "git bisect skip" when custom words are used for new/old did not
work, which has been corrected.
* A few variants of the informational message "Already up-to-date" have
been rephrased.
(merge ad9322da03 js/merge-already-up-to-date-message-reword later to maint).
* "git submodule update --quiet" did not propagate the quiet option
down to underlying "git fetch", which has been corrected.
(merge 62af4bdd42 nc/submodule-update-quiet later to maint).
* Document that our test can use "local" keyword.
(merge a84fd3bcc6 jc/test-allows-local later to maint).
* The word-diff mode has been taught to work better with a word
regexp that can match an empty string.
(merge 0324e8fc6b pw/word-diff-zero-width-matches later to maint).
* "git p4" learned to find branch points more efficiently.
(merge 6b79818bfb jk/p4-locate-branch-point-optim later to maint).
* When "git update-ref -d" removes a ref that is packed, it left
empty directories under $GIT_DIR/refs/ for
(merge 5f03e5126d wc/packed-ref-removal-cleanup later to maint).
* Other code cleanup, docfix, build fix, etc.
(merge f451960708 dl/cat-file-doc-cleanup later to maint).
(merge 12604a8d0c sv/t9801-test-path-is-file-cleanup later to maint).
(merge ea7e63921c jr/doc-ignore-typofix later to maint).
(merge 23c781f173 ps/update-ref-trans-hook-doc later to maint).
(merge 42efa1231a jk/filter-branch-sha256 later to maint).
(merge 4c8e3dca6e tb/push-simple-uses-branch-merge-config later to maint).
(merge 6534d436a2 bs/asciidoctor-installation-hints later to maint).
(merge 47957485b3 ab/read-tree later to maint).
(merge 2be927f3d1 ab/diff-no-index-tests later to maint).
(merge 76593c09bb ab/detox-gettext-tests later to maint).
(merge 28e29ee38b jc/doc-format-patch-clarify later to maint).
(merge fc12b6fdde fm/user-manual-use-preface later to maint).
(merge dba94e3a85 cc/test-helper-bloom-usage-fix later to maint).
(merge 61a7660516 hn/reftable-tables-doc-update later to maint).
(merge 81ed96a9b2 jt/fetch-pack-request-fix later to maint).
(merge 151b6c2dd7 jc/doc-do-not-capitalize-clarification later to maint).
(merge 9160068ac6 js/access-nul-emulation-on-windows later to maint).
(merge 7a14acdbe6 po/diff-patch-doc later to maint).
(merge f91371b948 pw/patience-diff-clean-up later to maint).
(merge 3a7f0908b6 mt/clean-clean later to maint).
(merge d4e2d15a8b ab/streaming-simplify later to maint).
(merge 0e59f7ad67 ah/merge-ort-i18n later to maint).
(merge e6f68f62e0 ls/typofix later to maint).


@ -117,13 +117,10 @@ If in doubt which identifier to use, run `git log --no-merges` on the
files you are modifying to see the current conventions.
[[summary-section]]
The title sentence after the "area:" prefix omits the full stop at the
end, and its first word is not capitalized unless there is a reason to
capitalize it other than because it is the first word in the sentence.
E.g. "doc: clarify...", not "doc: Clarify...", or "githooks.txt:
improve...", not "githooks.txt: Improve...". But "refs: HEAD is also
treated as a ref" is correct, as we spell `HEAD` in all caps even when
it appears in the middle of a sentence.
It's customary to start the remainder of the first line after "area: "
with a lower-case letter. E.g. "doc: clarify...", not "doc:
Clarify...", or "githooks.txt: improve...", not "githooks.txt:
Improve...".
[[meaningful-message]]
The body should provide a meaningful commit message, which:


@ -440,6 +440,8 @@ include::config/rerere.txt[]
include::config/reset.txt[]
include::config/safe.txt[]
include::config/sendemail.txt[]
include::config/sequencer.txt[]


@ -119,8 +119,4 @@ advice.*::
addEmptyPathspec::
Advice shown if a user runs the add command without providing
the pathspec parameter.
updateSparsePath::
Advice shown when either linkgit:git-add[1] or linkgit:git-rm[1]
is asked to update index entries outside the current sparse
checkout.
--


@ -21,24 +21,3 @@ checkout.guess::
Provides the default value for the `--guess` or `--no-guess`
option in `git checkout` and `git switch`. See
linkgit:git-switch[1] and linkgit:git-checkout[1].
checkout.workers::
The number of parallel workers to use when updating the working tree.
The default is one, i.e. sequential execution. If set to a value less
than one, Git will use as many workers as the number of logical cores
available. This setting and `checkout.thresholdForParallelism` affect
all commands that perform checkout. E.g. checkout, clone, reset,
sparse-checkout, etc.
+
Note: parallel checkout usually delivers better performance for repositories
located on SSDs or over NFS. For repositories on spinning disks and/or machines
with a small number of cores, the default sequential checkout often performs
better. The size and compression level of a repository might also influence how
well the parallel version performs.
checkout.thresholdForParallelism::
When running parallel checkout with a small number of files, the cost
of subprocess spawning and inter-process communication might outweigh
the parallelization gains. This setting allows to define the minimum
number of files for which parallel checkout should be attempted. The
default is 100.


@ -2,7 +2,3 @@ clone.defaultRemoteName::
The name of the remote to create when cloning a repository. Defaults to
`origin`, and can be overridden by passing the `--origin` command-line
option to linkgit:git-clone[1].
clone.rejectShallow::
Reject to clone a repository if it is a shallow one, can be overridden by
passing option `--reject-shallow` in command line. See linkgit:git-clone[1]


@ -1,9 +1,3 @@
commitGraph.generationVersion::
Specifies the type of generation number version to use when writing
or reading the commit-graph file. If version 1 is specified, then
the corrected commit dates will not be written or read. Defaults to
2.
commitGraph.maxNewFilters::
Specifies the default value for the `--max-new-filters` option of `git
commit-graph write` (c.f., linkgit:git-commit-graph[1]).


@ -14,11 +14,6 @@ index.recordOffsetTable::
Defaults to 'true' if index.threads has been explicitly enabled,
'false' otherwise.
index.sparse::
When enabled, write the index using sparse-directory entries. This
has no effect unless `core.sparseCheckout` and
`core.sparseCheckoutCone` are both enabled. Defaults to 'false'.
index.threads::
Specifies the number of threads to spawn when loading the index.
This is meant to reduce index load time on multiprocessor machines.


@ -24,11 +24,6 @@ log.excludeDecoration::
the config option can be overridden by the `--decorate-refs`
option.
log.diffMerges::
Set default diff format to be used for merge commits. See
`--diff-merges` in linkgit:git-log[1] for details.
Defaults to `separate`.
log.follow::
If `true`, `git log` will act as if the `--follow` option was used when
a single <path> is given. This has the same limitations as `--follow`,


@ -122,21 +122,6 @@ pack.useSparse::
commits contain certain types of direct renames. Default is
`true`.
pack.preferBitmapTips::
When selecting which commits will receive bitmaps, prefer a
commit at the tip of any reference that is a suffix of any value
of this configuration over any other commits in the "selection
window".
+
Note that setting this configuration to `refs/foo` does not mean that
the commits at the tips of `refs/foo/bar` and `refs/foo/baz` will
necessarily be selected. This is because commits are selected for
bitmaps from within a series of windows of variable length.
+
If a commit at the tip of any reference which is a suffix of any value
of this configuration is seen in a window, it is immediately given
preference over any other commit in that window.
pack.writeBitmaps (deprecated)::
This is a deprecated synonym for `repack.writeBitmaps`.


@ -120,10 +120,3 @@ push.useForceIfIncludes::
`--force-if-includes` as an option to linkgit:git-push[1]
in the command line. Adding `--no-force-if-includes` at the
time of push overrides this configuration setting.
push.negotiate::
If set to "true", attempt to reduce the size of the packfile
sent by rounds of negotiation in which the client and the
server attempt to find commits in common. If "false", Git will
rely solely on the server's ref advertisement to find commits
in common.


@ -1,3 +1,10 @@
rebase.useBuiltin::
Unused configuration variable. Used in Git versions 2.20 and
2.21 as an escape hatch to enable the legacy shellscript
implementation of rebase. Now the built-in rewrite of it in C
is always used. Setting this will emit a warning, to alert any
remaining users that setting this now does nothing.
rebase.backend::
Default backend to use for rebasing. Possible choices are
'apply' or 'merge'. In the future, if the merge backend gains


@ -0,0 +1,42 @@
safe.directory::
These config entries specify Git-tracked directories that are
considered safe even if they are owned by someone other than the
current user. By default, Git will refuse to even parse a Git
config of a repository owned by someone else, let alone run its
hooks, and this config setting allows users to specify exceptions,
e.g. for intentionally shared repositories (see the `--shared`
option in linkgit:git-init[1]).
+
This is a multi-valued setting, i.e. you can add more than one directory
via `git config --add`. To reset the list of safe directories (e.g. to
override any such directories specified in the system config), add a
`safe.directory` entry with an empty value.
+
This config setting is only respected when specified in a system or global
config, not when it is specified in a repository config or via the command
line option `-c safe.directory=<path>`.
+
The value of this setting is interpolated, i.e. `~/<path>` expands to a
path relative to the home directory and `%(prefix)/<path>` expands to a
path relative to Git's (runtime) prefix.
+
To completely opt-out of this security check, set `safe.directory` to the
string `*`. This will allow all repositories to be treated as if their
directory was listed in the `safe.directory` list. If `safe.directory=*`
is set in system config and you want to re-enable this protection, then
initialize your list with an empty value before listing the repositories
that you deem safe.
+
As explained, Git only allows you to access repositories owned by
yourself, i.e. the user who is running Git, by default. When Git
is running as 'root' in a non Windows platform that provides sudo,
however, git checks the SUDO_UID environment variable that sudo creates
and will allow access to the uid recorded as its value in addition to
the id from 'root'.
This is to make it easy to perform a common sequence during installation
"make && sudo make install". A git process running under 'sudo' runs as
'root' but the 'sudo' command exports the environment variable to record
which id the original user has.
If that is not what you would prefer and want git to only trust
repositories that are owned by root instead, then you can remove
the `SUDO_UID` variable from root's environment before invoking git.


@ -5,11 +5,6 @@ stash.useBuiltin::
is always used. Setting this will emit a warning, to alert any
remaining users that setting this now does nothing.
stash.showIncludeUntracked::
If this is set to true, the `git stash show` command without an
option will show the untracked files of a stash entry. Defaults to
false. See description of 'show' command in linkgit:git-stash[1].
stash.showPatch::
If this is set to true, the `git stash show` command without an
option will show the stash entry in patch form. Defaults to false.


@ -59,16 +59,15 @@ uploadpack.allowFilter::
uploadpackfilter.allow::
Provides a default value for unspecified object filters (see: the
below configuration variable). If set to `true`, this will also
enable all filters which get added in the future.
below configuration variable).
Defaults to `true`.
uploadpackfilter.<filter>.allow::
Explicitly allow or ban the object filter corresponding to
`<filter>`, where `<filter>` may be one of: `blob:none`,
`blob:limit`, `object:type`, `tree`, `sparse:oid`, or `combine`.
If using combined filters, both `combine` and all of the nested
filter kinds must be allowed. Defaults to `uploadpackfilter.allow`.
`blob:limit`, `tree`, `sparse:oid`, or `combine`. If using
combined filters, both `combine` and all of the nested filter
kinds must be allowed. Defaults to `uploadpackfilter.allow`.
uploadpackfilter.tree.maxDepth::
Only allow `--filter=tree:<n>` when `<n>` is no more than the value of


@ -11,7 +11,7 @@ linkgit:git-diff-files[1]
with the `-p` option produces patch text.
You can customize the creation of patch text via the
`GIT_EXTERNAL_DIFF` and the `GIT_DIFF_OPTS` environment variables
(see linkgit:git[1]), and the `diff` attribute (see linkgit:gitattributes[5]).
(see linkgit:git[1]).
What the -p option produces is slightly different from the traditional
diff format:
@ -74,11 +74,6 @@ separate lines indicate the old and the new mode.
rename from b
rename to a
5. Hunk headers mention the name of the function to which the hunk
applies. See "Defining a custom hunk-header" in
linkgit:gitattributes[5] for details of how to tailor to this to
specific languages.
Combined diff format
--------------------


@ -34,7 +34,7 @@ endif::git-diff[]
endif::git-format-patch[]
ifdef::git-log[]
--diff-merges=(off|none|on|first-parent|1|separate|m|combined|c|dense-combined|cc)::
--diff-merges=(off|none|first-parent|1|separate|m|combined|c|dense-combined|cc)::
--no-diff-merges::
Specify diff format to be used for merge commits. Default is
{diff-merges-default} unless `--first-parent` is in use, in which case
@ -45,24 +45,17 @@ ifdef::git-log[]
Disable output of diffs for merge commits. Useful to override
implied value.
+
--diff-merges=on:::
--diff-merges=m:::
-m:::
This option makes diff output for merge commits to be shown in
the default format. `-m` will produce the output only if `-p`
is given as well. The default format could be changed using
`log.diffMerges` configuration parameter, which default value
is `separate`.
+
--diff-merges=first-parent:::
--diff-merges=1:::
This option makes merge commits show the full diff with
respect to the first parent only.
+
--diff-merges=separate:::
--diff-merges=m:::
-m:::
This makes merge commits show the full diff with respect to
each of the parents. Separate log entry and diff is generated
for each parent.
for each parent. `-m` doesn't produce any output without `-p`.
+
--diff-merges=combined:::
--diff-merges=c:::
@ -300,14 +293,11 @@ explained for the configuration variable `core.quotePath` (see
linkgit:git-config[1]).
--name-only::
Show only names of changed files. The file names are often encoded in UTF-8.
For more information see the discussion about encoding in the linkgit:git-log[1]
manual page.
Show only names of changed files.
--name-status::
Show only names and status of changed files. See the description
of the `--diff-filter` option on what the status letters mean.
Just like `--name-only` the file names are often encoded in UTF-8.
--submodule[=<format>]::
Specify how differences in submodules are shown. When specifying


@ -110,11 +110,6 @@ ifndef::git-pull[]
setting `fetch.writeCommitGraph`.
endif::git-pull[]
--prefetch::
Modify the configured refspec to place all refs into the
`refs/prefetch/` namespace. See the `prefetch` task in
linkgit:git-maintenance[1].
-p::
--prune::
Before fetching, remove any remote-tracking references that no


@ -15,7 +15,6 @@ SYNOPSIS
[--whitespace=<option>] [-C<n>] [-p<n>] [--directory=<dir>]
[--exclude=<path>] [--include=<path>] [--reject] [-q | --quiet]
[--[no-]scissors] [-S[<keyid>]] [--patch-format=<format>]
[--quoted-cr=<action>]
[(<mbox> | <Maildir>)...]
'git am' (--continue | --skip | --abort | --quit | --show-current-patch[=(diff|raw)])
@ -60,9 +59,6 @@ OPTIONS
--no-scissors::
Ignore scissors lines (see linkgit:git-mailinfo[1]).
--quoted-cr=<action>::
This flag will be passed down to 'git mailinfo' (see linkgit:git-mailinfo[1]).
-m::
--message-id::
Pass the `-m` flag to 'git mailinfo' (see linkgit:git-mailinfo[1]),


@ -84,13 +84,12 @@ OPTIONS
-3::
--3way::
Attempt 3-way merge if the patch records the identity of blobs it is supposed
to apply to and we have those blobs available locally, possibly leaving the
When the patch does not apply cleanly, fall back on 3-way merge if
the patch records the identity of blobs it is supposed to apply to,
and we have those blobs available locally, possibly leaving the
conflict markers in the files in the working tree for the user to
resolve. This option implies the `--index` option unless the
`--cached` option is used, and is incompatible with the `--reject` option.
When used with the `--cached` option, any conflicts are left at higher stages
in the cache.
resolve. This option implies the `--index` option, and is incompatible
with the `--reject` and the `--cached` options.
--build-fake-ancestor=<file>::
Newer 'git diff' output has embedded 'index information'


@ -35,42 +35,42 @@ OPTIONS
-t::
Instead of the content, show the object type identified by
`<object>`.
<object>.
-s::
Instead of the content, show the object size identified by
`<object>`.
<object>.
-e::
Exit with zero status if `<object>` exists and is a valid
object. If `<object>` is of an invalid format exit with non-zero and
Exit with zero status if <object> exists and is a valid
object. If <object> is of an invalid format exit with non-zero and
emit an error on stderr.
-p::
Pretty-print the contents of `<object>` based on its type.
Pretty-print the contents of <object> based on its type.
<type>::
Typically this matches the real type of `<object>` but asking
Typically this matches the real type of <object> but asking
for a type that can trivially be dereferenced from the given
`<object>` is also permitted. An example is to ask for a
"tree" with `<object>` being a commit object that contains it,
or to ask for a "blob" with `<object>` being a tag object that
<object> is also permitted. An example is to ask for a
"tree" with <object> being a commit object that contains it,
or to ask for a "blob" with <object> being a tag object that
points at it.
--textconv::
Show the content as transformed by a textconv filter. In this case,
`<object>` has to be of the form `<tree-ish>:<path>`, or `:<path>` in
<object> has to be of the form <tree-ish>:<path>, or :<path> in
order to apply the filter to the content recorded in the index at
`<path>`.
<path>.
--filters::
Show the content as converted by the filters configured in
the current working tree for the given `<path>` (i.e. smudge filters,
end-of-line conversion, etc). In this case, `<object>` has to be of
the form `<tree-ish>:<path>`, or `:<path>`.
the current working tree for the given <path> (i.e. smudge filters,
end-of-line conversion, etc). In this case, <object> has to be of
the form <tree-ish>:<path>, or :<path>.
--path=<path>::
For use with `--textconv` or `--filters`, to allow specifying an object
For use with --textconv or --filters, to allow specifying an object
name and a path separately, e.g. when it is difficult to figure out
the revision from which the blob came.
@ -115,15 +115,15 @@ OPTIONS
repository.
--allow-unknown-type::
Allow `-s` or `-t` to query broken/corrupt objects of unknown type.
Allow -s or -t to query broken/corrupt objects of unknown type.
--follow-symlinks::
With `--batch` or `--batch-check`, follow symlinks inside the
With --batch or --batch-check, follow symlinks inside the
repository when requesting objects with extended SHA-1
expressions of the form tree-ish:path-in-tree. Instead of
providing output about the link itself, provide output about
the linked-to object. If a symlink points outside the
tree-ish (e.g. a link to `/foo` or a root-level link to `../foo`),
tree-ish (e.g. a link to /foo or a root-level link to ../foo),
the portion of the link which is outside the tree will be
printed.
+
@ -175,15 +175,15 @@ respectively print:
OUTPUT
------
If `-t` is specified, one of the `<type>`.
If `-t` is specified, one of the <type>.
If `-s` is specified, the size of the `<object>` in bytes.
If `-s` is specified, the size of the <object> in bytes.
If `-e` is specified, no output, unless the `<object>` is malformed.
If `-e` is specified, no output, unless the <object> is malformed.
If `-p` is specified, the contents of `<object>` are pretty-printed.
If `-p` is specified, the contents of <object> are pretty-printed.
If `<type>` is specified, the raw (though uncompressed) contents of the `<object>`
If <type> is specified, the raw (though uncompressed) contents of the <object>
will be returned.
BATCH OUTPUT
@ -200,7 +200,7 @@ object, with placeholders of the form `%(atom)` expanded, followed by a
newline. The available atoms are:
`objectname`::
The full hex representation of the object name.
The 40-hex object name of the object.
`objecttype`::
The type of the object (the same as `cat-file -t` reports).
@ -215,9 +215,8 @@ newline. The available atoms are:
`deltabase`::
If the object is stored as a delta on-disk, this expands to the
full hex representation of the delta base object name.
Otherwise, expands to the null OID (all zeroes). See `CAVEATS`
below.
40-hex sha1 of the delta base object. Otherwise, expands to the
null sha1 (40 zeroes). See `CAVEATS` below.
`rest`::
If this atom is used in the output string, input lines are split
@ -236,14 +235,14 @@ newline.
For example, `--batch` without a custom format would produce:
------------
<oid> SP <type> SP <size> LF
<sha1> SP <type> SP <size> LF
<contents> LF
------------
Whereas `--batch-check='%(objectname) %(objecttype)'` would produce:
------------
<oid> SP <type> LF
<sha1> SP <type> LF
------------
If a name is specified on stdin that cannot be resolved to an object in
@ -259,7 +258,7 @@ If a name is specified that might refer to more than one object (an ambiguous sh
<object> SP ambiguous LF
------------
If `--follow-symlinks` is used, and a symlink in the repository points
If --follow-symlinks is used, and a symlink in the repository points
outside the repository, then `cat-file` will ignore any custom format
and print:
@ -268,11 +267,11 @@ symlink SP <size> LF
<symlink> LF
------------
The symlink will either be absolute (beginning with a `/`), or relative
to the tree root. For instance, if dir/link points to `../../foo`, then
`<symlink>` will be `../foo`. `<size>` is the size of the symlink in bytes.
The symlink will either be absolute (beginning with a /), or relative
to the tree root. For instance, if dir/link points to ../../foo, then
<symlink> will be ../foo. <size> is the size of the symlink in bytes.
If `--follow-symlinks` is used, the following error messages will be
If --follow-symlinks is used, the following error messages will be
displayed:
------------


@ -15,7 +15,7 @@ SYNOPSIS
[--dissociate] [--separate-git-dir <git dir>]
[--depth <depth>] [--[no-]single-branch] [--no-tags]
[--recurse-submodules[=<pathspec>]] [--[no-]shallow-submodules]
[--[no-]remote-submodules] [--jobs <n>] [--sparse] [--[no-]reject-shallow]
[--[no-]remote-submodules] [--jobs <n>] [--sparse]
[--filter=<filter>] [--] <repository>
[<directory>]
@ -149,11 +149,6 @@ objects from the source repository into a pack in the cloned repository.
--no-checkout::
No checkout of HEAD is performed after the clone is complete.
--[no-]reject-shallow::
Fail if the source repository is a shallow repository.
The 'clone.rejectShallow' configuration variable can be used to
specify the default.
--bare::
Make a 'bare' Git repository. That is, instead of
creating `<directory>` and placing the administrative


@ -9,13 +9,12 @@ SYNOPSIS
--------
[verse]
'git commit' [-a | --interactive | --patch] [-s] [-v] [-u<mode>] [--amend]
[--dry-run] [(-c | -C | --squash) <commit> | --fixup [(amend|reword):]<commit>)]
[--dry-run] [(-c | -C | --fixup | --squash) <commit>]
[-F <file> | -m <msg>] [--reset-author] [--allow-empty]
[--allow-empty-message] [--no-verify] [-e] [--author=<author>]
[--date=<date>] [--cleanup=<mode>] [--[no-]status]
[-i | -o] [--pathspec-from-file=<file> [--pathspec-file-nul]]
[(--trailer <token>[(=|:)<value>])...] [-S[<keyid>]]
[--] [<pathspec>...]
[-S[<keyid>]] [--] [<pathspec>...]
DESCRIPTION
-----------
@ -87,44 +86,11 @@ OPTIONS
Like '-C', but with `-c` the editor is invoked, so that
the user can further edit the commit message.
--fixup=[(amend|reword):]<commit>::
Create a new commit which "fixes up" `<commit>` when applied with
`git rebase --autosquash`. Plain `--fixup=<commit>` creates a
"fixup!" commit which changes the content of `<commit>` but leaves
its log message untouched. `--fixup=amend:<commit>` is similar but
creates an "amend!" commit which also replaces the log message of
`<commit>` with the log message of the "amend!" commit.
`--fixup=reword:<commit>` creates an "amend!" commit which
replaces the log message of `<commit>` with its own log message
but makes no changes to the content of `<commit>`.
+
The commit created by plain `--fixup=<commit>` has a subject
composed of "fixup!" followed by the subject line from <commit>,
and is recognized specially by `git rebase --autosquash`. The `-m`
option may be used to supplement the log message of the created
commit, but the additional commentary will be thrown away once the
"fixup!" commit is squashed into `<commit>` by
`git rebase --autosquash`.
+
The commit created by `--fixup=amend:<commit>` is similar but its
subject is instead prefixed with "amend!". The log message of
<commit> is copied into the log message of the "amend!" commit and
opened in an editor so it can be refined. When `git rebase
--autosquash` squashes the "amend!" commit into `<commit>`, the
log message of `<commit>` is replaced by the refined log message
from the "amend!" commit. It is an error for the "amend!" commit's
log message to be empty unless `--allow-empty-message` is
specified.
+
`--fixup=reword:<commit>` is shorthand for `--fixup=amend:<commit>
--only`. It creates an "amend!" commit with only a log message
(ignoring any changes staged in the index). When squashed by `git
rebase --autosquash`, it replaces the log message of `<commit>`
without making any other changes.
+
Neither "fixup!" nor "amend!" commits change authorship of
`<commit>` when applied by `git rebase --autosquash`.
See linkgit:git-rebase[1] for details.
--fixup=<commit>::
Construct a commit message for use with `rebase --autosquash`.
The commit message will be the subject line from the specified
commit with a prefix of "fixup! ". See linkgit:git-rebase[1]
for details.
--squash=<commit>::
Construct a commit message for use with `rebase --autosquash`.
@ -200,17 +166,6 @@ The `-m` option is mutually exclusive with `-c`, `-C`, and `-F`.
include::signoff-option.txt[]
--trailer <token>[(=|:)<value>]::
Specify a (<token>, <value>) pair that should be applied as a
trailer. (e.g. `git commit --trailer "Signed-off-by:C O Mitter \
<committer@example.com>" --trailer "Helped-by:C O Mitter \
<committer@example.com>"` will add the "Signed-off-by" trailer
and the "Helped-by" trailer to the commit message.)
The `trailer.*` configuration variables
(linkgit:git-interpret-trailers[1]) can be used to define if
a duplicated trailer is omitted, where in the run of trailers
each trailer would appear, and other details.
-n::
--no-verify::
This option bypasses the pre-commit and commit-msg hooks.
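A minimal sketch of the `--fixup=amend:` and `--fixup=reword:` forms described above, assuming a Git version that supports them; the commit hash is a placeholder:
------------
$ git commit --fixup=amend:1234abcd    # "amend!" commit: new content plus a refined message
$ git commit --fixup=reword:1234abcd   # "amend!" commit that only rewords the message
$ git rebase -i --autosquash 1234abcd~1
------------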


@ -340,11 +340,6 @@ GIT_CONFIG::
Using the "--global" option forces this to ~/.gitconfig. Using the
"--system" option forces this to $(prefix)/etc/gitconfig.
GIT_CONFIG_GLOBAL::
GIT_CONFIG_SYSTEM::
Take the configuration from the given files instead of the global or
system-level configuration. See linkgit:git[1] for details.
GIT_CONFIG_NOSYSTEM::
Whether to skip reading settings from the system-wide
$(prefix)/etc/gitconfig file. See linkgit:git[1] for details.
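A hedged sketch of how the `GIT_CONFIG_GLOBAL`/`GIT_CONFIG_SYSTEM` overrides might be used (file paths are placeholders; requires a Git version that knows these variables):
------------
$ GIT_CONFIG_GLOBAL=$HOME/alt.gitconfig git config --global user.name 'A U Thor'
$ GIT_CONFIG_GLOBAL=/dev/null GIT_CONFIG_SYSTEM=/dev/null git config --show-origin --list
------------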


@ -159,7 +159,3 @@ empty string.
+
Components which are missing from the URL (e.g., there is no
username in the example above) will be left unset.
GIT
---
Part of the linkgit:git[1] suite


@ -24,18 +24,6 @@ Usage:
[verse]
'git-cvsserver' [<options>] [pserver|server] [<directory> ...]
DESCRIPTION
-----------
This application is a CVS emulation layer for Git.
It is highly functional. However, not all methods are implemented,
and for those methods that are implemented,
not all switches are implemented.
Testing has been done using both the CLI CVS client, and the Eclipse CVS
plugin. Most functionality works fine with both of these clients.
OPTIONS
-------
@ -69,6 +57,18 @@ access still needs to be enabled by the `gitcvs.enabled` config option
unless `--export-all` was given, too.
DESCRIPTION
-----------
This application is a CVS emulation layer for Git.
It is highly functional. However, not all methods are implemented,
and for those methods that are implemented,
not all switches are implemented.
Testing has been done using both the CLI CVS client, and the Eclipse CVS
plugin. Most functionality works fine with both of these clients.
LIMITATIONS
-----------


@ -36,28 +36,11 @@ SYNOPSIS
DESCRIPTION
-----------
Prepare each non-merge commit with its "patch" in
one "message" per commit, formatted to resemble a UNIX mailbox.
Prepare each commit with its patch in
one file per commit, formatted to resemble UNIX mailbox format.
The output of this command is convenient for e-mail submission or
for use with 'git am'.
A "message" generated by the command consists of three parts:
* A brief metadata header that begins with `From <commit>`
with a fixed `Mon Sep 17 00:00:00 2001` datestamp to help programs
like "file(1)" to recognize that the file is an output from this
command, fields that record the author identity, the author date,
and the title of the change (taken from the first paragraph of the
commit log message).
* The second and subsequent paragraphs of the commit log message.
* The "patch", which is the "diff -p --stat" output (see
linkgit:git-diff[1]) between the commit and its parent.
The log message and the patch are separated by a three-dash
line.
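For example, the following sketch (commit hash and file name are placeholders) shows the mailbox-style output described above:
------------
$ git format-patch -1 HEAD
0001-describe-the-change.patch
$ head -1 0001-describe-the-change.patch
From 1234abcd1234abcd1234abcd1234abcd1234abcd Mon Sep 17 00:00:00 2001
------------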
There are two ways to specify which commits to operate on.
1. A single commit, <since>, specifies that the commits leading
@ -238,11 +221,6 @@ populated with placeholder text.
`--subject-prefix` option) has ` v<n>` appended to it. E.g.
`--reroll-count=4` may produce `v4-0001-add-makefile.patch`
file that has "Subject: [PATCH v4 1/20] Add makefile" in it.
`<n>` does not have to be an integer (e.g. "--reroll-count=4.4",
or "--reroll-count=4rev2" are allowed), but the downside of
using such a reroll-count is that the range-diff/interdiff
with the previous version does not state exactly which
version the new iteration is compared against.
--to=<email>::
Add a `To:` header to the email headers. This is in addition
@ -740,14 +718,6 @@ use it only when you know the recipient uses Git to apply your patch.
$ git format-patch -3
------------
CAVEATS
-------
Note that `format-patch` will omit merge commits from the output, even
if they are part of the requested range. A simple "patch" does not
include enough information for the receiving end to reproduce the same
merge commit.
SEE ALSO
--------
linkgit:git-am[1], linkgit:git-send-email[1]


@ -38,6 +38,38 @@ are lists of one or more search expressions separated by newline
characters. An empty string as search expression matches all lines.
CONFIGURATION
-------------
grep.lineNumber::
If set to true, enable `-n` option by default.
grep.column::
If set to true, enable the `--column` option by default.
grep.patternType::
Set the default matching behavior. Using a value of 'basic', 'extended',
'fixed', or 'perl' will enable the `--basic-regexp`, `--extended-regexp`,
`--fixed-strings`, or `--perl-regexp` option accordingly, while the
value 'default' will return to the default matching behavior.
grep.extendedRegexp::
If set to true, enable `--extended-regexp` option by default. This
option is ignored when the `grep.patternType` option is set to a value
other than 'default'.
grep.threads::
Number of grep worker threads to use. If unset (or set to 0), Git will
use as many threads as the number of logical cores available.
grep.fullName::
If set to true, enable `--full-name` option by default.
grep.fallbackToNoIndex::
If set to true, fall back to git grep --no-index if git grep
is executed outside of a git repository. Defaults to false.
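For example, the configuration variables above can be set like this (an illustrative sketch):
------------
$ git config --global grep.lineNumber true
$ git config --global grep.patternType extended
$ git grep -e 'TODO|FIXME' -- '*.c'
------------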
OPTIONS
-------
--cached::
@ -331,38 +363,6 @@ with multiple threads might perform slower than single threaded if `--textconv`
is given and there're too many text conversions. So if you experience low
performance in this case, it might be desirable to use `--threads=1`.
CONFIGURATION
-------------
grep.lineNumber::
If set to true, enable `-n` option by default.
grep.column::
If set to true, enable the `--column` option by default.
grep.patternType::
Set the default matching behavior. Using a value of 'basic', 'extended',
'fixed', or 'perl' will enable the `--basic-regexp`, `--extended-regexp`,
`--fixed-strings`, or `--perl-regexp` option accordingly, while the
value 'default' will return to the default matching behavior.
grep.extendedRegexp::
If set to true, enable `--extended-regexp` option by default. This
option is ignored when the `grep.patternType` option is set to a value
other than 'default'.
grep.threads::
Number of grep worker threads to use. If unset (or set to 0), Git will
use as many threads as the number of logical cores available.
grep.fullName::
If set to true, enable `--full-name` option by default.
grep.fallbackToNoIndex::
If set to true, fall back to git grep --no-index if git grep
is executed outside of a git repository. Defaults to false.
GIT
---
Part of the linkgit:git[1] suite


@ -232,38 +232,25 @@ trailer.<token>.ifmissing::
that option for trailers with the specified <token>.
trailer.<token>.command::
This option behaves in the same way as 'trailer.<token>.cmd', except
that it doesn't pass anything as argument to the specified command.
Instead the first occurrence of substring $ARG is replaced by the
value that would be passed as argument.
This option can be used to specify a shell command that will
be called to automatically add or modify a trailer with the
specified <token>.
+
The 'trailer.<token>.command' option has been deprecated in favor of
'trailer.<token>.cmd' due to the fact that $ARG in the user's command is
only replaced once and that the original way of replacing $ARG is not safe.
When this option is specified, the behavior is as if a special
'<token>=<value>' argument were added at the beginning of the command
line, where <value> is taken to be the standard output of the
specified command with any leading and trailing whitespace trimmed
off.
+
When both 'trailer.<token>.cmd' and 'trailer.<token>.command' are given
for the same <token>, 'trailer.<token>.cmd' is used and
'trailer.<token>.command' is ignored.
trailer.<token>.cmd::
This option can be used to specify a shell command that will be called:
once to automatically add a trailer with the specified <token>, and then
each time a '--trailer <token>=<value>' argument is given, to modify the <value> of
the trailer that this option would produce.
If the command contains the `$ARG` string, this string will be
replaced with the <value> part of an existing trailer with the same
<token>, if any, before the command is launched.
+
When the specified command is first called to add a trailer
with the specified <token>, the behavior is as if a special
'--trailer <token>=<value>' argument was added at the beginning
of the "git interpret-trailers" command, where <value>
is taken to be the standard output of the command with any
leading and trailing whitespace trimmed off.
+
If some '--trailer <token>=<value>' arguments are also passed
on the command line, the command is called again once for each
of these arguments with the same <token>. And the <value> part
of these arguments, if any, will be passed to the command as its
first argument. This way the command can produce a <value> computed
from the <value> passed in the '--trailer <token>=<value>' argument.
If some '<token>=<value>' arguments are also passed on the command
line, when a 'trailer.<token>.command' is configured, the command will
also be executed for each of these arguments. And the <value> part of
these arguments, if any, will be used to replace the `$ARG` string in
the command.
EXAMPLES
--------
@ -346,55 +333,6 @@ subject
Fix #42
------------
* Configure a 'help' trailer with a cmd that uses a script `glog-find-author`,
which searches the git log of the repository for the given author identity,
and show how it works:
+
------------
$ cat ~/bin/glog-find-author
#!/bin/sh
test -n "$1" && git log --author="$1" --pretty="%an <%ae>" -1 || true
$ git config trailer.help.key "Helped-by: "
$ git config trailer.help.ifExists "addIfDifferentNeighbor"
$ git config trailer.help.cmd "~/bin/glog-find-author"
$ git interpret-trailers --trailer="help:Junio" --trailer="help:Couder" <<EOF
> subject
>
> message
>
> EOF
subject
message
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Christian Couder <christian.couder@gmail.com>
------------
* Configure a 'ref' trailer with a cmd that uses a script `glog-grep`
to grep the last relevant commit from the git log of the repository,
and show how it works:
+
------------
$ cat ~/bin/glog-grep
#!/bin/sh
test -n "$1" && git log --grep "$1" --pretty=reference -1 || true
$ git config trailer.ref.key "Reference-to: "
$ git config trailer.ref.ifExists "replace"
$ git config trailer.ref.cmd "~/bin/glog-grep"
$ git interpret-trailers --trailer="ref:Add copyright notices." <<EOF
> subject
>
> message
>
> EOF
subject
message
Reference-to: 8bc9a0c769 (Add copyright notices., 2005-04-07)
------------
* Configure a 'see' trailer with a command to show the subject of a
commit that is related, and show how it works:
+


@ -9,9 +9,7 @@ git-mailinfo - Extracts patch and authorship from a single e-mail message
SYNOPSIS
--------
[verse]
'git mailinfo' [-k|-b] [-u | --encoding=<encoding> | -n]
[--[no-]scissors] [--quoted-cr=<action>]
<msg> <patch>
'git mailinfo' [-k|-b] [-u | --encoding=<encoding> | -n] [--[no-]scissors] <msg> <patch>
DESCRIPTION
@ -91,23 +89,6 @@ This can be enabled by default with the configuration option mailinfo.scissors.
--no-scissors::
Ignore scissors lines. Useful for overriding mailinfo.scissors settings.
--quoted-cr=<action>::
Action to take when processing email messages sent with base64 or
quoted-printable encoding, where the decoded lines end with a CRLF
instead of a simple LF.
+
The valid actions are:
+
--
* `nowarn`: Git will do nothing when such a CRLF is found.
* `warn`: Git will issue a warning for each message if such a CRLF is
found.
* `strip`: Git will convert those CRLF to LF.
--
+
The default action can be set with the configuration option `mailinfo.quotedCR`.
If no such configuration option has been set, `warn` will be used.
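In Git versions that carry this option, it might be exercised as follows (the mbox path is a placeholder):
------------
$ git config mailinfo.quotedCR strip
$ git am --quoted-cr=nowarn patches.mbox
------------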
<msg>::
The commit log message extracted from the e-mail, usually
excluding the title line, which comes from the e-mail Subject.


@ -92,8 +92,10 @@ commit-graph::
prefetch::
The `prefetch` task updates the object directory with the latest
objects from all registered remotes. For each remote, a `git fetch`
command is run. The configured refspec is modified to place all
requested refs within `refs/prefetch/`. Also, tags are not updated.
command is run. The refmap is custom to avoid updating local or remote
branches (those in `refs/heads` or `refs/remotes`). Instead, the
remote refs are stored in `refs/prefetch/<remote>/`. Also, tags are
not updated.
+
This is done to avoid disrupting the remote-tracking branches. The end users
expect these refs to stay unmoved unless they initiate a fetch. With prefetch


@ -11,6 +11,14 @@ SYNOPSIS
[verse]
'git mktag'
OPTIONS
-------
--strict::
By default mktag turns on the equivalent of
linkgit:git-fsck[1] `--strict` mode. Use `--no-strict` to
disable it.
DESCRIPTION
-----------
@ -37,14 +45,6 @@ the appropriate `fsck.<msg-id>` variable:
git -c fsck.extraHeaderEntry=ignore mktag <my-tag-with-headers
OPTIONS
-------
--strict::
By default mktag turns on the equivalent of
linkgit:git-fsck[1] `--strict` mode. Use `--no-strict` to
disable it.
Tag Format
----------
A tag signature file, to be fed to this command's standard input,


@ -9,8 +9,7 @@ git-multi-pack-index - Write and verify multi-pack-indexes
SYNOPSIS
--------
[verse]
'git multi-pack-index' [--object-dir=<dir>] [--[no-]progress]
[--preferred-pack=<pack>] <subcommand>
'git multi-pack-index' [--object-dir=<dir>] [--[no-]progress] <subcommand>
DESCRIPTION
-----------
@ -31,16 +30,7 @@ OPTIONS
The following subcommands are available:
write::
Write a new MIDX file. The following options are available for
the `write` sub-command:
+
--
--preferred-pack=<pack>::
Optionally specify the tie-breaking pack used when
multiple packs contain the same object. If not given,
ties are broken in favor of the pack with the lowest
mtime.
--
Write a new MIDX file.
verify::
Verify the contents of the MIDX file.
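A sketch of the `write` option discussed above, assuming a Git version with `--preferred-pack` support; the pack name is a placeholder:
------------
$ git multi-pack-index write --preferred-pack=pack-1234abcd.pack
$ git multi-pack-index verify
------------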


@ -762,7 +762,3 @@ IMPLEMENTATION DETAILS
message indicating the p4 depot location and change number. This
line is used by later 'git p4 sync' operations to know which p4
changes are new.
GIT
---
Part of the linkgit:git[1] suite


@ -85,16 +85,6 @@ base-name::
reference was included in the resulting packfile. This
can be useful to send new tags to native Git clients.
--stdin-packs::
Read the basenames of packfiles (e.g., `pack-1234abcd.pack`)
from the standard input, instead of object names or revision
arguments. The resulting pack contains all objects listed in the
included packs (those not beginning with `^`), excluding any
objects listed in the excluded packs (beginning with `^`).
+
Incompatible with `--revs`, or options that imply `--revs` (such as
`--all`), with the exception of `--unpacked`, which is compatible.
--window=<n>::
--depth=<n>::
These two options affect how the objects contained in


@ -600,7 +600,7 @@ EXAMPLES
`git push origin`::
Without additional configuration, pushes the current branch to
the configured upstream (`branch.<name>.merge` configuration
the configured upstream (`remote.origin.merge` configuration
variable) if it has the same name as the current branch, and
errors out without pushing otherwise.
+


@ -200,6 +200,12 @@ Alternatively, you can undo the 'git rebase' with
git rebase --abort
CONFIGURATION
-------------
include::config/rebase.txt[]
include::config/sequencer.txt[]
OPTIONS
-------
--onto <newbase>::
@ -587,17 +593,16 @@ See also INCOMPATIBLE OPTIONS below.
--autosquash::
--no-autosquash::
When the commit log message begins with "squash! ..." or "fixup! ..."
or "amend! ...", and there is already a commit in the todo list that
matches the same `...`, automatically modify the todo list of
`rebase -i`, so that the commit marked for squashing comes right after
the commit to be modified, and change the action of the moved commit
from `pick` to `squash` or `fixup` or `fixup -C` respectively. A commit
matches the `...` if the commit subject matches, or if the `...` refers
to the commit's hash. As a fall-back, partial matches of the commit
subject work, too. The recommended way to create fixup/amend/squash
commits is by using the `--fixup`, `--fixup=amend:` or `--fixup=reword:`
and `--squash` options respectively of linkgit:git-commit[1].
When the commit log message begins with "squash! ..." (or
"fixup! ..."), and there is already a commit in the todo list that
matches the same `...`, automatically modify the todo list of rebase
-i so that the commit marked for squashing comes right after the
commit to be modified, and change the action of the moved commit
from `pick` to `squash` (or `fixup`). A commit matches the `...` if
the commit subject matches, or if the `...` refers to the commit's
hash. As a fall-back, partial matches of the commit subject work,
too. The recommended way to create fixup/squash commits is by using
the `--fixup`/`--squash` options of linkgit:git-commit[1].
+
If the `--autosquash` option is enabled by default using the
configuration variable `rebase.autoSquash`, this option can be
@ -617,14 +622,6 @@ See also INCOMPATIBLE OPTIONS below.
--no-reschedule-failed-exec::
Automatically reschedule `exec` commands that failed. This only makes
sense in interactive mode (or when an `--exec` option was provided).
+
Even though this option applies once a rebase is started, it's set for
the whole rebase at the start based on either the
`rebase.rescheduleFailedExec` configuration (see linkgit:git-config[1]
or "CONFIGURATION" below) or whether this option is
provided. Otherwise an explicit `--no-reschedule-failed-exec` at the
start would be overridden by the presence of
`rebase.rescheduleFailedExec=true` configuration.
INCOMPATIBLE OPTIONS
--------------------
@ -890,17 +887,9 @@ If you want to fold two or more commits into one, replace the command
"pick" for the second and subsequent commits with "squash" or "fixup".
If the commits had different authors, the folded commit will be
attributed to the author of the first commit. The suggested commit
message for the folded commit is the concatenation of the first
commit's message with those identified by "squash" commands, omitting the
messages of commits identified by "fixup" commands, unless "fixup -c"
is used. In that case the suggested commit message is only the message
of the "fixup -c" commit, and an editor is opened allowing you to edit
the message. The contents (patch) of the "fixup -c" commit are still
incorporated into the folded commit. If there is more than one "fixup -c"
commit, the message from the final one is used. You can also use
"fixup -C" to get the same behavior as "fixup -c" except without opening
an editor.
message for the folded commit is the concatenation of the commit
messages of the first commit and of those with the "squash" command,
but omits the commit messages of commits with the "fixup" command.
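A minimal sketch of the fixup/autosquash flow described above; the commit hash is a placeholder:
------------
$ git commit --fixup=1234abcd     # records a "fixup! <subject>" commit
$ git config rebase.autoSquash true
$ git rebase -i 1234abcd~1        # todo list arrives pre-ordered with "fixup"
------------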
'git rebase' will stop when "pick" has been replaced with "edit" or
when a command fails due to merge errors. When you are done editing
@ -1268,12 +1257,6 @@ merge tlsv1.3
merge cmake
------------
CONFIGURATION
-------------
include::config/rebase.txt[]
include::config/sequencer.txt[]
BUGS
----
The todo list presented by the deprecated `--preserve-merges --interactive`


@ -165,29 +165,6 @@ depth is 4095.
Pass the `--delta-islands` option to `git-pack-objects`, see
linkgit:git-pack-objects[1].
-g=<factor>::
--geometric=<factor>::
Arrange resulting pack structure so that each successive pack
contains at least `<factor>` times the number of objects as the
next-largest pack.
+
`git repack` ensures this by determining a "cut" of packfiles that need
to be repacked into one in order to ensure a geometric progression. It
picks the smallest set of packfiles such that as many of the larger
packfiles (by count of objects contained in each pack) as possible may be
left intact.
+
Unlike other repack modes, the set of objects to pack is determined
uniquely by the set of packs being "rolled-up"; in other words, the
packs determined to need to be combined in order to restore a geometric
progression.
+
When `--unpacked` is specified, loose objects are implicitly included in
this "roll-up", without respect to their reachability. This is subject
to change in the future. This option (implying a drastically different
repack mode) is not guaranteed to work with all other combinations of
option to `git repack`.
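For example, a geometric repack with factor 2 that also drops the now-redundant packs might look like this (a sketch, assuming a Git version that supports `--geometric`):
------------
$ git repack --geometric=2 -d
------------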
CONFIGURATION
-------------


@ -23,9 +23,7 @@ branch, and no updates to their contents can be staged in the index,
though that default behavior can be overridden with the `-f` option.
When `--cached` is given, the staged content has to
match either the tip of the branch or the file on disk,
allowing the file to be removed from just the index. When
sparse-checkouts are in use (see linkgit:git-sparse-checkout[1]),
`git rm` will only remove paths within the sparse-checkout patterns.
allowing the file to be removed from just the index.
OPTIONS


@ -45,20 +45,6 @@ To avoid interfering with other worktrees, it first enables the
When `--cone` is provided, the `core.sparseCheckoutCone` setting is
also set, allowing for better performance with a limited set of
patterns (see 'CONE PATTERN SET' below).
+
Use the `--[no-]sparse-index` option to toggle the use of the sparse
index format. This reduces the size of the index to be more closely
aligned with your sparse-checkout definition. This can have significant
performance advantages for commands such as `git status` or `git add`.
This feature is still experimental. Some commands might be slower with
a sparse index until they are properly integrated with the feature.
+
**WARNING:** Using a sparse index requires modifying the index in a way
that is not completely understood by external tools. If you have trouble
with this compatibility, then run `git sparse-checkout init --no-sparse-index`
to rewrite your index to not be sparse. Older versions of Git will not
understand the sparse directory entries index extension and may fail to
interact with your repository until it is disabled.
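An illustrative invocation, with placeholder directory names:
------------
$ git sparse-checkout init --cone --sparse-index
$ git sparse-checkout set src docs
------------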
'set'::
Write a set of patterns to the sparse-checkout file, as given as


@ -9,7 +9,7 @@ SYNOPSIS
--------
[verse]
'git stash' list [<log-options>]
'git stash' show [-u|--include-untracked|--only-untracked] [<diff-options>] [<stash>]
'git stash' show [<diff-options>] [<stash>]
'git stash' drop [-q|--quiet] [<stash>]
'git stash' ( pop | apply ) [--index] [-q|--quiet] [<stash>]
'git stash' branch <branchname> [<stash>]
@ -83,7 +83,7 @@ stash@{1}: On master: 9cc0589... Add git-stash
The command takes options applicable to the 'git log'
command to control what is shown and how. See linkgit:git-log[1].
show [-u|--include-untracked|--only-untracked] [<diff-options>] [<stash>]::
show [<diff-options>] [<stash>]::
Show the changes recorded in the stash entry as a diff between the
stashed contents and the commit back when the stash entry was first
@ -91,8 +91,8 @@ show [-u|--include-untracked|--only-untracked] [<diff-options>] [<stash>]::
By default, the command shows the diffstat, but it will accept any
format known to 'git diff' (e.g., `git stash show -p stash@{1}`
to view the second most recent entry in patch form).
You can use stash.showIncludeUntracked, stash.showStat, and
stash.showPatch config variables to change the default behavior.
You can use stash.showStat and/or stash.showPatch config variables
to change the default behavior.
pop [--index] [-q|--quiet] [<stash>]::
@ -160,18 +160,10 @@ up with `git clean`.
-u::
--include-untracked::
--no-include-untracked::
When used with the `push` and `save` commands,
all untracked files are also stashed and then cleaned up with
`git clean`.
This option is only valid for `push` and `save` commands.
+
When used with the `show` command, show the untracked files in the stash
entry as part of the diff.
--only-untracked::
This option is only valid for the `show` command.
+
Show only the untracked files in the stash entry as part of the diff.
All untracked files are also stashed and then cleaned up with
`git clean`.
--index::
This option is only valid for `pop` and `apply` commands.
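A sketch of the untracked-related options described above, assuming a Git version that supports them:
------------
$ git stash push --include-untracked -m 'wip'
$ git stash show --include-untracked -p stash@{0}
$ git stash show --only-untracked stash@{0}
------------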


@ -1061,6 +1061,25 @@ with different name spaces. For example:
branches = stable/*:refs/remotes/svn/stable/*
branches = debug/*:refs/remotes/svn/debug/*
BUGS
----
We ignore all SVN properties except svn:executable. Any unhandled
properties are logged to $GIT_DIR/svn/<refname>/unhandled.log
Renamed and copied directories are not detected by Git and hence not
tracked when committing to SVN. I do not plan on adding support for
this as it's quite difficult and time-consuming to get working for all
the possible corner cases (Git doesn't do it, either). Committing
renamed and copied files is fully supported if they're similar enough
for Git to detect them.
In SVN, it is possible (though discouraged) to commit changes to a tag
(because a tag is just a directory copy, thus technically the same as a
branch). When cloning an SVN repository, 'git svn' cannot know if such a
commit to a tag will happen in the future. Thus it acts conservatively
and imports all SVN tags as branches, prefixing the tag name with 'tags/'.
CONFIGURATION
-------------
@ -1147,25 +1166,6 @@ $GIT_DIR/svn/\**/.rev_map.*::
if it is missing or not up to date. 'git svn reset' automatically
rewinds it.
BUGS
----
We ignore all SVN properties except svn:executable. Any unhandled
properties are logged to $GIT_DIR/svn/<refname>/unhandled.log
Renamed and copied directories are not detected by Git and hence not
tracked when committing to SVN. I do not plan on adding support for
this as it's quite difficult and time-consuming to get working for all
the possible corner cases (Git doesn't do it, either). Committing
renamed and copied files is fully supported if they're similar enough
for Git to detect them.
In SVN, it is possible (though discouraged) to commit changes to a tag
(because a tag is just a directory copy, thus technically the same as a
branch). When cloning an SVN repository, 'git svn' cannot know if such a
commit to a tag will happen in the future. Thus it acts conservatively
and imports all SVN tags as branches, prefixing the tag name with 'tags/'.
SEE ALSO
--------
linkgit:git-rebase[1]


@ -13,7 +13,7 @@ SYNOPSIS
[--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p|--paginate|-P|--no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
[--super-prefix=<path>] [--config-env=<name>=<envvar>]
[--super-prefix=<path>] [--config-env <name>=<envvar>]
<command> [<args>]
DESCRIPTION
@ -670,16 +670,6 @@ for further details.
If this environment variable is set to `0`, git will not prompt
on the terminal (e.g., when asking for HTTP authentication).
`GIT_CONFIG_GLOBAL`::
`GIT_CONFIG_SYSTEM`::
Take the configuration from the given files instead of the global or
system-level configuration files. If `GIT_CONFIG_SYSTEM` is set, the
system config file defined at build time (usually `/etc/gitconfig`)
will not be read. Likewise, if `GIT_CONFIG_GLOBAL` is set, neither
`$HOME/.gitconfig` nor `$XDG_CONFIG_HOME/git/config` will be read. Can
be set to `/dev/null` to skip reading configuration files of the
respective level.
`GIT_CONFIG_NOSYSTEM`::
Whether to skip reading settings from the system-wide
`$(prefix)/etc/gitconfig` file. This environment variable can


@ -845,8 +845,6 @@ patterns are available:
- `rust` suitable for source code in the Rust language.
- `scheme` suitable for source code in the Scheme language.
- `tex` suitable for source code for LaTeX documents.
@ -1176,8 +1174,7 @@ tag then no replacement will be done. The placeholders are the same
as those for the option `--pretty=format:` of linkgit:git-log[1],
except that they need to be wrapped like this: `$Format:PLACEHOLDERS$`
in the file. E.g. the string `$Format:%H$` will be replaced by the
commit hash. However, only one `%(describe)` placeholder is expanded
per archive to avoid denial-of-service attacks.
commit hash.
Packing objects
@ -1247,12 +1244,6 @@ to:
[attr]binary -diff -merge -text
------------
NOTES
-----
Git does not follow symbolic links when accessing a `.gitattributes`
file in the working tree. This keeps behavior consistent when the file
is accessed from the index or a tree versus from the filesystem.
EXAMPLES
--------


@ -187,7 +187,7 @@ mark a file pair as a rename and stop considering other candidates for
better matches. At most, one comparison is done per file in this
preliminary pass; so if there are several remaining ext.txt files
throughout the directory hierarchy after exact rename detection, this
preliminary step may be skipped for those files.
preliminary step will be skipped for those files.
Note. When the "-C" option is used with `--find-copies-harder`
option, 'git diff-{asterisk}' commands feed unmodified filepairs to


@ -138,7 +138,7 @@ given); `template` (if a `-t` option was given or the
configuration option `commit.template` is set); `merge` (if the
commit is a merge or a `.git/MERGE_MSG` file exists); `squash`
(if a `.git/SQUASH_MSG` file exists); or `commit`, followed by
a commit object name (if a `-c`, `-C` or `--amend` option was given).
a commit SHA-1 (if a `-c`, `-C` or `--amend` option was given).
If the exit status is non-zero, `git commit` will abort.
@ -231,19 +231,19 @@ named remote is not being used both values will be the same.
Information about what is to be pushed is provided on the hook's standard
input with lines of the form:
<local ref> SP <local object name> SP <remote ref> SP <remote object name> LF
<local ref> SP <local sha1> SP <remote ref> SP <remote sha1> LF
For instance, if the command +git push origin master:foreign+ were run the
hook would receive a line like the following:
refs/heads/master 67890 refs/heads/foreign 12345
although the full object name would be supplied. If the foreign ref does not
yet exist the `<remote object name>` will be the all-zeroes object name. If a
ref is to be deleted, the `<local ref>` will be supplied as `(delete)` and the
`<local object name>` will be the all-zeroes object name. If the local commit
was specified by something other than a name which could be expanded (such as
`HEAD~`, or an object name) it will be supplied as it was originally given.
although the full, 40-character SHA-1s would be supplied. If the foreign ref
does not yet exist the `<remote SHA-1>` will be 40 `0`. If a ref is to be
deleted, the `<local ref>` will be supplied as `(delete)` and the `<local
SHA-1>` will be 40 `0`. If the local commit was specified by something other
than a name which could be expanded (such as `HEAD~`, or a SHA-1) it will be
supplied as it was originally given.
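A minimal pre-push hook sketch built on the input format above; it refuses pushes that delete a remote ref (the all-zeroes trick mirrors Git's sample hooks, everything else is illustrative):
------------
#!/bin/sh
# stdin: "<local ref> SP <local oid> SP <remote ref> SP <remote oid>" per ref
zero=$(git hash-object --stdin </dev/null | tr '0-9a-f' '0')
while read local_ref local_oid remote_ref remote_oid
do
	if test "$local_oid" = "$zero"
	then
		echo >&2 "pre-push: refusing to delete $remote_ref"
		exit 1
	fi
done
exit 0
------------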
If this hook exits with a non-zero status, `git push` will abort without
pushing anything. Information about why the push is rejected may be sent
@ -268,7 +268,7 @@ input a line of the format:
where `<old-value>` is the old object name stored in the ref,
`<new-value>` is the new object name to be stored in the ref and
`<ref-name>` is the full name of the ref.
When creating a new ref, `<old-value>` is the all-zeroes object name.
When creating a new ref, `<old-value>` is 40 `0`.
If the hook exits with non-zero status, none of the refs will be
updated. If the hook exits with zero, updating of individual refs can
@ -473,8 +473,7 @@ reference-transaction
This hook is invoked by any Git command that performs reference
updates. It executes whenever a reference transaction is prepared,
committed or aborted and may thus get called multiple times. The hook
does not cover symbolic references (but that may change in the future).
committed or aborted and may thus get called multiple times.
The hook takes exactly one argument, which is the current state the
given reference transaction is in:
@ -493,14 +492,6 @@ receives on standard input a line of the format:
<old-value> SP <new-value> SP <ref-name> LF
where `<old-value>` is the old object name passed into the reference
transaction, `<new-value>` is the new object name to be stored in the
ref and `<ref-name>` is the full name of the ref. When force updating
the reference regardless of its current value or when the reference is
to be created anew, `<old-value>` is the all-zeroes object name. To
distinguish these cases, you can inspect the current value of
`<ref-name>` via `git rev-parse`.
The exit status of the hook is ignored for any state except for the
"prepared" state. In the "prepared" state, a non-zero exit status will
cause the transaction to be aborted. The hook will not be called with
@ -559,7 +550,7 @@ command-dependent arguments may be passed in the future.
The hook receives a list of the rewritten commits on stdin, in the
format
<old-object-name> SP <new-object-name> [ SP <extra-info> ] LF
<old-sha1> SP <new-sha1> [ SP <extra-info> ] LF
The 'extra-info' is again command-dependent. If it is empty, the
preceding SP is also omitted. Currently, no commands pass any
@ -575,7 +566,7 @@ rebase::
For the 'squash' and 'fixup' operation, all commits that were
squashed are listed as being rewritten to the squashed commit.
This means that there will be several lines sharing the same
'new-object-name'.
'new-sha1'.
+
The commits are guaranteed to be listed in the order that they were
processed by rebase.


@ -149,15 +149,11 @@ not tracked by Git remain untracked.
To stop tracking a file that is currently tracked, use
'git rm --cached'.
Git does not follow symbolic links when accessing a `.gitignore` file in
the working tree. This keeps behavior consistent when the file is
accessed from the index or a tree versus from the filesystem.
EXAMPLES
--------
- The pattern `hello.*` matches any file or folder
whose name begins with `hello.`. If one wants to restrict
whose name begins with `hello`. If one wants to restrict
this only to the directory and not in its subdirectories,
one can prepend the pattern with a slash, i.e. `/hello.*`;
the pattern now matches `hello.txt`, `hello.c` but not


@ -55,13 +55,6 @@ this would also match the 'Commit Name <commit&#64;email.xx>' above:
Proper Name <proper@email.xx> CoMmIt NaMe <CoMmIt@EmAiL.xX>
--
NOTES
-----
Git does not follow symbolic links when accessing a `.mailmap` file in
the working tree. This keeps behavior consistent when the file is
accessed from the index or a tree versus from the filesystem.
EXAMPLES
--------


@ -98,14 +98,6 @@ submodule.<name>.shallow::
shallow clone (with a history depth of 1) unless the user explicitly
asks for a non-shallow clone.
NOTES
-----
Git does not allow the `.gitmodules` file within a working tree to be a
symbolic link, and will refuse to check out such a tree entry. This
keeps behavior consistent when the file is accessed from the index or a
tree versus from the filesystem, and helps Git reliably enforce security
checks of the file contents.
EXAMPLES
--------


@ -62,7 +62,3 @@ git clone ext::'git --namespace=foo %s /tmp/prefixed.git'
----------
include::transfer-data-leaks.txt[]
GIT
---
Part of the linkgit:git[1] suite


@ -751,17 +751,6 @@ default font sizes or lineheights are changed (e.g. via adding extra
CSS stylesheet in `@stylesheets`), it may be appropriate to change
these values.
email-privacy::
Redact e-mail addresses from the generated HTML, etc. content.
This obscures e-mail addresses retrieved from the author/committer
and comment sections of the Git log.
It is meant to hinder web crawlers that harvest and abuse addresses.
Such crawlers may not respect robots.txt.
Note that users and user tools also see the addresses as redacted.
If Gitweb is not the final step in a workflow then subsequent steps
may misbehave because of the redacted information they receive.
Disabled by default.
highlight::
Server-side syntax highlight support in "blob" view. It requires
`$highlight_bin` program to be available (see the description of


@ -1,131 +0,0 @@
Content-type: text/asciidoc
Abstract: When a critical vulnerability is discovered and fixed, we follow this
script to coordinate a public release.
How we coordinate embargoed releases
====================================
To protect Git users from critical vulnerabilities, we do not just release
fixed versions like regular maintenance releases. Instead, we coordinate
releases with packagers, keeping the fixes under an embargo until the release
date. That way, users will have a chance to upgrade on that date, no matter
what Operating System or distribution they run.
Open a Security Advisory draft
------------------------------
The first step is to https://github.com/git/git/security/advisories/new[open an
advisory]. Technically, it is not necessary, but it is convenient and saves a
bit of hassle. This advisory can also be used to obtain the CVE number and it
will give us a private fork associated with it that can be used to collaborate
on a fix.
Release date of the embargoed version
-------------------------------------
If the vulnerability affects Windows users, we want to have our friends over at
Visual Studio on board. This means we need to target a "Patch Tuesday" (i.e. a
second Tuesday of the month), at the minimum three weeks from heads-up to
coordinated release.
If the vulnerability affects the server side, or can benefit from scans on the
server side (i.e. if `git fsck` can detect an attack), it is important to give
all involved Git repository hosting sites enough time to scan all of those
repositories.
Notifying the Linux distributions
---------------------------------
At most two weeks before the release date, we need to send a notification to
distros@vs.openwall.org, preferably less than 7 days before the release date.
This will reach most (all?) Linux distributions. See an example below, and the
guidelines for this mailing list at
https://oss-security.openwall.org/wiki/mailing-lists/distros#how-to-use-the-lists[here].
Once the version has been published, we send a note about that to oss-security.
As an example, see https://www.openwall.com/lists/oss-security/2019/12/13/1[the
v2.24.1 mail];
https://oss-security.openwall.org/wiki/mailing-lists/oss-security[Here] are
their guidelines.
The mail to oss-security should also describe the exploit, and give credit to
the reporter(s): security researchers still receive too little respect for the
invaluable service they provide, and public credit goes a long way to keep them
paid by their respective organizations.
Technically, describing any exploit can be delayed for up to 7 days, but we
usually refrain from doing that and include it right away.
As a courtesy we typically attach a Git bundle (as `.tar.xz` because the list
will drop `.bundle` attachments) in the mail to distros@ so that the involved
parties can take care of integrating/backporting them. This bundle is typically
created using a command like this:
git bundle create cve-xxx.bundle ^origin/master vA.B.C vD.E.F
tar cJvf cve-xxx.bundle.tar.xz cve-xxx.bundle
Example mail to distros@vs.openwall.org
---------------------------------------
....
To: distros@vs.openwall.org
Cc: git-security@googlegroups.com, <other people involved in the report/fix>
Subject: [vs] Upcoming Git security fix release
Team,
The Git project will release new versions on <date> at 10am Pacific Time or
soon thereafter. I have attached a Git bundle (embedded in a `.tar.xz` to avoid
it being dropped) which you can fetch into a clone of
https://github.com/git/git via `git fetch --tags /path/to/cve-xxx.bundle`,
containing the tags for versions <versions>.
You can verify with `git tag -v <tag>` that the versions were signed by
the Git maintainer, using the same GPG key as e.g. v2.24.0.
Please use these tags to prepare `git` packages for your various
distributions, using the appropriate tagged versions. The added test cases
help verify the correctness.
The addressed issues are:
<list of CVEs with a short description, typically copy/pasted from Git's
release notes, usually demo exploit(s), too>
Credit for finding the vulnerability goes to <reporter>, credit for fixing
it goes to <developer>.
Thanks,
<name>
....
Example mail to oss-security@lists.openwall.com
-----------------------------------------------
....
To: oss-security@lists.openwall.com
Cc: git-security@googlegroups.com, <other people involved in the report/fix>
Subject: git: <copy from security advisory>
Team,
The Git project released new versions on <date>, addressing <CVE>.
All supported platforms are affected in one way or another, and all Git
versions all the way back to <version> are affected. The fixed versions are:
<versions>.
Link to the announcement: <link to lore.kernel.org/git>
We highly recommend to upgrade.
The addressed issues are:
* <list of CVEs and their explanations, along with demo exploits>
Credit for finding the vulnerability goes to <reporter>, credit for fixing
it goes to <developer>.
Thanks,
<name>
....


@ -1,67 +1,71 @@
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use Getopt::Long;
# Parse arguments, a simple state machine for input like:
#
# howto/*.txt config/*.txt --section=1 git.txt git-add.txt [...] --to-lint git-add.txt a-file.txt [...]
my %TXT;
my %SECTION;
my $section;
my $lint_these = 0;
for my $arg (@ARGV) {
if (my ($sec) = $arg =~ /^--section=(\d+)$/s) {
$section = $sec;
next;
}
my $basedir = ".";
GetOptions("basedir=s" => \$basedir)
or die("Cannot parse command line arguments\n");
my ($name) = $arg =~ /^(.*?)\.txt$/s;
unless (defined $section) {
$TXT{$name} = $arg;
next;
}
my $found_errors = 0;
$SECTION{$name} = $section;
}
my $exit_code = 0;
sub report {
my ($pos, $line, $target, $msg) = @_;
substr($line, $pos) = "' <-- HERE";
$line =~ s/^\s+//;
print "$ARGV:$.: error: $target: $msg, shown with 'HERE' below:\n";
print "$ARGV:$.:\t'$line\n";
$exit_code = 1;
my ($where, $what, $error) = @_;
print "$where: $error: $what\n";
$found_errors = 1;
}
@ARGV = sort values %TXT;
die "BUG: Nothing to process!" unless @ARGV;
while (<>) {
my $line = $_;
while ($line =~ m/linkgit:((.*?)\[(\d)\])/g) {
my $pos = pos $line;
my ($target, $page, $section) = ($1, $2, $3);
sub grab_section {
my ($page) = @_;
open my $fh, "<", "$basedir/$page.txt";
my $firstline = <$fh>;
chomp $firstline;
close $fh;
my ($section) = ($firstline =~ /.*\((\d)\)$/);
return $section;
}
# De-AsciiDoc
$page =~ s/{litdd}/--/g;
sub lint {
my ($file) = @_;
open my $fh, "<", $file
or return;
while (<$fh>) {
my $where = "$file:$.";
while (s/linkgit:((.*?)\[(\d)\])//) {
my ($target, $page, $section) = ($1, $2, $3);
if (!exists $TXT{$page}) {
report($pos, $line, $target, "link outside of our own docs");
next;
}
if (!exists $SECTION{$page}) {
report($pos, $line, $target, "link outside of our sectioned docs");
next;
}
my $real_section = $SECTION{$page};
if ($section != $SECTION{$page}) {
report($pos, $line, $target, "wrong section (should be $real_section)");
next;
# De-AsciiDoc
$page =~ s/{litdd}/--/g;
if ($page !~ /^git/) {
report($where, $target, "nongit link");
next;
}
if (! -f "$basedir/$page.txt") {
report($where, $target, "no such source");
next;
}
$real_section = grab_section($page);
if ($real_section != $section) {
report($where, $target,
"wrong section (should be $real_section)");
next;
}
}
}
# this resets our $. for each file
close ARGV if eof;
close $fh;
}
exit $exit_code;
sub lint_it {
lint($File::Find::name) if -f && /\.txt$/;
}
if (!@ARGV) {
find({ wanted => \&lint_it, no_chdir => 1 }, $basedir);
} else {
for (@ARGV) {
lint($_);
}
}
exit $found_errors;


@ -1,24 +0,0 @@
#!/usr/bin/perl
use strict;
use warnings;
my $exit_code = 0;
sub report {
my ($target, $msg) = @_;
print "error: $target: $msg\n";
$exit_code = 1;
}
local $/;
while (my $slurp = <>) {
report($ARGV, "has no 'Part of the linkgit:git[1] suite' end blurb")
unless $slurp =~ m[
^GIT\n
---\n
\QPart of the linkgit:git[1] suite\E \n
\z
]mx;
}
exit $exit_code;


@ -1,105 +0,0 @@
#!/usr/bin/perl
use strict;
use warnings;
my %SECTIONS;
{
my $order = 0;
%SECTIONS = (
'NAME' => {
required => 1,
order => $order++,
},
'SYNOPSIS' => {
required => 1,
order => $order++,
},
'DESCRIPTION' => {
required => 1,
order => $order++,
},
'OPTIONS' => {
order => $order++,
required => 0,
},
'CONFIGURATION' => {
order => $order++,
},
'BUGS' => {
order => $order++,
},
'SEE ALSO' => {
order => $order++,
},
'GIT' => {
required => 1,
order => $order++,
},
);
}
my $SECTION_RX = do {
my ($names) = join "|", keys %SECTIONS;
qr/^($names)$/s;
};
my $exit_code = 0;
sub report {
my ($msg) = @_;
print "$ARGV:$.: $msg\n";
$exit_code = 1;
}
my $last_was_section;
my @actual_order;
while (my $line = <>) {
chomp $line;
if ($line =~ $SECTION_RX) {
push @actual_order => $line;
$last_was_section = 1;
# Have no "last" section yet, processing NAME
next if @actual_order == 1;
my @expected_order = sort {
$SECTIONS{$a}->{order} <=> $SECTIONS{$b}->{order}
} @actual_order;
my $expected_last = $expected_order[-2];
my $actual_last = $actual_order[-2];
if ($actual_last ne $expected_last) {
report("section '$line' incorrectly ordered, comes after '$actual_last'");
}
next;
}
if ($last_was_section) {
my $last_section = $actual_order[-1];
if (length $last_section ne length $line) {
report("dashes under '$last_section' should match its length!");
}
if ($line !~ /^-+$/) {
report("dashes under '$last_section' should be '-' dashes!");
}
$last_was_section = 0;
}
if (eof) {
# We have both a hash and an array to consider, for
# convenience
my %actual_sections;
@actual_sections{@actual_order} = ();
for my $section (sort keys %SECTIONS) {
next if !$SECTIONS{$section}->{required} or exists $actual_sections{$section};
report("has no required '$section' section!");
}
# Reset per-file state
{
@actual_order = ();
# this resets our $. for each file
close ARGV;
}
}
}
exit $exit_code;


@ -190,8 +190,6 @@ The placeholders are:
'%ai':: author date, ISO 8601-like format
'%aI':: author date, strict ISO 8601 format
'%as':: author date, short format (`YYYY-MM-DD`)
'%ah':: author date, human style (like the `--date=human` option of
linkgit:git-rev-list[1])
'%cn':: committer name
'%cN':: committer name (respecting .mailmap, see
linkgit:git-shortlog[1] or linkgit:git-blame[1])
@ -208,23 +206,8 @@ The placeholders are:
'%ci':: committer date, ISO 8601-like format
'%cI':: committer date, strict ISO 8601 format
'%cs':: committer date, short format (`YYYY-MM-DD`)
'%ch':: committer date, human style (like the `--date=human` option of
linkgit:git-rev-list[1])
'%d':: ref names, like the --decorate option of linkgit:git-log[1]
'%D':: ref names without the " (", ")" wrapping.
'%(describe[:options])':: human-readable name, like
linkgit:git-describe[1]; empty string for
undescribable commits. The `describe` string
may be followed by a colon and zero or more
comma-separated options. Descriptions can be
inconsistent when tags are added or removed at
the same time.
+
** 'match=<pattern>': Only consider tags matching the given
`glob(7)` pattern, excluding the "refs/tags/" prefix.
** 'exclude=<pattern>': Do not consider tags matching the given
`glob(7)` pattern, excluding the "refs/tags/" prefix.
'%S':: ref name given on the command line by which the commit was reached
(like `git log --source`), only works with `git log`
'%e':: encoding
@ -271,7 +254,7 @@ endif::git-rev-list[]
`trailers` string may be followed by a colon
and zero or more comma-separated options.
If any option is provided multiple times the
last occurrence wins.
last occurance wins.
+
The boolean options accept an optional value `[=<BOOL>]`. The values
`true`, `false`, `on`, `off` etc. are all accepted. See the "boolean"

View File

@ -892,9 +892,6 @@ or units. n may be zero. The suffixes k, m, and g can be used to name
units in KiB, MiB, or GiB. For example, 'blob:limit=1k' is the same
as 'blob:limit=1024'.
+
The form '--filter=object:type=(tag|commit|tree|blob)' omits all objects
which are not of the requested type.
+
The form '--filter=sparse:oid=<blob-ish>' uses a sparse-checkout
specification contained in the blob (or blob-expression) '<blob-ish>'
to omit blobs that would not be required for a sparse checkout on
@ -933,11 +930,6 @@ equivalent.
--no-filter::
Turn off any previous `--filter=` argument.
--filter-provided-objects::
Filter the list of explicitly provided objects, which would otherwise
always be printed even if they did not match any of the filters. Only
useful with `--filter=`.
--filter-print-omitted::
Only useful with `--filter=`; prints a list of the objects omitted
by the filter. Object IDs are prefixed with a ``~'' character.

View File

@ -1,11 +1,8 @@
Error reporting in git
======================
`BUG`, `die`, `usage`, `error`, and `warning` report errors of
various kinds.
- `BUG` is for failed internal assertions that should never happen,
i.e. a bug in git itself.
`die`, `usage`, `error`, and `warning` report errors of various
kinds.
- `die` is for fatal application errors. It prints a message to
the user and exits with status 128.
@ -23,9 +20,6 @@ various kinds.
without running into too many problems. Like `error`, it
returns -1 after reporting the situation to the caller.
These reports will be logged via the trace2 facility. See the "error"
event in link:api-trace2.txt[trace2 API].
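As a concrete illustration of the calling conventions above, the following
self-contained sketch defines stand-in versions of the helpers; the
signatures and behavior here only mirror the descriptions in this document
and are not git's actual implementations:
----------------------------------------------
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-ins for the helpers described above. */
static void report(const char *prefix, const char *fmt, va_list ap)
{
	fprintf(stderr, "%s", prefix);
	vfprintf(stderr, fmt, ap);
	fputc('\n', stderr);
}

static int error(const char *fmt, ...)
{
	va_list ap;
	va_start(ap, fmt);
	report("error: ", fmt, ap);
	va_end(ap);
	return -1;	/* so callers can write "return error(...);" */
}

static void warning(const char *fmt, ...)
{
	va_list ap;
	va_start(ap, fmt);
	report("warning: ", fmt, ap);
	va_end(ap);
}

static void die(const char *fmt, ...)
{
	va_list ap;
	va_start(ap, fmt);
	report("fatal: ", fmt, ap);
	va_end(ap);
	exit(128);	/* fatal application error, exit status 128 */
}

int main(int argc, char **argv)
{
	FILE *fp;

	if (argc < 2)
		die("usage: %s <config-file>", argv[0]);

	fp = fopen(argv[1], "r");
	if (!fp)
		return error("could not open '%s'", argv[1]) ? 1 : 0;

	warning("parsing of '%s' is not implemented in this sketch", argv[1]);
	fclose(fp);
	return 0;
}
----------------------------------------------
The conventions being illustrated are that `error` returns -1 so it can be
used directly in a return statement, while `die` never returns.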
Customizable error handlers
---------------------------

View File

@ -1,105 +0,0 @@
Simple-IPC API
==============
The Simple-IPC API is a collection of `ipc_` prefixed library routines
and a basic communication protocol that allow an IPC-client process to
send an application-specific IPC-request message to an IPC-server
process and receive an application-specific IPC-response message.
Communication occurs over a named pipe on Windows and a Unix domain
socket on other platforms. IPC-clients and IPC-servers rendezvous at
a previously agreed-to application-specific pathname (which is outside
the scope of this design) that is local to the computer system.
The IPC-server routines within the server application process create a
thread pool to listen for connections and receive request messages
from multiple concurrent IPC-clients. When received, these messages
are dispatched up to the server application callbacks for handling.
IPC-server routines then incrementally relay responses back to the
IPC-client.
The IPC-client routines within a client application process connect
to the IPC-server and send a request message and wait for a response.
When received, the response is returned back to the caller.
For example, the `fsmonitor--daemon` feature will be built as a server
application on top of the IPC-server library routines. It will have
threads watching for file system events and a thread pool waiting for
client connections. Clients, such as `git status` will request a list
of file system events since a point in time and the server will
respond with a list of changed files and directories. The formats of
the request and response are application-specific; the IPC-client and
IPC-server routines treat them as opaque byte streams.
Comparison with sub-process model
---------------------------------
The Simple-IPC mechanism differs from the existing `sub-process.c`
model (Documentation/technical/long-running-process-protocol.txt)
used by applications like Git-LFS. In the LFS-style sub-process model
the helper is started by the foreground process, communication happens
via a pair of file descriptors bound to the stdin/stdout of the
sub-process, the sub-process only serves the current foreground
process, and the sub-process exits when the foreground process
terminates.
In the Simple-IPC model the server is a very long-running service. It
can service many clients at the same time and has a private socket or
named pipe connection to each active client. It might be started
(on-demand) by the current client process or it might have been
started by a previous client or by the OS at boot time. The server
process is not associated with a terminal and it persists after
clients terminate. Clients do not have access to the stdin/stdout of
the server process and therefore must communicate over sockets or
named pipes.
Server startup and shutdown
---------------------------
How an application server based upon IPC-server is started is also
outside the scope of the Simple-IPC design and is a property of the
application using it. For example, the server might be started or
restarted during routine maintenance operations, or it might be
started as a system service during the system boot-up sequence, or it
might be started on-demand by a foreground Git command when needed.
Similarly, server shutdown is a property of the application using
the simple-ipc routines. For example, the server might decide to
shutdown when idle or only upon explicit request.
Simple-IPC protocol
-------------------
The Simple-IPC protocol consists of a single request message from the
client and an optional response message from the server. Both the
client and server messages are unlimited in length and are terminated
with a flush packet.
The pkt-line routines (Documentation/technical/protocol-common.txt)
are used to simplify buffer management during message generation,
transmission, and reception. A flush packet is used to mark the end
of the message. This allows the sender to incrementally generate and
transmit the message. It allows the receiver to incrementally receive
the message in chunks and to know when they have received the entire
message.
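For readers unfamiliar with the framing, the sketch below shows the pkt-line
layout this paragraph relies on: a four-hex-digit length prefix that counts
itself, followed by the payload, with the zero-length "0000" flush packet
terminating the message. It is an illustration of the wire format only, not
git's pkt-line code:
----------------------------------------------
#include <stdio.h>
#include <string.h>

/* Write one pkt-line: 4 hex digits of length (counting themselves), payload. */
static void write_pkt(FILE *out, const char *payload, size_t len)
{
	fprintf(out, "%04zx", len + 4);
	fwrite(payload, 1, len, out);
}

/* The zero-length flush packet marks the end of a Simple-IPC message. */
static void write_flush(FILE *out)
{
	fputs("0000", out);
}

int main(void)
{
	const char *request = "example application-specific request";

	/* One message: any number of packets, terminated by a flush packet. */
	write_pkt(stdout, request, strlen(request));
	write_flush(stdout);
	return 0;
}
----------------------------------------------
An IPC-client would transmit its request this way and then read packets from
the server until it sees the flush packet.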
The actual byte format of the client request and server response
messages is application specific. The IPC layer transmits and
receives them as opaque byte buffers without any concern for the
content within. It is the job of the calling application layer to
understand the contents of the request and response messages.
Summary
-------
Conceptually, the Simple-IPC protocol is similar to an HTTP REST
request. Clients connect, make an application-specific and
stateless request, receive an application-specific
response, and disconnect. It is a one round trip facility for
querying the server. The Simple-IPC routines hide the socket,
named pipe, and thread pool details and allow the application
layer to focus on the application at hand.

View File

@ -465,7 +465,7 @@ completed.)
------------
`"error"`::
This event is emitted when one of the `BUG()`, `error()`, `die()`,
This event is emitted when one of the `error()`, `die()`,
`warning()`, or `usage()` functions are called.
+
------------

View File

@ -44,13 +44,6 @@ Git index format
localization, no special casing of directory separator '/'). Entries
with the same name are sorted by their stage field.
An index entry typically represents a file. However, if sparse-checkout
is enabled in cone mode (`core.sparseCheckoutCone` is enabled) and the
`extensions.sparseIndex` extension is enabled, then the index may
contain entries for directories outside of the sparse-checkout definition.
These entries have mode `040000`, include the `SKIP_WORKTREE` bit, and
the path ends in a directory separator.
32-bit ctime seconds, the last time a file's metadata changed
this is stat(2) data
@ -392,15 +385,3 @@ The remaining data of each directory block is grouped by type:
in this block of entries.
- 32-bit count of cache entries in this block
== Sparse Directory Entries
When using sparse-checkout in cone mode, some entire directories within
the index can be summarized by pointing to a tree object instead of the
entire expanded list of paths within that tree. An index containing such
entries is a "sparse index". Index format versions 4 and less were not
implemented with such entries in mind. Thus, for these versions, an
index containing sparse directory entries will include this extension
with signature { 's', 'd', 'i', 'r' }. Like the split-index extension,
tools should avoid interacting with a sparse index unless they understand
this extension.
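A sparse directory entry can therefore be recognized by the three properties
described above: tree mode `040000`, the `SKIP_WORKTREE` bit, and a path
ending in the directory separator. The sketch below encodes that test with
illustrative types; the struct and flag constant are assumptions for this
example, not git's actual cache-entry definitions:
----------------------------------------------
#include <string.h>

#define EX_SKIP_WORKTREE (1u << 30)	/* illustrative flag bit */

struct example_entry {
	unsigned int mode;	/* 0100644 for a blob, 040000 for a tree */
	unsigned int flags;
	const char *path;
};

static int is_sparse_dir_entry(const struct example_entry *ce)
{
	size_t len = strlen(ce->path);

	return ce->mode == 040000 &&			/* tree mode */
	       (ce->flags & EX_SKIP_WORKTREE) &&	/* outside the cone */
	       len > 0 && ce->path[len - 1] == '/';	/* directory path */
}

int main(void)
{
	struct example_entry dir = { 040000, EX_SKIP_WORKTREE, "vendored/lib/" };
	struct example_entry file = { 0100644, 0, "README.md" };

	return !(is_sparse_dir_entry(&dir) && !is_sparse_dir_entry(&file));
}
----------------------------------------------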

View File

@ -43,9 +43,8 @@ Design Details
a change in format.
- The MIDX keeps only one record per object ID. If an object appears
in multiple packfiles, then the MIDX selects the copy in the
preferred packfile, otherwise selecting from the most-recently
modified packfile.
in multiple packfiles, then the MIDX selects the copy in the most-
recently modified packfile.
- If there exist packfiles in the pack directory not registered in
the MIDX, then those packfiles are loaded into the `packed_git`

View File

@ -379,86 +379,3 @@ CHUNK DATA:
TRAILER:
Index checksum of the above contents.
== multi-pack-index reverse indexes
Similar to the pack-based reverse index, the multi-pack index can also
be used to generate a reverse index.
Instead of mapping between offset, pack-, and index position, this
reverse index maps between an object's position within the MIDX, and
that object's position within a pseudo-pack that the MIDX describes
(i.e., the ith entry of the multi-pack reverse index holds the MIDX
position of the ith object in pseudo-pack order).
To clarify the difference between these orderings, consider a multi-pack
reachability bitmap (which does not yet exist, but is what we are
building towards here). Each bit needs to correspond to an object in the
MIDX, and so we need an efficient mapping from bit position to MIDX
position.
One solution is to let bits occupy the same position in the oid-sorted
index stored by the MIDX. But because oids are effectively random, their
resulting reachability bitmaps would have no locality, and thus compress
poorly. (This is the reason that single-pack bitmaps use the pack
ordering, and not the .idx ordering, for the same purpose.)
So we'd like to define an ordering for the whole MIDX based around
pack ordering, which has far better locality (and thus compresses more
efficiently). We can think of a pseudo-pack created by the concatenation
of all of the packs in the MIDX. E.g., if we had a MIDX with three packs
(a, b, c), with 10, 15, and 20 objects respectively, we can imagine an
ordering of the objects like:
|a,0|a,1|...|a,9|b,0|b,1|...|b,14|c,0|c,1|...|c,19|
where the ordering of the packs is defined by the MIDX's pack list,
and then the ordering of objects within each pack is the same as the
order in the actual packfile.
Given the list of packs and their counts of objects, you can
naïvely reconstruct that pseudo-pack ordering (e.g., the object at
position 27 must be (c,1) because packs "a" and "b" consumed 25 of the
slots). But there's a catch. Objects may be duplicated between packs, in
which case the MIDX only stores one pointer to the object (and thus we'd
want only one slot in the bitmap).
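Setting the duplicate-object caveat aside for a moment, the naive position
lookup described above is just a running subtraction over the per-pack
object counts. A minimal sketch, reusing the illustrative (a, b, c) =
(10, 15, 20) example with 0-based positions:
----------------------------------------------
#include <stdio.h>
#include <stdint.h>

struct pack_info {
	const char *name;
	uint32_t nr_objects;
};

/* Map a 0-based pseudo-pack position to (pack, index within that pack). */
static int pseudo_pack_lookup(const struct pack_info *packs, size_t nr_packs,
			      uint32_t pos, size_t *pack_out, uint32_t *idx_out)
{
	for (size_t i = 0; i < nr_packs; i++) {
		if (pos < packs[i].nr_objects) {
			*pack_out = i;
			*idx_out = pos;
			return 0;
		}
		pos -= packs[i].nr_objects;
	}
	return -1;	/* position is beyond the end of the pseudo-pack */
}

int main(void)
{
	const struct pack_info packs[] = { { "a", 10 }, { "b", 15 }, { "c", 20 } };
	size_t pack;
	uint32_t idx;

	if (!pseudo_pack_lookup(packs, 3, 12, &pack, &idx))
		printf("position 12 -> (%s,%u)\n", packs[pack].name, (unsigned)idx);
	return 0;
}
----------------------------------------------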
Callers could handle duplicates themselves by reading objects in order
of their bit-position, but that's linear in the number of objects, and
much too expensive for ordinary bitmap lookups. Building a reverse index
solves this, since it is the logical inverse of the index, and that
index has already removed duplicates. But, building a reverse index on
the fly can be expensive. Since we already have an on-disk format for
pack-based reverse indexes, let's reuse it for the MIDX's pseudo-pack,
too.
Objects from the MIDX are ordered as follows to string together the
pseudo-pack. Let `pack(o)` return the pack from which `o` was selected
by the MIDX, and define an ordering of packs based on their numeric ID
(as stored by the MIDX). Let `offset(o)` return the object offset of `o`
within `pack(o)`. Then, compare `o1` and `o2` as follows:
- If one of `pack(o1)` and `pack(o2)` is preferred and the other
is not, then the preferred one sorts first.
+
(This is a detail that allows the MIDX bitmap to determine which
pack should be used by the pack-reuse mechanism, since it can ask
the MIDX for the pack containing the object at bit position 0).
- If `pack(o1) ≠ pack(o2)`, then sort the two objects in descending
order based on the pack ID.
- Otherwise, `pack(o1) = pack(o2)`, and the objects are sorted in
pack-order (i.e., `o1` sorts ahead of `o2` exactly when `offset(o1)
< offset(o2)`).
In short, a MIDX's pseudo-pack is the de-duplicated concatenation of
objects in packs stored by the MIDX, laid out in pack order, and the
packs arranged in MIDX order (with the preferred pack coming first).
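Transcribed literally into code, the three comparison rules above look like
the following sketch; the struct is illustrative, and `preferred`, `pack_id`
and `offset` stand for the MIDX-provided values written as `pack(o)` and
`offset(o)` in the text:
----------------------------------------------
#include <stdint.h>

struct pseudo_pack_pos {
	int preferred;		/* object was selected from the preferred pack */
	uint32_t pack_id;	/* numeric pack ID stored by the MIDX */
	uint64_t offset;	/* object offset within its pack */
};

static int pseudo_pack_cmp(const struct pseudo_pack_pos *o1,
			   const struct pseudo_pack_pos *o2)
{
	/* Rule 1: the object from the preferred pack sorts first. */
	if (o1->preferred != o2->preferred)
		return o1->preferred ? -1 : 1;

	/* Rule 2: different packs sort by pack ID, descending as stated above. */
	if (o1->pack_id != o2->pack_id)
		return o1->pack_id > o2->pack_id ? -1 : 1;

	/* Rule 3: within one pack, keep pack order (ascending offset). */
	if (o1->offset != o2->offset)
		return o1->offset < o2->offset ? -1 : 1;
	return 0;
}

int main(void)
{
	struct pseudo_pack_pos a = { 1, 7, 1200 };	/* preferred pack */
	struct pseudo_pack_pos b = { 0, 2, 64 };

	return pseudo_pack_cmp(&a, &b) < 0 ? 0 : 1;	/* a sorts first */
}
----------------------------------------------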
Finally, note that the MIDX's reverse index is not stored as a chunk in
the multi-pack-index itself. This is done because the reverse index
includes the checksum of the pack or MIDX to which it belongs, which
makes it impossible to write in the MIDX. To avoid races when rewriting
the MIDX, a MIDX reverse index includes the MIDX's checksum in its
filename (e.g., `multi-pack-index-xyz.rev`).

View File

@ -1,270 +0,0 @@
Parallel Checkout Design Notes
==============================
The "Parallel Checkout" feature attempts to use multiple processes to
parallelize the work of uncompressing the blobs, applying in-core
filters, and writing the resulting contents to the working tree during a
checkout operation. It can be used by all checkout-related commands,
such as `clone`, `checkout`, `reset`, `sparse-checkout`, and others.
These commands share the following basic structure:
* Step 1: Read the current index file into memory.
* Step 2: Modify the in-memory index based upon the command, and
temporarily mark all cache entries that need to be updated.
* Step 3: Populate the working tree to match the new candidate index.
This includes iterating over all of the to-be-updated cache entries
and deleting, creating, or overwriting the associated files in the working
tree.
* Step 4: Write the new index to disk.
Step 3 is the focus of the "parallel checkout" effort described here.
Sequential Implementation
-------------------------
For the purposes of discussion here, the current sequential
implementation of Step 3 is divided in 3 parts, each one implemented in
its own function:
* Step 3a: `unpack-trees.c:check_updates()` contains a series of
sequential loops iterating over the `cache_entry`'s array. The main
loop in this function calls the Step 3b function for each of the
to-be-updated entries.
* Step 3b: `entry.c:checkout_entry()` examines the existing working tree
for file conflicts, collisions, and unsaved changes. It removes files
and creates leading directories as necessary. It calls the Step 3c
function for each entry to be written.
* Step 3c: `entry.c:write_entry()` loads the blob into memory, smudges
it if necessary, creates the file in the working tree, writes the
smudged contents, calls `fstat()` or `lstat()`, and updates the
associated `cache_entry` struct with the stat information gathered.
It wouldn't be safe to perform Step 3b in parallel, as there could be
race conditions between file creations and removals. Instead, the
parallel checkout framework lets the sequential code handle Step 3b,
and uses parallel workers to replace the sequential
`entry.c:write_entry()` calls from Step 3c.
Rejected Multi-Threaded Solution
--------------------------------
The most "straightforward" implementation would be to spread the set of
to-be-updated cache entries across multiple threads. But due to the
thread-unsafe functions in the ODB code, we would have to use locks to
coordinate the parallel operation. An early prototype of this solution
showed that the multi-threaded checkout would bring performance
improvements over the sequential code, but there was still too much lock
contention. Profiling with `perf` indicated that around 20% of the runtime
during a local Linux clone (on an SSD) was spent in locking functions.
For this reason, this approach was rejected in favor of using multiple
child processes, which led to better performance.
Multi-Process Solution
----------------------
Parallel checkout alters the aforementioned Step 3 to use multiple
`checkout--worker` background processes to distribute the work. The
long-running worker processes are controlled by the foreground Git
command using the existing run-command API.
Overview
~~~~~~~~
Step 3b is only slightly altered; for each entry to be checked out, the
main process performs the following steps:
* M1: Check whether there is any untracked or unclean file in the
working tree which would be overwritten by this entry, and decide
whether to proceed (removing the file(s)) or not.
* M2: Create the leading directories.
* M3: Load the conversion attributes for the entry's path.
* M4: Check, based on the entry's type and conversion attributes,
whether the entry is eligible for parallel checkout (more on this
later). If it is eligible, enqueue the entry and the loaded
attributes to later write the entry in parallel. If not, write the
entry right away, using the default sequential code.
Note: we save the conversion attributes associated with each entry
because the workers don't have access to the main process' index state,
so they can't load the attributes by themselves (and the attributes are
needed to properly smudge the entry). Additionally, this has a positive
impact on performance as (1) we don't need to load the attributes twice
and (2) the attributes machinery is optimized to handle paths in
sequential order.
After all entries have passed through the above steps, the main process
checks if the number of enqueued entries is sufficient to spread among
the workers. If not, it just writes them sequentially. Otherwise, it
spawns the workers and distributes the queued entries uniformly in
continuous chunks. This aims to minimize the chances of two workers
writing to the same directory simultaneously, which could increase lock
contention in the kernel.
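The "continuous chunks" distribution mentioned above is easy to picture: N
queued entries are cut into at most as many contiguous ranges as there are
workers, with sizes differing by at most one. A minimal sketch with
illustrative names (this is not the actual parallel-checkout code):
----------------------------------------------
#include <stdio.h>
#include <stddef.h>

struct chunk {
	size_t start;	/* index of the first queued entry for this worker */
	size_t count;	/* how many consecutive entries it gets */
};

static size_t split_in_chunks(size_t nr_entries, size_t workers, struct chunk *out)
{
	size_t base = nr_entries / workers;
	size_t extra = nr_entries % workers;	/* these workers get one more */
	size_t start = 0, used = 0;

	for (size_t i = 0; i < workers && start < nr_entries; i++) {
		size_t count = base + (i < extra ? 1 : 0);

		out[used].start = start;
		out[used].count = count;
		start += count;
		used++;
	}
	return used;	/* number of workers that actually received work */
}

int main(void)
{
	struct chunk chunks[4];
	size_t used = split_in_chunks(10, 4, chunks);

	for (size_t i = 0; i < used; i++)
		printf("worker %zu: entries [%zu, %zu)\n", i,
		       chunks[i].start, chunks[i].start + chunks[i].count);
	return 0;
}
----------------------------------------------
Contiguous ranges keep each worker's writes clustered, which is what reduces
the chance of two workers touching the same directory at once, as noted
above.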
Then, for each assigned item, each worker:
* W1: Checks if there is any non-directory file in the leading part of
the entry's path or if there already exists a file at the entry's path.
If so, mark the entry with `PC_ITEM_COLLIDED` and skip it (more on
this later).
* W2: Creates the file (with O_CREAT and O_EXCL).
* W3: Loads the blob into memory (inflating and delta reconstructing
it).
* W4: Applies any required in-process filter, like end-of-line
conversion and re-encoding.
* W5: Writes the result to the file descriptor opened at W2.
* W6: Calls `fstat()` or `lstat()` on the just-written path, and sends
the result back to the main process, together with the end status of
the operation and the item's identification number.
Note that, when possible, steps W3 to W5 are delegated to the streaming
machinery, removing the need to keep the entire blob in memory.
If the worker fails to read the blob or to write it to the working tree,
it removes the created file to avoid leaving empty files behind. This is
the *only* time a worker is allowed to remove a file.
As mentioned earlier, it is the responsibility of the main process to
remove any file that blocks the checkout operation (or abort if the
removal(s) would cause data loss and the user didn't ask to `--force`).
This is crucial to avoid race conditions and also to properly detect
path collisions at Step W1.
After the workers finish writing the items and sending back the required
information, the main process handles the results in two steps:
- First, it updates the in-memory index with the `lstat()` information
sent by the workers. (This must be done first as this information
might be required in the following step.)
- Then it writes the items which collided on disk (i.e. items marked
with `PC_ITEM_COLLIDED`). More on this below.
Path Collisions
---------------
Path collisions happen when two different paths correspond to the same
entry in the file system. E.g. the paths 'a' and 'A' would collide in a
case-insensitive file system.
The sequential checkout deals with collisions in the same way that it
deals with files that were already present in the working tree before
checkout. Basically, it checks if the path that it wants to write
already exists on disk, makes sure the existing file doesn't have
unsaved data, and then overwrites it. (To be more pedantic: it deletes
the existing file and creates the new one.) So, if there are multiple
colliding files to be checked out, the sequential code will write each
one of them but only the last will actually survive on disk.
Parallel checkout aims to reproduce the same behavior. However, we
cannot let the workers racily write to the same file on disk. Instead,
the workers detect when the entry that they want to check out would
collide with an existing file, and mark it with `PC_ITEM_COLLIDED`.
Later, the main process can sequentially feed these entries back to
`checkout_entry()` without the risk of race conditions. On clone, this
also has the effect of marking the colliding entries to later emit a
warning for the user, like the classic sequential checkout does.
The workers are able to detect both collisions among the entries being
concurrently written and collisions between a parallel-eligible entry
and an ineligible entry. The general idea for collision detection is
quite straightforward: for each parallel-eligible entry, the main
process must remove all files that prevent this entry from being written
(before enqueueing it). This includes any non-directory file in the
leading path of the entry. Later, when a worker gets assigned the entry,
it looks again for the non-directory files and for an already existing
file at the entry's path. If any of these checks finds something, the
worker knows that there was a path collision.
Because parallel checkout can distinguish path collisions from the case
where the file was already present in the working tree before checkout,
we could alternatively choose to skip the checkout of colliding entries.
However, each entry that doesn't get written would have NULL `lstat()`
fields on the index. This could cause performance penalties for
subsequent commands that need to refresh the index, as they would have
to go to the file system to see if the entry is dirty. Thus, if we have
N entries in a colliding group and we decide to write and `lstat()` only
one of them, every subsequent `git-status` will have to read, convert,
and hash the written file N - 1 times. By checking out all colliding
entries (like the sequential code does), we only pay the overhead once,
during checkout.
Eligible Entries for Parallel Checkout
--------------------------------------
As previously mentioned, not all entries passed to `checkout_entry()`
will be considered eligible for parallel checkout. More specifically, we
exclude:
- Symbolic links; to avoid race conditions that, in combination with
path collisions, could cause workers to write files at the wrong
place. For example, if we were to concurrently check out a symlink
'a' -> 'b' and a regular file 'A/f' in a case-insensitive file system,
we could potentially end up writing the file 'A/f' at 'a/f', due to a
race condition.
- Regular files that require external filters (either "one shot" filters
or long-running process filters). These filters are black-boxes to Git
and may have their own internal locking or non-concurrent assumptions.
So it might not be safe to run multiple instances in parallel.
+
Besides, long-running filters may use the delayed checkout feature to
postpone the return of some filtered blobs. The delayed checkout queue
and the parallel checkout queue are not compatible and should remain
separate.
+
Note: regular files that only require internal filters, like end-of-line
conversion and re-encoding, are eligible for parallel checkout.
Ineligible entries are checked out by the classic sequential codepath
*before* spawning workers.
Note: submodules' files are also eligible for parallel checkout (as
long as they don't fall into any of the excluding categories mentioned
above). But since each submodule is checked out in its own child
process, we don't mix the superproject's and the submodules' files in
the same parallel checkout process or queue.
The API
-------
The parallel checkout API was designed with the goal of minimizing
changes to the current users of the checkout machinery. This means that
they don't have to call a different function for sequential or parallel
checkout. As already mentioned, `checkout_entry()` will automatically
insert the given entry in the parallel checkout queue when this feature
is enabled and the entry is eligible; otherwise, it will just write the
entry right away, using the sequential code. In general, callers of the
parallel checkout API should look similar to this:
----------------------------------------------
int pc_workers, pc_threshold, err = 0;
struct checkout state;
get_parallel_checkout_configs(&pc_workers, &pc_threshold);
/*
* This check is not strictly required, but it
* should save some time in sequential mode.
*/
if (pc_workers > 1)
init_parallel_checkout();
for (each cache_entry ce to-be-updated)
err |= checkout_entry(ce, &state, NULL, NULL);
err |= run_parallel_checkout(&state, pc_workers, pc_threshold, NULL, NULL);
----------------------------------------------

View File

@ -346,14 +346,6 @@ explained below.
client should download from all given URIs. Currently, the
protocols supported are "http" and "https".
If the 'wait-for-done' feature is advertised, the following argument
can be included in the client's request.
wait-for-done
Indicates to the server that it should never send "ready", but
should wait for the client to say "done" before sending the
packfile.
The response of `fetch` is broken into a number of sections separated by
delimiter packets (0001), with each section beginning with its section
header. Most sections are sent only when the packfile is sent.
@ -522,34 +514,3 @@ packet-line, and must not contain non-printable or whitespace characters. The
current implementation uses trace2 session IDs (see
link:api-trace2.html[api-trace2] for details), but this may change and users of
the session ID should not rely on this fact.
object-info
~~~~~~~~~~~
`object-info` is the command to retrieve information about one or more objects.
Its main purpose is to allow a client to make decisions based on this
information without having to fully fetch objects. Object size is the only
information that is currently supported.
An `object-info` request takes the following arguments:
size
Requests size information to be returned for each listed object id.
oid <oid>
Indicates to the server an object which the client wants to obtain
information for.
The response of `object-info` is a list of the requested object ids
and associated requested information, each separated by a single space.
output = info flush-pkt
info = PKT-LINE(attrs LF)
*PKT-LINE(obj-info LF)
attrs = attr | attrs SP attrs
attr = "size"
obj-info = obj-id SP obj-size
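As an illustration of the response grammar, the sketch below parses a single
`obj-info` line ("obj-id SP obj-size"); the struct and the placeholder object
id are assumptions made only for this example:
----------------------------------------------
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct obj_info {
	char oid[65];			/* room for a SHA-256 hex object id */
	unsigned long long size;
};

/* Parse one "obj-id SP obj-size" line from an object-info response. */
static int parse_obj_info(const char *line, struct obj_info *out)
{
	const char *sp = strchr(line, ' ');
	size_t oid_len;

	if (!sp)
		return -1;			/* missing SP separator */
	oid_len = (size_t)(sp - line);
	if (!oid_len || oid_len >= sizeof(out->oid))
		return -1;
	memcpy(out->oid, line, oid_len);
	out->oid[oid_len] = '\0';
	out->size = strtoull(sp + 1, NULL, 10);
	return 0;
}

int main(void)
{
	struct obj_info info;

	/* "1234abcd" is a placeholder object id for the sketch. */
	if (!parse_obj_info("1234abcd 42", &info))
		printf("%s is %llu bytes\n", info.oid, info.size);
	return 0;
}
----------------------------------------------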

View File

@ -1011,13 +1011,8 @@ reftable stack, reload `tables.list`, and delete any tables no longer mentioned
in `tables.list`.
Irregular program exit may still leave behind unused files. In this case, a
cleanup operation should proceed as follows:
* take a lock `tables.list.lock` to prevent concurrent modifications
* refresh the reftable stack, by reading `tables.list`
* for each `*.ref` file, remove it if
** it is not mentioned in `tables.list`, and
** its max update_index is not beyond the max update_index of the stack
cleanup operation can read `tables.list`, note its modification timestamp, and
delete any unreferenced `*.ref` files that are older.
Alternatives considered

View File

@ -1,208 +0,0 @@
Git Sparse-Index Design Document
================================
The sparse-checkout feature allows users to focus a working directory on
a subset of the files at HEAD. The cone mode patterns, enabled by
`core.sparseCheckoutCone`, allow for very fast pattern matching to
discover which files at HEAD belong in the sparse-checkout cone.
Three important scale dimensions for a Git working directory are:
* `HEAD`: How many files are present at `HEAD`?
* Populated: How many files are within the sparse-checkout cone?
* Modified: How many files has the user modified in the working directory?
We will use big-O notation -- O(X) -- to denote how expensive certain
operations are in terms of these dimensions.
These dimensions are ordered by their magnitude: users (typically) modify
fewer files than are populated, and we can only populate files at `HEAD`.
Problems occur if there is an extreme imbalance in these dimensions. For
example, if `HEAD` contains millions of paths but the populated set has
only tens of thousands, then commands like `git status` and `git add` can
be dominated by operations that require O(`HEAD`) operations instead of
O(Populated). Primarily, the cost is in parsing and rewriting the index,
which is filled primarily with files at `HEAD` that are marked with the
`SKIP_WORKTREE` bit.
The sparse-index intends to take these commands that read and modify the
index from O(`HEAD`) to O(Populated). To do this, we need to modify the
index format in a significant way: add "sparse directory" entries.
With cone mode patterns, it is possible to detect when an entire
directory will have its contents outside of the sparse-checkout definition.
Instead of listing all of the files it contains as individual entries, a
sparse-index contains an entry with the directory name, referencing the
object ID of the tree at `HEAD` and marked with the `SKIP_WORKTREE` bit.
If we need to discover the details for paths within that directory, we
can parse trees to find that list.
At time of writing, sparse-directory entries violate expectations about the
index format and its in-memory data structure. There are many consumers in
the codebase that expect to iterate through all of the index entries and
see only files. In fact, these loops expect to see a reference to every
staged file. One way to handle this is to parse trees to replace a
sparse-directory entry with all of the files within that tree as the index
is loaded. However, parsing trees is slower than parsing the index format,
so that is a slower operation than if we left the index alone. The plan is
to make all of these integrations "sparse aware" so this expansion through
tree parsing is unnecessary and they use fewer resources than when using a
full index.
The implementation plan below follows four phases to slowly integrate with
the sparse-index. The intention is to incrementally update Git commands to
interact safely with the sparse-index without significant slowdowns. This
may not always be possible, but the hope is that the primary commands that
users need in their daily work are dramatically improved.
Phase I: Format and initial speedups
------------------------------------
During this phase, Git learns to enable the sparse-index and safely parse
one. Protections are put in place so that every consumer of the in-memory
data structure can operate with its current assumption of every file at
`HEAD`.
At first, every index parse will call a helper method,
`ensure_full_index()`, which scans the index for sparse-directory entries
(pointing to trees) and replaces them with the full list of paths (with
blob contents) by parsing tree objects. This will be slower in all cases.
The only noticeable change in behavior will be that the serialized index
file contains sparse-directory entries.
To start, we use a new required index extension, `sdir`, to allow
inserting sparse-directory entries into indexes with file format
versions 2, 3, and 4. This prevents Git versions that do not understand
the sparse-index from operating on one, while allowing tools that do not
understand the sparse-index to operate on repositories as long as they do
not interact with the index. A new format, index v5, will be introduced
that includes sparse-directory entries by default. It might also
introduce other features that have been considered for improving the
index, as well.
Next, consumers of the index will be guarded against operating on a
sparse-index by inserting calls to `ensure_full_index()` or
`expand_index_to_path()`. If a specific path is requested, then those will
be protected from within the `index_file_exists()` and `index_name_pos()`
API calls: they will call `ensure_full_index()` if necessary. The
intention here is to preserve existing behavior when interacting with a
sparse-checkout. We don't want a change to happen by accident, without
tests. Many of these locations may not need any change before removing the
guards, but we should not do so without tests to ensure the expected
behavior happens.
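The guard pattern itself is small. The sketch below shows its shape with
illustrative types; `example_expand_to_full()` only stands in for
`ensure_full_index()` and does not perform the real tree expansion:
----------------------------------------------
#include <stddef.h>
#include <stdio.h>

struct example_entry {
	const char *path;
	int is_sparse_dir;	/* stands in for a SKIP_WORKTREE tree entry */
};

struct example_index {
	struct example_entry *entries;
	size_t nr;
	int sparse;		/* still contains sparse directory entries? */
};

/*
 * Placeholder for the expansion step: a real implementation would parse
 * the tree behind each sparse directory entry and splice in one entry
 * per file.  Here we only record that the expansion happened.
 */
static void example_expand_to_full(struct example_index *index)
{
	index->sparse = 0;
}

static void for_each_file(struct example_index *index,
			  void (*fn)(const struct example_entry *))
{
	/* Guard: this loop expects to see every staged file individually. */
	if (index->sparse)
		example_expand_to_full(index);

	for (size_t i = 0; i < index->nr; i++)
		fn(&index->entries[i]);
}

static void print_path(const struct example_entry *ce)
{
	printf("%s\n", ce->path);
}

int main(void)
{
	struct example_entry entries[] = {
		{ "README.md", 0 },
		{ "third_party/", 1 },
	};
	struct example_index index = { entries, 2, 1 };

	for_each_file(&index, print_path);
	return 0;
}
----------------------------------------------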
It may be desirable to _change_ the behavior of some commands in the
presence of a sparse index or more generally in any sparse-checkout
scenario. In such cases, these should be carefully communicated and
tested. No such behavior changes are intended during this phase.
During a scan of the codebase, not every iteration of the cache entries
needs an `ensure_full_index()` check. The basic reasons include:
1. The loop is scanning for entries with non-zero stage. These entries
are not collapsed into a sparse-directory entry.
2. The loop is scanning for submodules. These entries are not collapsed
into a sparse-directory entry.
3. The loop is part of the index API, especially around reading or
writing the format.
4. The loop is checking for correct order of cache entries and that is
correct if and only if the sparse-directory entries are in the correct
location.
5. The loop ignores entries with the `SKIP_WORKTREE` bit set, or is
otherwise already aware of sparse directory entries.
6. The sparse-index is disabled at this point when using the split-index
feature, so no effort is made to protect the split-index API.
Even after inserting these guards, we will keep expanding sparse-indexes
for most Git commands using the `command_requires_full_index` repository
setting. This setting will be on by default and disabled one builtin at a
time until we have sufficient confidence that all of the index operations
are properly guarded.
To complete this phase, the commands `git status` and `git add` will be
integrated with the sparse-index so that they operate with O(Populated)
performance. They will be carefully tested for operations within and
outside the sparse-checkout definition.
Phase II: Careful integrations
------------------------------
This phase focuses on ensuring that all index extensions and APIs work
well with a sparse-index. This requires significant increases to our test
coverage, especially for operations that interact with the working
directory outside of the sparse-checkout definition. Some of these
behaviors may not be the desirable ones, such as some tests already
marked for failure in `t1092-sparse-checkout-compatibility.sh`.
The index extensions that may require special integrations are:
* FS Monitor
* Untracked cache
While integrating with these features, we should look for patterns that
might lead to better APIs for interacting with the index. Coalescing
common usage patterns into an API call can reduce the number of places
where sparse-directories need to be handled carefully.
Phase III: Important command speedups
-------------------------------------
At this point, the patterns for testing and implementing sparse-directory
logic should be relatively stable. This phase focuses on updating some of
the most common builtins that use the index to operate as O(Populated).
Here is a potential list of commands that could be valuable to integrate
at this point:
* `git commit`
* `git checkout`
* `git merge`
* `git rebase`
Hopefully, commands such as `git merge` and `git rebase` can benefit
instead from merge algorithms that do not use the index as a data
structure, such as the merge-ORT strategy. As these topics mature, we
may enable the ORT strategy by default for repositories using the
sparse-index feature.
Along with `git status` and `git add`, these commands cover the majority
of users' interactions with the working directory. In addition, we can
integrate with these commands:
* `git grep`
* `git rm`
These have been proposed as some whose behavior could change when in a
repo with a sparse-checkout definition. It would be good to include this
behavior automatically when using a sparse-index. Some clarity is needed
to make the behavior switch clear to the user.
This phase is the first where parallel work might be possible without too
many conflicts between topics.
Phase IV: The long tail
-----------------------
This last phase is less a "phase" and more "the new normal" after all of
the previous work.
To start, the `command_requires_full_index` option could be removed in
favor of expanding only when hitting an API guard.
There are many Git commands that could use special attention to operate as
O(Populated), while some might be so rare that it is acceptable to leave
them with additional overhead when a sparse-index is present.
Here are some commands that might be useful to update:
* `git sparse-checkout set`
* `git am`
* `git clean`
* `git stash`

View File

@ -1,8 +1,5 @@
= Git User Manual
[preface]
== Introduction
Git is a fast distributed revision control system.
This manual is designed to be readable by someone with basic UNIX

View File

@ -1,7 +1,7 @@
#!/bin/sh
GVF=GIT-VERSION-FILE
DEF_VER=v2.32.0-rc0
DEF_VER=v2.31.4
LF='
'

View File

@ -197,9 +197,7 @@ Issues of note:
Building and installing the pdf file additionally requires
dblatex. Version >= 0.2.7 is known to work.
All formats require at least asciidoc 8.4.1. Alternatively, you can
use Asciidoctor (requires Ruby) by passing USE_ASCIIDOCTOR=YesPlease
to make. You need at least Asciidoctor version 1.5.
All formats require at least asciidoc 8.4.1.
There are also "make quick-install-doc", "make quick-install-man"
and "make quick-install-html" which install preformatted man pages

View File

@ -578,9 +578,7 @@ GENERATED_H =
EXTRA_CPPFLAGS =
FUZZ_OBJS =
FUZZ_PROGRAMS =
GIT_OBJS =
LIB_OBJS =
OBJECTS =
PROGRAM_OBJS =
PROGRAMS =
EXCLUDED_PROGRAMS =
@ -589,7 +587,6 @@ SCRIPT_PYTHON =
SCRIPT_SH =
SCRIPT_LIB =
TEST_BUILTINS_OBJS =
TEST_OBJS =
TEST_PROGRAMS_NEED_X =
THIRD_PARTY_SOURCES =
@ -665,8 +662,6 @@ ETAGS_TARGET = TAGS
FUZZ_OBJS += fuzz-commit-graph.o
FUZZ_OBJS += fuzz-pack-headers.o
FUZZ_OBJS += fuzz-pack-idx.o
.PHONY: fuzz-objs
fuzz-objs: $(FUZZ_OBJS)
# Always build fuzz objects even if not testing, to prevent bit-rot.
all:: $(FUZZ_OBJS)
@ -684,8 +679,6 @@ PROGRAM_OBJS += http-backend.o
PROGRAM_OBJS += imap-send.o
PROGRAM_OBJS += sh-i18n--envsubst.o
PROGRAM_OBJS += shell.o
.PHONY: program-objs
program-objs: $(PROGRAM_OBJS)
# Binary suffix, set to .exe for Windows builds
X =
@ -693,7 +686,6 @@ X =
PROGRAMS += $(patsubst %.o,git-%$X,$(PROGRAM_OBJS))
TEST_BUILTINS_OBJS += test-advise.o
TEST_BUILTINS_OBJS += test-bitmap.o
TEST_BUILTINS_OBJS += test-bloom.o
TEST_BUILTINS_OBJS += test-chmtime.o
TEST_BUILTINS_OBJS += test-config.o
@ -745,7 +737,6 @@ TEST_BUILTINS_OBJS += test-serve-v2.o
TEST_BUILTINS_OBJS += test-sha1.o
TEST_BUILTINS_OBJS += test-sha256.o
TEST_BUILTINS_OBJS += test-sigchain.o
TEST_BUILTINS_OBJS += test-simple-ipc.o
TEST_BUILTINS_OBJS += test-strcmp-offset.o
TEST_BUILTINS_OBJS += test-string-list.o
TEST_BUILTINS_OBJS += test-submodule-config.o
@ -753,7 +744,6 @@ TEST_BUILTINS_OBJS += test-submodule-nested-repo-config.o
TEST_BUILTINS_OBJS += test-subprocess.o
TEST_BUILTINS_OBJS += test-trace2.o
TEST_BUILTINS_OBJS += test-urlmatch-normalization.o
TEST_BUILTINS_OBJS += test-userdiff.o
TEST_BUILTINS_OBJS += test-wildmatch.o
TEST_BUILTINS_OBJS += test-windows-named-pipe.o
TEST_BUILTINS_OBJS += test-write-cache.o
@ -948,7 +938,6 @@ LIB_OBJS += pack-revindex.o
LIB_OBJS += pack-write.o
LIB_OBJS += packfile.o
LIB_OBJS += pager.o
LIB_OBJS += parallel-checkout.o
LIB_OBJS += parse-options-cb.o
LIB_OBJS += parse-options.o
LIB_OBJS += patch-delta.o
@ -963,7 +952,6 @@ LIB_OBJS += progress.o
LIB_OBJS += promisor-remote.o
LIB_OBJS += prompt.o
LIB_OBJS += protocol.o
LIB_OBJS += protocol-caps.o
LIB_OBJS += prune-packed.o
LIB_OBJS += quote.o
LIB_OBJS += range-diff.o
@ -997,7 +985,6 @@ LIB_OBJS += setup.o
LIB_OBJS += shallow.o
LIB_OBJS += sideband.o
LIB_OBJS += sigchain.o
LIB_OBJS += sparse-index.o
LIB_OBJS += split-index.o
LIB_OBJS += stable-qsort.o
LIB_OBJS += strbuf.o
@ -1066,7 +1053,6 @@ BUILTIN_OBJS += builtin/check-attr.o
BUILTIN_OBJS += builtin/check-ignore.o
BUILTIN_OBJS += builtin/check-mailmap.o
BUILTIN_OBJS += builtin/check-ref-format.o
BUILTIN_OBJS += builtin/checkout--worker.o
BUILTIN_OBJS += builtin/checkout-index.o
BUILTIN_OBJS += builtin/checkout.o
BUILTIN_OBJS += builtin/clean.o
@ -1686,14 +1672,6 @@ ifdef NO_UNIX_SOCKETS
BASIC_CFLAGS += -DNO_UNIX_SOCKETS
else
LIB_OBJS += unix-socket.o
LIB_OBJS += unix-stream-server.o
LIB_OBJS += compat/simple-ipc/ipc-shared.o
LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o
endif
ifdef USE_WIN32_IPC
LIB_OBJS += compat/simple-ipc/ipc-shared.o
LIB_OBJS += compat/simple-ipc/ipc-win32.o
endif
ifdef NO_ICONV
@ -2208,13 +2186,13 @@ $(BUILT_INS): git$X
config-list.h: generate-configlist.sh
config-list.h: Documentation/*config.txt Documentation/config/*.txt
config-list.h:
$(QUIET_GEN)$(SHELL_PATH) ./generate-configlist.sh \
>$@+ && mv $@+ $@
command-list.h: generate-cmdlist.sh command-list.txt
command-list.h: $(wildcard Documentation/git*.txt)
command-list.h: $(wildcard Documentation/git*.txt) Documentation/*config.txt Documentation/config/*.txt
$(QUIET_GEN)$(SHELL_PATH) ./generate-cmdlist.sh \
$(patsubst %,--exclude-program %,$(EXCLUDED_PROGRAMS)) \
command-list.txt >$@+ && mv $@+ $@
@ -2400,30 +2378,16 @@ XDIFF_OBJS += xdiff/xmerge.o
XDIFF_OBJS += xdiff/xpatience.o
XDIFF_OBJS += xdiff/xprepare.o
XDIFF_OBJS += xdiff/xutils.o
.PHONY: xdiff-objs
xdiff-objs: $(XDIFF_OBJS)
TEST_OBJS := $(patsubst %$X,%.o,$(TEST_PROGRAMS)) $(patsubst %,t/helper/%,$(TEST_BUILTINS_OBJS))
.PHONY: test-objs
test-objs: $(TEST_OBJS)
GIT_OBJS += $(LIB_OBJS)
GIT_OBJS += $(BUILTIN_OBJS)
GIT_OBJS += common-main.o
GIT_OBJS += git.o
.PHONY: git-objs
git-objs: $(GIT_OBJS)
OBJECTS += $(GIT_OBJS)
OBJECTS += $(PROGRAM_OBJS)
OBJECTS += $(TEST_OBJS)
OBJECTS += $(XDIFF_OBJS)
OBJECTS += $(FUZZ_OBJS)
OBJECTS := $(LIB_OBJS) $(BUILTIN_OBJS) $(PROGRAM_OBJS) $(TEST_OBJS) \
$(XDIFF_OBJS) \
$(FUZZ_OBJS) \
common-main.o \
git.o
ifndef NO_CURL
OBJECTS += http.o http-walker.o remote-curl.o
endif
.PHONY: objects
objects: $(OBJECTS)
dep_files := $(foreach f,$(OBJECTS),$(dir $f).depend/$(notdir $f).d)
dep_dirs := $(addsuffix .depend,$(sort $(dir $(OBJECTS))))
@ -2705,14 +2669,12 @@ FIND_SOURCE_FILES = ( \
)
$(ETAGS_TARGET): FORCE
$(QUIET_GEN)$(RM) "$(ETAGS_TARGET)+" && \
$(FIND_SOURCE_FILES) | xargs etags -a -o "$(ETAGS_TARGET)+" && \
mv "$(ETAGS_TARGET)+" "$(ETAGS_TARGET)"
$(RM) $(ETAGS_TARGET)
$(FIND_SOURCE_FILES) | xargs etags -a -o $(ETAGS_TARGET)
tags: FORCE
$(QUIET_GEN)$(RM) tags+ && \
$(FIND_SOURCE_FILES) | xargs ctags -a -o tags+ && \
mv tags+ tags
$(RM) tags
$(FIND_SOURCE_FILES) | xargs ctags -a
cscope:
$(RM) cscope*

View File

@ -1 +1 @@
Documentation/RelNotes/2.32.0.txt
Documentation/RelNotes/2.31.4.txt

View File

@ -1,51 +0,0 @@
# Security Policy
## Reporting a vulnerability
Please send a detailed mail to git-security@googlegroups.com to
report vulnerabilities in Git.
Even when unsure whether the bug in question is an exploitable
vulnerability, it is recommended to send the report to
git-security@googlegroups.com (and obviously not to discuss the
issue anywhere else).
Vulnerabilities are expected to be discussed _only_ on that
list, and not in public, until the official announcement on the
Git mailing list on the release date.
Examples for details to include:
- Ideally a short description (or a script) to demonstrate an
exploit.
- The affected platforms and scenarios (the vulnerability might
only affect setups with case-sensitive file systems, for
example).
- The name and affiliation of the security researchers who are
involved in the discovery, if any.
- Whether the vulnerability has already been disclosed.
- How long an embargo would be required to be safe.
## Supported Versions
There are no official "Long Term Support" versions in Git.
Instead, the maintenance track (i.e. the versions based on the
most recently published feature release, also known as ".0"
version) sees occasional updates with bug fixes.
Fixes to vulnerabilities are made for the maintenance track for
the latest feature release and merged up to the in-development
branches. The Git project makes no formal guarantee for any
older maintenance tracks to receive updates. In practice,
though, critical vulnerability fixes are applied not only to the
most recent track, but to at least a couple more maintenance
tracks.
This is typically done by making the fix on the oldest and still
relevant maintenance track, and merging it upwards to newer and
newer maintenance tracks.
For example, v2.24.1 was released to address a couple of
[CVEs](https://cve.mitre.org/), and at the same time v2.14.6,
v2.15.4, v2.16.6, v2.17.3, v2.18.2, v2.19.3, v2.20.2, v2.21.1,
v2.22.2 and v2.23.1 were released.

View File

@ -2,7 +2,6 @@
#include "config.h"
#include "color.h"
#include "help.h"
#include "string-list.h"
int advice_fetch_show_forced_updates = 1;
int advice_push_update_rejected = 1;
@ -137,7 +136,6 @@ static struct {
[ADVICE_STATUS_HINTS] = { "statusHints", 1 },
[ADVICE_STATUS_U_OPTION] = { "statusUoption", 1 },
[ADVICE_SUBMODULE_ALTERNATE_ERROR_STRATEGY_DIE] = { "submoduleAlternateErrorStrategyDie", 1 },
[ADVICE_UPDATE_SPARSE_PATH] = { "updateSparsePath", 1 },
[ADVICE_WAITING_FOR_EDITOR] = { "waitingForEditor", 1 },
};
@ -286,24 +284,6 @@ void NORETURN die_conclude_merge(void)
die(_("Exiting because of unfinished merge."));
}
void advise_on_updating_sparse_paths(struct string_list *pathspec_list)
{
struct string_list_item *item;
if (!pathspec_list->nr)
return;
fprintf(stderr, _("The following pathspecs didn't match any"
" eligible path, but they do match index\n"
"entries outside the current sparse checkout:\n"));
for_each_string_list_item(item, pathspec_list)
fprintf(stderr, "%s\n", item->string);
advise_if_enabled(ADVICE_UPDATE_SPARSE_PATH,
_("Disable or modify the sparsity rules if you intend"
" to update such entries."));
}
void detach_advice(const char *new_name)
{
const char *fmt =

View File

@ -3,8 +3,6 @@
#include "git-compat-util.h"
struct string_list;
extern int advice_fetch_show_forced_updates;
extern int advice_push_update_rejected;
extern int advice_push_non_ff_current;
@ -73,7 +71,6 @@ extern int advice_add_empty_pathspec;
ADVICE_STATUS_HINTS,
ADVICE_STATUS_U_OPTION,
ADVICE_SUBMODULE_ALTERNATE_ERROR_STRATEGY_DIE,
ADVICE_UPDATE_SPARSE_PATH,
ADVICE_WAITING_FOR_EDITOR,
};
@ -95,7 +92,6 @@ void advise_if_enabled(enum advice_type type, const char *advice, ...);
int error_resolve_conflict(const char *me);
void NORETURN die_resolve_conflict(const char *me);
void NORETURN die_conclude_merge(void);
void advise_on_updating_sparse_paths(struct string_list *pathspec_list);
void detach_advice(const char *new_name);
#endif /* ADVICE_H */

29
apply.c
View File

@ -21,7 +21,6 @@
#include "quote.h"
#include "rerere.h"
#include "apply.h"
#include "entry.h"
struct gitdiff_data {
struct strbuf *root;
@ -134,6 +133,8 @@ int check_apply_state(struct apply_state *state, int force_apply)
if (state->apply_with_reject && state->threeway)
return error(_("--reject and --3way cannot be used together."));
if (state->cached && state->threeway)
return error(_("--cached and --3way cannot be used together."));
if (state->threeway) {
if (is_not_gitdir)
return error(_("--3way outside a repository"));
@ -3568,10 +3569,10 @@ static int try_threeway(struct apply_state *state,
write_object_file("", 0, blob_type, &pre_oid);
else if (get_oid(patch->old_oid_prefix, &pre_oid) ||
read_blob_object(&buf, &pre_oid, patch->old_mode))
return error(_("repository lacks the necessary blob to perform 3-way merge."));
return error(_("repository lacks the necessary blob to fall back on 3-way merge."));
if (state->apply_verbosity > verbosity_silent && patch->direct_to_threeway)
fprintf(stderr, _("Performing three-way merge...\n"));
if (state->apply_verbosity > verbosity_silent)
fprintf(stderr, _("Falling back to three-way merge...\n"));
img = strbuf_detach(&buf, &len);
prepare_image(&tmp_image, img, len, 1);
@ -3603,7 +3604,7 @@ static int try_threeway(struct apply_state *state,
if (status < 0) {
if (state->apply_verbosity > verbosity_silent)
fprintf(stderr,
_("Failed to perform three-way merge...\n"));
_("Failed to fall back on three-way merge...\n"));
return status;
}
@ -3636,13 +3637,10 @@ static int apply_data(struct apply_state *state, struct patch *patch,
if (load_preimage(state, &image, patch, st, ce) < 0)
return -1;
if (!state->threeway || try_threeway(state, &image, patch, st, ce) < 0) {
if (state->apply_verbosity > verbosity_silent &&
state->threeway && !patch->direct_to_threeway)
fprintf(stderr, _("Falling back to direct application...\n"));
if (patch->direct_to_threeway ||
apply_fragments(state, &image, patch) < 0) {
/* Note: with --reject, apply_fragments() returns 0 */
if (patch->direct_to_threeway || apply_fragments(state, &image, patch) < 0)
if (!state->threeway || try_threeway(state, &image, patch, st, ce) < 0)
return -1;
}
patch->result = image.buf;
@ -4648,12 +4646,7 @@ static int write_out_results(struct apply_state *state, struct patch *list)
}
string_list_clear(&cpath, 0);
/*
* rerere relies on the partially merged result being in the working
* tree with conflict markers, but that isn't written with --cached.
*/
if (!state->cached)
repo_rerere(state->repo, 0);
repo_rerere(state->repo, 0);
}
return errs;
@ -5024,7 +5017,7 @@ int apply_parse_options(int argc, const char **argv,
OPT_BOOL(0, "apply", force_apply,
N_("also apply the patch (use with --stat/--summary/--check)")),
OPT_BOOL('3', "3way", &state->threeway,
N_( "attempt three-way merge, fall back on normal patch if that fails")),
N_( "attempt three-way merge if a patch does not apply")),
OPT_FILENAME(0, "build-fake-ancestor", &state->fake_ancestor,
N_("build a temporary index based on embedded index information")),
/* Think twice before adding "--nul" synonym to this */

View File

@ -37,10 +37,13 @@ void init_archivers(void)
static void format_subst(const struct commit *commit,
const char *src, size_t len,
struct strbuf *buf, struct pretty_print_context *ctx)
struct strbuf *buf)
{
char *to_free = NULL;
struct strbuf fmt = STRBUF_INIT;
struct pretty_print_context ctx = {0};
ctx.date_mode.type = DATE_NORMAL;
ctx.abbrev = DEFAULT_ABBREV;
if (src == buf->buf)
to_free = strbuf_detach(buf, NULL);
@ -58,7 +61,7 @@ static void format_subst(const struct commit *commit,
strbuf_add(&fmt, b + 8, c - b - 8);
strbuf_add(buf, src, b - src);
format_commit_message(commit, fmt.buf, buf, ctx);
format_commit_message(commit, fmt.buf, buf, &ctx);
len -= c + 1 - src;
src = c + 1;
}
@ -91,7 +94,7 @@ static void *object_file_to_archive(const struct archiver_args *args,
strbuf_attach(&buf, buffer, *sizep, *sizep + 1);
convert_to_working_tree(args->repo->index, path, buf.buf, buf.len, &buf, &meta);
if (commit)
format_subst(commit, buf.buf, buf.len, &buf, args->pretty_ctx);
format_subst(commit, buf.buf, buf.len, &buf);
buffer = strbuf_detach(&buf, &size);
*sizep = size;
}
@ -104,6 +107,7 @@ struct directory {
struct object_id oid;
int baselen, len;
unsigned mode;
int stage;
char path[FLEX_ARRAY];
};
@ -134,7 +138,7 @@ static int check_attr_export_subst(const struct attr_check *check)
}
static int write_archive_entry(const struct object_id *oid, const char *base,
int baselen, const char *filename, unsigned mode,
int baselen, const char *filename, unsigned mode, int stage,
void *context)
{
static struct strbuf path = STRBUF_INIT;
@ -193,7 +197,7 @@ static int write_archive_entry(const struct object_id *oid, const char *base,
static void queue_directory(const unsigned char *sha1,
struct strbuf *base, const char *filename,
unsigned mode, struct archiver_context *c)
unsigned mode, int stage, struct archiver_context *c)
{
struct directory *d;
size_t len = st_add4(base->len, 1, strlen(filename), 1);
@ -201,9 +205,10 @@ static void queue_directory(const unsigned char *sha1,
d->up = c->bottom;
d->baselen = base->len;
d->mode = mode;
d->stage = stage;
c->bottom = d;
d->len = xsnprintf(d->path, len, "%.*s%s/", (int)base->len, base->buf, filename);
oidread(&d->oid, sha1);
hashcpy(d->oid.hash, sha1);
}
static int write_directory(struct archiver_context *c)
@ -219,14 +224,14 @@ static int write_directory(struct archiver_context *c)
write_directory(c) ||
write_archive_entry(&d->oid, d->path, d->baselen,
d->path + d->baselen, d->mode,
c) != READ_TREE_RECURSIVE;
d->stage, c) != READ_TREE_RECURSIVE;
free(d);
return ret ? -1 : 0;
}
static int queue_or_write_archive_entry(const struct object_id *oid,
struct strbuf *base, const char *filename,
unsigned mode, void *context)
unsigned mode, int stage, void *context)
{
struct archiver_context *c = context;
@ -251,14 +256,14 @@ static int queue_or_write_archive_entry(const struct object_id *oid,
if (check_attr_export_ignore(check))
return 0;
queue_directory(oid->hash, base, filename,
mode, c);
mode, stage, c);
return READ_TREE_RECURSIVE;
}
if (write_directory(c))
return -1;
return write_archive_entry(oid, base->buf, base->len, filename, mode,
context);
stage, context);
}
struct extra_file_info {
@ -275,11 +280,9 @@ int write_archive_entries(struct archiver_args *args,
int err;
struct strbuf path_in_archive = STRBUF_INIT;
struct strbuf content = STRBUF_INIT;
struct object_id fake_oid;
struct object_id fake_oid = null_oid;
int i;
oidcpy(&fake_oid, null_oid());
if (args->baselen > 0 && args->base[args->baselen - 1] == '/') {
size_t len = args->baselen;
@ -313,10 +316,10 @@ int write_archive_entries(struct archiver_args *args,
git_attr_set_direction(GIT_ATTR_INDEX);
}
err = read_tree(args->repo, args->tree,
&args->pathspec,
queue_or_write_archive_entry,
&context);
err = read_tree_recursive(args->repo, args->tree, "",
0, 0, &args->pathspec,
queue_or_write_archive_entry,
&context);
if (err == READ_TREE_RECURSIVE)
err = 0;
while (context.bottom) {
@ -375,7 +378,7 @@ struct path_exists_context {
static int reject_entry(const struct object_id *oid, struct strbuf *base,
const char *filename, unsigned mode,
void *context)
int stage, void *context)
{
int ret = -1;
struct path_exists_context *ctx = context;
@ -402,9 +405,9 @@ static int path_exists(struct archiver_args *args, const char *path)
ctx.args = args;
parse_pathspec(&ctx.pathspec, 0, 0, "", paths);
ctx.pathspec.recursive = 1;
ret = read_tree(args->repo, args->tree,
&ctx.pathspec,
reject_entry, &ctx);
ret = read_tree_recursive(args->repo, args->tree, "",
0, 0, &ctx.pathspec,
reject_entry, &ctx);
clear_pathspec(&ctx.pathspec);
return ret != 0;
}
@ -630,19 +633,12 @@ int write_archive(int argc, const char **argv, const char *prefix,
const char *name_hint, int remote)
{
const struct archiver *ar = NULL;
struct pretty_print_describe_status describe_status = {0};
struct pretty_print_context ctx = {0};
struct archiver_args args;
int rc;
git_config_get_bool("uploadarchive.allowunreachable", &remote_allow_unreachable);
git_config(git_default_config, NULL);
describe_status.max_invocations = 1;
ctx.date_mode.type = DATE_NORMAL;
ctx.abbrev = DEFAULT_ABBREV;
ctx.describe_status = &describe_status;
args.pretty_ctx = &ctx;
args.repo = repo;
args.prefix = prefix;
string_list_init(&args.extra_files, 1);

archive.h

@@ -5,7 +5,6 @@
#include "pathspec.h"
struct repository;
struct pretty_print_context;
struct archiver_args {
struct repository *repo;
@@ -23,7 +22,6 @@ struct archiver_args {
unsigned int convert : 1;
int compression_level;
struct string_list extra_files;
struct pretty_print_context *pretty_ctx;
};
/* main api */

attr.c

@@ -278,10 +278,6 @@ struct match_attr {
static const char blank[] = " \t\r\n";
/* Flags usable in read_attr() and parse_attr_line() family of functions. */
#define READ_ATTR_MACRO_OK (1<<0)
#define READ_ATTR_NOFOLLOW (1<<1)
/*
* Parse a whitespace-delimited attribute state (i.e., "attr",
* "-attr", "!attr", or "attr=value") from the string starting at src.
@@ -335,7 +331,7 @@ static const char *parse_attr(const char *src, int lineno, const char *cp,
}
static struct match_attr *parse_attr_line(const char *line, const char *src,
int lineno, unsigned flags)
int lineno, int macro_ok)
{
int namelen;
int num_attr, i;
@@ -359,7 +355,7 @@ static struct match_attr *parse_attr_line(const char *line, const char *src,
if (strlen(ATTRIBUTE_MACRO_PREFIX) < namelen &&
starts_with(name, ATTRIBUTE_MACRO_PREFIX)) {
if (!(flags & READ_ATTR_MACRO_OK)) {
if (!macro_ok) {
fprintf_ln(stderr, _("%s not allowed: %s:%d"),
name, src, lineno);
goto fail_return;
@@ -657,11 +653,11 @@ static void handle_attr_line(struct attr_stack *res,
const char *line,
const char *src,
int lineno,
unsigned flags)
int macro_ok)
{
struct match_attr *a;
a = parse_attr_line(line, src, lineno, flags);
a = parse_attr_line(line, src, lineno, macro_ok);
if (!a)
return;
ALLOC_GROW(res->attrs, res->num_matches + 1, res->alloc);
@@ -676,8 +672,7 @@ static struct attr_stack *read_attr_from_array(const char **list)
CALLOC_ARRAY(res, 1);
while ((line = *(list++)) != NULL)
handle_attr_line(res, line, "[builtin]", ++lineno,
READ_ATTR_MACRO_OK);
handle_attr_line(res, line, "[builtin]", ++lineno, 1);
return res;
}
@@ -703,39 +698,29 @@ void git_attr_set_direction(enum git_attr_direction new_direction)
direction = new_direction;
}
static struct attr_stack *read_attr_from_file(const char *path, unsigned flags)
static struct attr_stack *read_attr_from_file(const char *path, int macro_ok)
{
int fd;
FILE *fp;
FILE *fp = fopen_or_warn(path, "r");
struct attr_stack *res;
char buf[2048];
int lineno = 0;
if (flags & READ_ATTR_NOFOLLOW)
fd = open_nofollow(path, O_RDONLY);
else
fd = open(path, O_RDONLY);
if (fd < 0) {
warn_on_fopen_errors(path);
if (!fp)
return NULL;
}
fp = xfdopen(fd, "r");
CALLOC_ARRAY(res, 1);
while (fgets(buf, sizeof(buf), fp)) {
char *bufp = buf;
if (!lineno)
skip_utf8_bom(&bufp, strlen(bufp));
handle_attr_line(res, bufp, path, ++lineno, flags);
handle_attr_line(res, bufp, path, ++lineno, macro_ok);
}
fclose(fp);
return res;
}
static struct attr_stack *read_attr_from_index(struct index_state *istate,
static struct attr_stack *read_attr_from_index(const struct index_state *istate,
const char *path,
unsigned flags)
int macro_ok)
{
struct attr_stack *res;
char *buf, *sp;
@@ -756,27 +741,27 @@ static struct attr_stack *read_attr_from_index(struct index_state *istate,
ep = strchrnul(sp, '\n');
more = (*ep == '\n');
*ep = '\0';
handle_attr_line(res, sp, path, ++lineno, flags);
handle_attr_line(res, sp, path, ++lineno, macro_ok);
sp = ep + more;
}
free(buf);
return res;
}
static struct attr_stack *read_attr(struct index_state *istate,
const char *path, unsigned flags)
static struct attr_stack *read_attr(const struct index_state *istate,
const char *path, int macro_ok)
{
struct attr_stack *res = NULL;
if (direction == GIT_ATTR_INDEX) {
res = read_attr_from_index(istate, path, flags);
res = read_attr_from_index(istate, path, macro_ok);
} else if (!is_bare_repository()) {
if (direction == GIT_ATTR_CHECKOUT) {
res = read_attr_from_index(istate, path, flags);
res = read_attr_from_index(istate, path, macro_ok);
if (!res)
res = read_attr_from_file(path, flags);
res = read_attr_from_file(path, macro_ok);
} else if (direction == GIT_ATTR_CHECKIN) {
res = read_attr_from_file(path, flags);
res = read_attr_from_file(path, macro_ok);
if (!res)
/*
* There is no checked out .gitattributes file
@@ -784,7 +769,7 @@ static struct attr_stack *read_attr(struct index_state *istate,
* We allow operation in a sparsely checked out
* work tree, so read from it.
*/
res = read_attr_from_index(istate, path, flags);
res = read_attr_from_index(istate, path, macro_ok);
}
}
@@ -855,11 +840,10 @@ static void push_stack(struct attr_stack **attr_stack_p,
}
}
static void bootstrap_attr_stack(struct index_state *istate,
static void bootstrap_attr_stack(const struct index_state *istate,
struct attr_stack **stack)
{
struct attr_stack *e;
unsigned flags = READ_ATTR_MACRO_OK;
if (*stack)
return;
@@ -870,23 +854,23 @@ static void bootstrap_attr_stack(struct index_state *istate,
/* system-wide frame */
if (git_attr_system()) {
e = read_attr_from_file(git_etc_gitattributes(), flags);
e = read_attr_from_file(git_etc_gitattributes(), 1);
push_stack(stack, e, NULL, 0);
}
/* home directory */
if (get_home_gitattributes()) {
e = read_attr_from_file(get_home_gitattributes(), flags);
e = read_attr_from_file(get_home_gitattributes(), 1);
push_stack(stack, e, NULL, 0);
}
/* root directory */
e = read_attr(istate, GITATTRIBUTES_FILE, flags | READ_ATTR_NOFOLLOW);
e = read_attr(istate, GITATTRIBUTES_FILE, 1);
push_stack(stack, e, xstrdup(""), 0);
/* info frame */
if (startup_info->have_repository)
e = read_attr_from_file(git_path_info_attributes(), flags);
e = read_attr_from_file(git_path_info_attributes(), 1);
else
e = NULL;
if (!e)
@@ -894,7 +878,7 @@ static void bootstrap_attr_stack(struct index_state *istate,
push_stack(stack, e, NULL, 0);
}
static void prepare_attr_stack(struct index_state *istate,
static void prepare_attr_stack(const struct index_state *istate,
const char *path, int dirlen,
struct attr_stack **stack)
{
@@ -972,7 +956,7 @@ static void prepare_attr_stack(struct index_state *istate,
strbuf_add(&pathbuf, path + pathbuf.len, (len - pathbuf.len));
strbuf_addf(&pathbuf, "/%s", GITATTRIBUTES_FILE);
next = read_attr(istate, pathbuf.buf, READ_ATTR_NOFOLLOW);
next = read_attr(istate, pathbuf.buf, 0);
/* reset the pathbuf to not include "/.gitattributes" */
strbuf_setlen(&pathbuf, len);
@@ -1094,7 +1078,7 @@ static void determine_macros(struct all_attrs_item *all_attrs,
* If check->check_nr is non-zero, only attributes in check[] are collected.
* Otherwise all attributes are collected.
*/
static void collect_some_attrs(struct index_state *istate,
static void collect_some_attrs(const struct index_state *istate,
const char *path,
struct attr_check *check)
{
@@ -1123,7 +1107,7 @@ static void collect_some_attrs(struct index_state *istate,
fill(path, pathlen, basename_offset, check->stack, check->all_attrs, rem);
}
void git_check_attr(struct index_state *istate,
void git_check_attr(const struct index_state *istate,
const char *path,
struct attr_check *check)
{
@@ -1140,7 +1124,7 @@ void git_check_attr(struct index_state *istate,
}
}
void git_all_attrs(struct index_state *istate,
void git_all_attrs(const struct index_state *istate,
const char *path, struct attr_check *check)
{
int i;
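One side of the attr.c changes above threads an unsigned flags word (READ_ATTR_MACRO_OK, READ_ATTR_NOFOLLOW) through the readers and opens in-tree .gitattributes files with open_nofollow(); the other passes a plain macro_ok int and opens with fopen_or_warn(). As a sketch of the flag-driven open shown in the read_attr_from_file() hunk (open_attr_file is a hypothetical helper name; open_nofollow(), warn_on_fopen_errors() and xfdopen() are the helpers that appear above):

    /* Illustrative sketch only, assuming git-compat-util.h. */
    static FILE *open_attr_file(const char *path, unsigned flags)
    {
            int fd;

            if (flags & READ_ATTR_NOFOLLOW)
                    fd = open_nofollow(path, O_RDONLY); /* refuse a symlinked .gitattributes */
            else
                    fd = open(path, O_RDONLY);
            if (fd < 0) {
                    warn_on_fopen_errors(path);
                    return NULL;
            }
            return xfdopen(fd, "r");
    }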

attr.h

@@ -190,14 +190,14 @@ void attr_check_free(struct attr_check *check);
*/
const char *git_attr_name(const struct git_attr *);
void git_check_attr(struct index_state *istate,
void git_check_attr(const struct index_state *istate,
const char *path, struct attr_check *check);
/*
* Retrieve all attributes that apply to the specified path.
* check holds the attributes and their values.
*/
void git_all_attrs(struct index_state *istate,
void git_all_attrs(const struct index_state *istate,
const char *path, struct attr_check *check);
enum git_attr_direction {
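The attr.h hunk above changes only whether the index_state argument is const-qualified; the calling convention is the same on both sides. A minimal, illustrative use of the API, assuming the attr_check_initl()/attr_check_free() helpers declared elsewhere in attr.h (istate and handle_text_file() are placeholders, not from this patch):

    /* Sketch: ask whether the "text" attribute is set for one path. */
    struct attr_check *check = attr_check_initl("text", NULL);

    git_check_attr(istate, "src/main.c", check);
    if (ATTR_TRUE(check->items[0].value))
            handle_text_file();
    attr_check_free(check);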

blame.c

@@ -242,7 +242,7 @@ static struct commit *fake_working_tree_commit(struct repository *r,
switch (st.st_mode & S_IFMT) {
case S_IFREG:
if (opt->flags.allow_textconv &&
textconv_object(r, read_from, mode, null_oid(), 0, &buf_ptr, &buf_len))
textconv_object(r, read_from, mode, &null_oid, 0, &buf_ptr, &buf_len))
strbuf_attach(&buf, buf_ptr, buf_len, buf_len + 1);
else if (strbuf_read_file(&buf, read_from, st.st_size) != st.st_size)
die_errno("cannot open or read '%s'", read_from);
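The blame.c hunk, like several others in this diff, differs only in how the all-zero object ID is spelled: one side calls the null_oid() accessor, the other takes the address of the null_oid global. Which spelling compiles depends on which side of the diff you are on; as a sketch, assuming cache.h:

    struct object_id oid;

    oidcpy(&oid, null_oid());    /* accessor style, one side of the hunk */
    /* oidcpy(&oid, &null_oid);     global-variable style, the other side */
    /* Either way, is_null_oid(&oid) is true afterwards. */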

bloom.c

@@ -283,7 +283,6 @@ struct bloom_filter *get_or_compute_bloom_filter(struct repository *r,
struct bloom_key key;
fill_bloom_key(e->path, strlen(e->path), &key, settings);
add_key_to_filter(&key, filter, settings);
clear_bloom_key(&key);
}
cleanup:
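The two sides of the bloom.c hunk differ by a single clear_bloom_key() call: fill_bloom_key() allocates per-key hash data, so releasing it after add_key_to_filter() matters when many paths are hashed. A sketch of the paired calls shown above (path, filter and settings are assumed to come from the surrounding function):

    struct bloom_key key;

    fill_bloom_key(path, strlen(path), &key, settings); /* allocates key.hashes */
    add_key_to_filter(&key, filter, settings);
    clear_bloom_key(&key);                              /* free the per-key allocation */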

branch.c

@@ -294,7 +294,7 @@ void create_branch(struct repository *r,
if (explicit_tracking)
die(_(upstream_not_branch), start_name);
else
FREE_AND_NULL(real_ref);
real_ref = NULL;
}
break;
default:
@@ -322,7 +322,7 @@ void create_branch(struct repository *r,
transaction = ref_transaction_begin(&err);
if (!transaction ||
ref_transaction_update(transaction, ref.buf,
&oid, forcing ? NULL : null_oid(),
&oid, forcing ? NULL : &null_oid,
0, msg, &err) ||
ref_transaction_commit(transaction, &err))
die("%s", err.buf);
@@ -344,7 +344,6 @@ void remove_merge_branch_state(struct repository *r)
unlink(git_path_merge_rr(r));
unlink(git_path_merge_msg(r));
unlink(git_path_merge_mode(r));
unlink(git_path_auto_merge(r));
save_autostash(git_path_merge_autostash(r));
}
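The second create_branch() hunk above uses the usual ref-transaction sequence: begin, queue an update with an expected old value (the null OID unless forcing), then commit. A rough stand-alone sketch of that pattern, assuming refs.h (the ref name, new_oid and reflog message are invented for illustration, and the null-OID spelling again depends on the side of the diff):

    struct strbuf err = STRBUF_INIT;
    struct ref_transaction *t = ref_transaction_begin(&err);

    if (!t ||
        ref_transaction_update(t, "refs/heads/topic", &new_oid,
                               null_oid() /* expect the ref not to exist */,
                               0, "branch: Created from HEAD", &err) ||
        ref_transaction_commit(t, &err))
            die("%s", err.buf);
    ref_transaction_free(t);
    strbuf_release(&err);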

builtin.h

@@ -123,7 +123,6 @@ int cmd_bugreport(int argc, const char **argv, const char *prefix);
int cmd_bundle(int argc, const char **argv, const char *prefix);
int cmd_cat_file(int argc, const char **argv, const char *prefix);
int cmd_checkout(int argc, const char **argv, const char *prefix);
int cmd_checkout__worker(int argc, const char **argv, const char *prefix);
int cmd_checkout_index(int argc, const char **argv, const char *prefix);
int cmd_check_attr(int argc, const char **argv, const char *prefix);
int cmd_check_ignore(int argc, const char **argv, const char *prefix);

builtin/add.c

@@ -46,9 +46,6 @@ static int chmod_pathspec(struct pathspec *pathspec, char flip, int show_only)
struct cache_entry *ce = active_cache[i];
int err;
if (ce_skip_worktree(ce))
continue;
if (pathspec && !ce_path_match(&the_index, ce, pathspec, NULL))
continue;
@@ -144,13 +141,9 @@ static int renormalize_tracked_files(const struct pathspec *pathspec, int flags)
{
int i, retval = 0;
/* TODO: audit for interaction with sparse-index. */
ensure_full_index(&the_index);
for (i = 0; i < active_nr; i++) {
struct cache_entry *ce = active_cache[i];
if (ce_skip_worktree(ce))
continue;
if (ce_stage(ce))
continue; /* do not touch unmerged paths */
if (!S_ISREG(ce->ce_mode) && !S_ISLNK(ce->ce_mode))
@@ -179,44 +172,24 @@ static char *prune_directory(struct dir_struct *dir, struct pathspec *pathspec,
*dst++ = entry;
}
dir->nr = dst - dir->entries;
add_pathspec_matches_against_index(pathspec, &the_index, seen,
PS_IGNORE_SKIP_WORKTREE);
add_pathspec_matches_against_index(pathspec, &the_index, seen);
return seen;
}
static int refresh(int verbose, const struct pathspec *pathspec)
static void refresh(int verbose, const struct pathspec *pathspec)
{
char *seen;
int i, ret = 0;
char *skip_worktree_seen = NULL;
struct string_list only_match_skip_worktree = STRING_LIST_INIT_NODUP;
int flags = REFRESH_IGNORE_SKIP_WORKTREE |
(verbose ? REFRESH_IN_PORCELAIN : REFRESH_QUIET);
int i;
seen = xcalloc(pathspec->nr, 1);
refresh_index(&the_index, flags, pathspec, seen,
_("Unstaged changes after refreshing the index:"));
refresh_index(&the_index, verbose ? REFRESH_IN_PORCELAIN : REFRESH_QUIET,
pathspec, seen, _("Unstaged changes after refreshing the index:"));
for (i = 0; i < pathspec->nr; i++) {
if (!seen[i]) {
if (matches_skip_worktree(pathspec, i, &skip_worktree_seen)) {
string_list_append(&only_match_skip_worktree,
pathspec->items[i].original);
} else {
die(_("pathspec '%s' did not match any files"),
pathspec->items[i].original);
}
}
if (!seen[i])
die(_("pathspec '%s' did not match any files"),
pathspec->items[i].match);
}
if (only_match_skip_worktree.nr) {
advise_on_updating_sparse_paths(&only_match_skip_worktree);
ret = 1;
}
free(seen);
free(skip_worktree_seen);
string_list_clear(&only_match_skip_worktree, 0);
return ret;
}
int run_add_interactive(const char *revision, const char *patch_mode,
@@ -484,8 +457,6 @@ int cmd_add(int argc, const char **argv, const char *prefix)
if (patch_interactive)
add_interactive = 1;
if (add_interactive) {
if (show_only)
die(_("--dry-run is incompatible with --interactive/--patch"));
if (pathspec_from_file)
die(_("--pathspec-from-file is incompatible with --interactive/--patch"));
exit(interactive_add(argv + 1, prefix, patch_interactive));
@@ -594,18 +565,15 @@ int cmd_add(int argc, const char **argv, const char *prefix)
}
if (refresh_only) {
exit_status |= refresh(verbose, &pathspec);
refresh(verbose, &pathspec);
goto finish;
}
if (pathspec.nr) {
int i;
char *skip_worktree_seen = NULL;
struct string_list only_match_skip_worktree = STRING_LIST_INIT_NODUP;
if (!seen)
seen = find_pathspecs_matching_against_index(&pathspec,
&the_index, PS_IGNORE_SKIP_WORKTREE);
seen = find_pathspecs_matching_against_index(&pathspec, &the_index);
/*
* file_exists() assumes exact match
@@ -619,24 +587,12 @@ int cmd_add(int argc, const char **argv, const char *prefix)
for (i = 0; i < pathspec.nr; i++) {
const char *path = pathspec.items[i].match;
if (pathspec.items[i].magic & PATHSPEC_EXCLUDE)
continue;
if (seen[i])
continue;
if (matches_skip_worktree(&pathspec, i, &skip_worktree_seen)) {
string_list_append(&only_match_skip_worktree,
pathspec.items[i].original);
continue;
}
/* Don't complain at 'git add .' on empty repo */
if (!path[0])
continue;
if ((pathspec.items[i].magic & (PATHSPEC_GLOB | PATHSPEC_ICASE)) ||
!file_exists(path)) {
if (!seen[i] && path[0] &&
((pathspec.items[i].magic &
(PATHSPEC_GLOB | PATHSPEC_ICASE)) ||
!file_exists(path))) {
if (ignore_missing) {
int dtype = DT_UNKNOWN;
if (is_excluded(&dir, &the_index, path, &dtype))
@@ -647,16 +603,7 @@ int cmd_add(int argc, const char **argv, const char *prefix)
pathspec.items[i].original);
}
}
if (only_match_skip_worktree.nr) {
advise_on_updating_sparse_paths(&only_match_skip_worktree);
exit_status = 1;
}
free(seen);
free(skip_worktree_seen);
string_list_clear(&only_match_skip_worktree, 0);
}
plug_bulk_checkin();
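In the refresh() hunk above, one side passes REFRESH_IGNORE_SKIP_WORKTREE to refresh_index() and then routes pathspecs that only matched skip-worktree entries to advise_on_updating_sparse_paths(), while the other dies on any pathspec that matched nothing. A minimal sketch of the flagged call, with the same names the hunk uses:

    /* Illustrative only; pathspec and verbose as in the hunk above. */
    char *seen = xcalloc(pathspec->nr, 1);
    int flags = REFRESH_IGNORE_SKIP_WORKTREE |
                (verbose ? REFRESH_IN_PORCELAIN : REFRESH_QUIET);

    refresh_index(&the_index, flags, pathspec, seen,
                  _("Unstaged changes after refreshing the index:"));
    free(seen);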

builtin/am.c

@@ -116,7 +116,6 @@ struct am_state {
int keep; /* enum keep_type */
int message_id;
int scissors; /* enum scissors_type */
int quoted_cr; /* enum quoted_cr_action */
struct strvec git_apply_opts;
const char *resolvemsg;
int committer_date_is_author_date;
@@ -146,7 +145,6 @@ static void am_state_init(struct am_state *state)
git_config_get_bool("am.messageid", &state->message_id);
state->scissors = SCISSORS_UNSET;
state->quoted_cr = quoted_cr_unset;
strvec_init(&state->git_apply_opts);
@@ -167,16 +165,6 @@ static void am_state_release(struct am_state *state)
strvec_clear(&state->git_apply_opts);
}
static int am_option_parse_quoted_cr(const struct option *opt,
const char *arg, int unset)
{
BUG_ON_OPT_NEG(unset);
if (mailinfo_parse_quoted_cr_action(arg, opt->value) != 0)
return error(_("bad action '%s' for '%s'"), arg, "--quoted-cr");
return 0;
}
/**
* Returns path relative to the am_state directory.
*/
@@ -409,12 +397,6 @@ static void am_load(struct am_state *state)
else
state->scissors = SCISSORS_UNSET;
read_state_file(&sb, state, "quoted-cr", 1);
if (!*sb.buf)
state->quoted_cr = quoted_cr_unset;
else if (mailinfo_parse_quoted_cr_action(sb.buf, &state->quoted_cr) != 0)
die(_("could not parse %s"), am_path(state, "quoted-cr"));
read_state_file(&sb, state, "apply-opt", 1);
strvec_clear(&state->git_apply_opts);
if (sq_dequote_to_strvec(sb.buf, &state->git_apply_opts) < 0)
@@ -1020,24 +1002,6 @@ static void am_setup(struct am_state *state, enum patch_format patch_format,
}
write_state_text(state, "scissors", str);
switch (state->quoted_cr) {
case quoted_cr_unset:
str = "";
break;
case quoted_cr_nowarn:
str = "nowarn";
break;
case quoted_cr_warn:
str = "warn";
break;
case quoted_cr_strip:
str = "strip";
break;
default:
BUG("invalid value for state->quoted_cr");
}
write_state_text(state, "quoted-cr", str);
sq_quote_argv(&sb, state->git_apply_opts.v);
write_state_text(state, "apply-opt", sb.buf);
@@ -1198,18 +1162,6 @@ static int parse_mail(struct am_state *state, const char *mail)
BUG("invalid value for state->scissors");
}
switch (state->quoted_cr) {
case quoted_cr_unset:
break;
case quoted_cr_nowarn:
case quoted_cr_warn:
case quoted_cr_strip:
mi.quoted_cr = state->quoted_cr;
break;
default:
BUG("invalid value for state->quoted_cr");
}
mi.input = xfopen(mail, "r");
mi.output = xfopen(am_path(state, "info"), "w");
if (mailinfo(&mi, am_path(state, "msg"), am_path(state, "patch")))
@@ -2290,9 +2242,6 @@ int cmd_am(int argc, const char **argv, const char *prefix)
0, PARSE_OPT_NONEG),
OPT_BOOL('c', "scissors", &state.scissors,
N_("strip everything before a scissors line")),
OPT_CALLBACK_F(0, "quoted-cr", &state.quoted_cr, N_("action"),
N_("pass it through git-mailinfo"),
PARSE_OPT_NONEG, am_option_parse_quoted_cr),
OPT_PASSTHRU_ARGV(0, "whitespace", &state.git_apply_opts, N_("action"),
N_("pass it through git-apply"),
0),

builtin/bisect--helper.c

@@ -1126,7 +1126,6 @@ int cmd_bisect__helper(int argc, const char **argv, const char *prefix)
break;
case BISECT_SKIP:
set_terms(&terms, "bad", "good");
get_terms(&terms);
res = bisect_skip(&terms, argv, argc);
break;
default:

builtin/branch.c

@@ -411,8 +411,6 @@ static void print_ref_list(struct ref_filter *filter, struct ref_sorting *sortin
{
int i;
struct ref_array array;
struct strbuf out = STRBUF_INIT;
struct strbuf err = STRBUF_INIT;
int maxwidth = 0;
const char *remote_prefix = "";
char *to_free = NULL;
@@ -442,8 +440,8 @@ static void print_ref_list(struct ref_filter *filter, struct ref_sorting *sortin
ref_array_sort(sorting, &array);
for (i = 0; i < array.nr; i++) {
strbuf_reset(&err);
strbuf_reset(&out);
struct strbuf out = STRBUF_INIT;
struct strbuf err = STRBUF_INIT;
if (format_ref_array_item(array.items[i], format, &out, &err))
die("%s", err.buf);
if (column_active(colopts)) {
@@ -454,10 +452,10 @@ static void print_ref_list(struct ref_filter *filter, struct ref_sorting *sortin
fwrite(out.buf, 1, out.len, stdout);
putchar('\n');
}
strbuf_release(&err);
strbuf_release(&out);
}
strbuf_release(&err);
strbuf_release(&out);
ref_array_clear(&array);
free(to_free);
}
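The print_ref_list() hunks above differ in strbuf lifetime: one side declares out and err once, resets them at the top of each loop iteration, and releases them after the loop; the other allocates and releases a fresh pair every iteration. The reuse pattern, as a small generic sketch assuming strbuf.h (the loop body is invented):

    struct strbuf out = STRBUF_INIT;
    int i;

    for (i = 0; i < n; i++) {
            strbuf_reset(&out);              /* keep the allocation, drop the contents */
            strbuf_addf(&out, "item %d", i);
            puts(out.buf);
    }
    strbuf_release(&out);                    /* free once, after the loop */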

builtin/bugreport.c

@@ -129,7 +129,6 @@ int cmd_bugreport(int argc, const char **argv, const char *prefix)
char *option_output = NULL;
char *option_suffix = "%Y-%m-%d-%H%M";
const char *user_relative_path = NULL;
char *prefixed_filename;
const struct option bugreport_options[] = {
OPT_STRING('o', "output-directory", &option_output, N_("path"),
@@ -143,9 +142,9 @@ int cmd_bugreport(int argc, const char **argv, const char *prefix)
bugreport_usage, 0);
/* Prepare the path to put the result */
prefixed_filename = prefix_filename(prefix,
option_output ? option_output : "");
strbuf_addstr(&report_path, prefixed_filename);
strbuf_addstr(&report_path,
prefix_filename(prefix,
option_output ? option_output : ""));
strbuf_complete(&report_path, '/');
strbuf_addstr(&report_path, "git-bugreport-");
@@ -190,7 +189,6 @@ int cmd_bugreport(int argc, const char **argv, const char *prefix)
fprintf(stderr, _("Created new report at '%s'.\n"),
user_relative_path);
free(prefixed_filename);
UNLEAK(buffer);
UNLEAK(report_path);
return !!launch_editor(report_path.buf, NULL, NULL);
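The bugreport hunks differ in whether the string returned by prefix_filename() is kept so it can be freed, or passed straight into strbuf_addstr() and leaked. The kept-and-freed form, sketched with the names used above:

    char *prefixed_filename = prefix_filename(prefix,
                                              option_output ? option_output : "");

    strbuf_addstr(&report_path, prefixed_filename);
    /* ... build the rest of the report path and write the report ... */
    free(prefixed_filename);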

Some files were not shown because too many files have changed in this diff.