Compare commits: v1.7.7-rc0...v1.7.6.4 (34 commits)
SHA1:

6320526415
a0b1cb60ab
85b3c75f4f
84b051462f
406c1c4dd4
be5acb3b63
503359f13a
40ffc49876
e622f41dcd
740a8fc224
8702fee617
c2d53586dd
e1fd529f2f
2f19a52c64
5d4fcd9ac0
fcfc2d5879
908bb1a9b7
b3038a5adb
eff7c32cfd
7baf32a829
3fc44a10f6
30962fb7fb
18322badc2
509d59705e
5a277f3ff7
b15b5b10a7
dff4b0ef30
385ceec1cb
f3738c1ce9
2f633f41d6
e6baf4a1ae
dbc92b072d
9b0ebc722c
13d6ec9133
@@ -6,7 +6,7 @@ MAN5_TXT=gitattributes.txt gitignore.txt gitmodules.txt githooks.txt \
 	gitrepository-layout.txt
 MAN7_TXT=gitcli.txt gittutorial.txt gittutorial-2.txt \
 	gitcvs-migration.txt gitcore-tutorial.txt gitglossary.txt \
-	gitdiffcore.txt gitnamespaces.txt gitrevisions.txt gitworkflows.txt
+	gitdiffcore.txt gitrevisions.txt gitworkflows.txt
 
 MAN_TXT = $(MAN1_TXT) $(MAN5_TXT) $(MAN7_TXT)
 MAN_XML=$(patsubst %.txt,%.xml,$(MAN_TXT))
Documentation/RelNotes/1.7.6.2.txt (new file, 8 lines)

@@ -0,0 +1,8 @@
+Git v1.7.6.2 Release Notes
+==========================
+
+Fixes since v1.7.6.1
+--------------------
+
+* v1.7.6.1 broke "git push --quiet"; it used to be a no-op against an old
+  version of Git running on the other end, but v1.7.6.1 made it abort.
Documentation/RelNotes/1.7.6.3.txt (new file, 24 lines)

@@ -0,0 +1,24 @@
+Git v1.7.6.3 Release Notes
+==========================
+
+Fixes since v1.7.6.2
+--------------------
+
+* "git -c var=value subcmd" misparsed the custom configuration when
+  value contained an equal sign.
+
+* "git fetch" had a major performance regression, wasting many
+  needless cycles in a repository with no submodules present.
+  This was especially bad when there were many refs.
+
+* "git reflog $refname" did not default to the "show" subcommand as
+  the documentation advertised the command to do.
+
+* "git reset" did not leave a meaningful log message in the reflog.
+
+* "git status --ignored" did not show ignored items when there are no
+  untracked items.
+
+* "git tag --contains $commit" was unnecessarily inefficient.
+
+Also contains minor fixes and documentation updates.
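The "git -c" fix above is easy to exercise from the command line; a quick sketch (the key `foo.bar` is made up, and any git at or above 1.7.6.3 behaves this way):

```shell
# Before the fix, a value containing '=' was misparsed; now the whole
# "baz=qux" survives as the value of the hypothetical key foo.bar.
git -c foo.bar=baz=qux config foo.bar
```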
Documentation/RelNotes/1.7.6.4.txt (new file, 32 lines)

@@ -0,0 +1,32 @@
+Git v1.7.6.4 Release Notes
+==========================
+
+Fixes since v1.7.6.3
+--------------------
+
+* The error reporting logic of "git am" when the command is fed a file
+  whose mail-storage format is unknown was fixed.
+
+* "git branch --set-upstream @{-1} foo" did not expand @{-1} correctly.
+
+* "git check-ref-format --print" used to parrot a candidate string that
+  began with a slash (e.g. /refs/heads/master) without stripping it; it now
+  strips the slash so the caller can append the result to "$GIT_DIR/".
+
+* "git clone" failed to clone locally from a ".git" file that itself
+  is not a directory but is a pointer to one.
+
+* "git clone" from a local repository that borrows from another
+  object store using a relative path in its objects/info/alternates
+  file did not adjust the alternates in the resulting repository.
+
+* "git describe --dirty" did not refresh the index before checking the
+  state of the working tree files.
+
+* "git ls-files ../$path" that is run from a subdirectory reported errors
+  incorrectly when there is no such path that matches the given pathspec.
+
+* "git mergetool" could loop forever prompting when nothing can be read
+  from the standard input.
+
+Also contains minor fixes and documentation updates.
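The check-ref-format fix above can be seen directly; a sketch (the ref name is arbitrary):

```shell
# After the fix, the leading slash is stripped, so the printed result
# is safe to append to "$GIT_DIR/".
git check-ref-format --print /refs/heads/master
```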
(removed file, 123 lines)

Git v1.7.7 Release Notes
========================

Updates since v1.7.6
--------------------

* The scripting part of the codebase is getting prepared for i18n/l10n.

* Interix, Cygwin and Minix ports got updated.

* A handful of patches to update git-p4 (in contrib/).

* Gitweb learned to read from /etc/gitweb-common.conf when it exists,
  before reading from gitweb_config.perl or from /etc/gitweb.conf
  (this last one is read only when per-repository gitweb_config.perl
  does not exist).

* Various codepaths that invoked zlib deflate/inflate assumed that these
  functions can compress or uncompress more than 4GB data in one call on
  platforms with 64-bit long, which has been corrected.

* Git now recognizes loose objects written by other implementations that
  use a non-standard window size for zlib deflation (e.g. Agit running on
  Android with a 4kb window). We used to reject anything that was not
  deflated with a 32kb window.

* "git am" learned to pass the "--exclude=<path>" option through to the
  underlying "git apply".
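The "git am" pass-through above can be sketched end to end: paths matching --exclude are skipped when the patch is applied. Repository layout and file names below are made up.

```shell
# Build a patch that touches two files, then apply it excluding one.
set -e
git init -q am-demo && cd am-demo
git config user.name demo && git config user.email demo@example.com
printf 'one\n' >a.txt && printf 'one\n' >b.txt
git add . && git commit -qm base
printf 'two\n' >a.txt && printf 'two\n' >b.txt
git commit -qam change
git format-patch -1 -o patches >/dev/null
git checkout -q -b redo HEAD~1
git am --exclude=b.txt patches/*.patch
# a.txt now has the patched contents; b.txt was left untouched
```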
* You can now feed many empty lines before feeding an mbox file to
  "git am".

* "git archive" can be told to pass the output to gzip compression and
  produce "archive.tar.gz".

* "git bisect" can be used in a bare repository (provided that the test
  you perform at each iteration does not need a working tree, of
  course).

* "git check-attr" can take relative paths from the command line.

* "git check-attr" learned the "--all" option to list the attributes for
  a given path.

* "git checkout" (both the code that updates the files upon checking out
  a different branch and the code that checks out a specific set of
  files) learned to stream the data from the object store when possible,
  without having to read the entire contents of a file into memory
  first. An earlier round of this code that is not in any released
  version had a large leak, but now it has been plugged.

* "git clone" can now take the "--config key=value" option to set the
  repository configuration options that affect the initial checkout.
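The clone option above can be sketched with a purely local clone; the key=value lands in the new repository's config before checkout. Repository names are made up, and any key works.

```shell
# Clone a throwaway local repo, injecting a config value into the clone.
set -e
git init -q src
git -C src -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m initial
git clone -q --config core.abbrev=12 src dst
git -C dst config core.abbrev
```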
* "git commit <paths>..." now lets you feed relative pathspecs that
  refer outside your current subdirectory.

* "git diff --stat" learned the --stat-count option to limit the output
  of the diffstat report.
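The --stat-count option above is quick to demonstrate; a sketch with made-up file names:

```shell
# With four files changed, --stat-count=2 lists only the first two
# diffstat entries before the summary line.
set -e
git init -q stat-demo && cd stat-demo
git config user.name demo && git config user.email demo@example.com
for f in a b c d; do printf 'old\n' >"$f"; done
git add . && git commit -qm base
for f in a b c d; do printf 'new\n' >"$f"; done
git diff --stat --stat-count=2
```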
* "git diff" learned the "--histogram" option, to use a different diff
  generation machinery stolen from jgit, which might give better
  performance.

* "git fetch", "git push" and friends no longer show connection
  errors for addresses that couldn't be connected to when at least one
  address succeeds (this is arguably a regression but a deliberate
  one).

* "git grep" learned the --break and --heading options, to let users
  mimic the output format of "ack".

* "git grep" learned the "-W" option that shows wider context using the
  same logic used by "git diff" to determine the hunk header.
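The two grep additions above can be sketched together; the file and pattern are made up.

```shell
# --heading prints the filename once above the matches; -W widens the
# context to the enclosing function, as determined by diff hunk logic.
set -e
git init -q grep-demo && cd grep-demo
cat >hello.c <<'EOF'
int main(void)
{
	return 0;
}
EOF
git add hello.c
git grep --heading -n return -- '*.c'
git grep -W return -- '*.c'
```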
* "git rebase master topci" no longer spews usage hints after giving
  the "fatal: no such branch: topci" error message.

* "git stash" learned the --include-untracked option.
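The stash option above is simple to see in action; a sketch with made-up names:

```shell
# An untracked file is carried away by the stash and restored by pop.
set -e
git init -q stash-demo && cd stash-demo
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m initial
printf 'scratch\n' >untracked.txt
git stash --include-untracked >/dev/null
test ! -e untracked.txt          # gone from the working tree
git stash pop >/dev/null
test -e untracked.txt            # and back again
```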
* "git submodule update" used to stop at the first error updating a
  submodule; it now goes on to update other submodules that can be
  updated, and reports the ones with errors at the end.

* "git upload-pack" and "git receive-pack" learned to pretend that only
  a subset of the refs exist in a repository. This may help a site to
  put many tiny repositories into one repository (this would not be
  useful for larger repositories, as repacking would be problematic).

* "git verify-pack" has been rewritten to use the "index-pack" machinery
  that is more efficient in reading objects in packfiles.

* Test scripts for gitweb tried to run even when CGI-related perl
  modules were not installed; they now exit early when these are
  unavailable.

Also contains various documentation updates and minor miscellaneous
changes.


Fixes since v1.7.6
------------------

Unless otherwise noted, all the fixes in the 1.7.6.X maintenance track
are included in this release.

* "git branch --set-upstream @{-1} foo" did not expand @{-1} correctly.
  (merge e9d4f74 mg/branch-set-upstream-previous later to 'maint').

* "git describe --dirty" did not refresh the index before checking the
  state of the working tree files.
  (cherry-pick bb57148 ac/describe-dirty-refresh later to 'maint').

* "git ls-files ../$path" that is run from a subdirectory reported errors
  incorrectly when there is no such path that matches the given pathspec.
  (merge 0f64bfa cb/maint-ls-files-error-report later to 'maint').

--
exec >/var/tmp/1
echo O=$(git describe master)
O=v1.7.6.1-415-g284daf2
git log --first-parent --oneline $O..master
echo
git shortlog --no-merges ^maint ^$O master
@@ -134,8 +134,7 @@ Another thing: NULL pointers shall be written as NULL, not as 0.
 
 (2) Generate your patch using git tools out of your commits.
 
-git based diff tools (git, Cogito, and StGIT included) generate
-unidiff which is the preferred format.
+git based diff tools generate unidiff which is the preferred format.
 
 You do not have to be afraid to use -M option to "git diff" or
 "git format-patch", if your patch involves file renames.  The
@@ -1198,14 +1198,6 @@ http.proxy::
 	environment variable (see linkgit:curl[1]).  This can be overridden
 	on a per-remote basis; see remote.<name>.proxy
 
-http.cookiefile::
-	File containing previously stored cookie lines which should be used
-	in the git http session, if they match the server. The file format
-	of the file to read cookies from should be plain HTTP headers or
-	the Netscape/Mozilla cookie file format (see linkgit:curl[1]).
-	NOTE that the file specified with http.cookiefile is only used as
-	input. No cookies will be stored in the file.
-
 http.sslVerify::
 	Whether to verify the SSL certificate when fetching or pushing
 	over HTTPS. Can be overridden by the 'GIT_SSL_NO_VERIFY' environment
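The http.cookiefile key described above is set like any other config value; a sketch (the path is hypothetical, and git only ever reads the file):

```shell
# Point git at a cookie file in Netscape/Mozilla or plain-header format.
# Cookies are matched against servers but never written back.
git config --global http.cookiefile "$HOME/.gitcookies"
```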
@@ -48,17 +48,11 @@ endif::git-format-patch[]
 --patience::
 	Generate a diff using the "patience diff" algorithm.
 
---stat[=<width>[,<name-width>[,<count>]]]::
+--stat[=<width>[,<name-width>]]::
 	Generate a diffstat.  You can override the default
 	output width for 80-column terminal by `--stat=<width>`.
 	The width of the filename part can be controlled by
 	giving another width to it separated by a comma.
-	By giving a third parameter `<count>`, you can limit the
-	output to the first `<count>` lines, followed by
-	`...` if there are more.
-+
-These parameters can also be set individually with `--stat-width=<width>`,
-`--stat-name-width=<name-width>` and `--stat-count=<count>`.
 
 --numstat::
 	Similar to `\--stat`, but shows number of added and
@@ -13,8 +13,7 @@ SYNOPSIS
 	 [--3way] [--interactive] [--committer-date-is-author-date]
 	 [--ignore-date] [--ignore-space-change | --ignore-whitespace]
 	 [--whitespace=<option>] [-C<n>] [-p<n>] [--directory=<dir>]
-	 [--exclude=<path>] [--reject] [-q | --quiet]
-	 [--scissors | --no-scissors]
+	 [--reject] [-q | --quiet] [--scissors | --no-scissors]
 	 [(<mbox> | <Maildir>)...]
 'git am' (--continue | --skip | --abort)
 
@@ -88,7 +87,6 @@ default.   You can use `--no-utf8` to override this.
 -C<n>::
 -p<n>::
 --directory=<dir>::
---exclude=<path>::
 --reject::
 	These flags are passed to the 'git apply' (see linkgit:git-apply[1])
 	program that applies
@@ -101,25 +101,6 @@ tar.umask::
 	details.  If `--remote` is used then only the configuration of
 	the remote repository takes effect.
 
-tar.<format>.command::
-	This variable specifies a shell command through which the tar
-	output generated by `git archive` should be piped. The command
-	is executed using the shell with the generated tar file on its
-	standard input, and should produce the final output on its
-	standard output. Any compression-level options will be passed
-	to the command (e.g., "-9"). An output file with the same
-	extension as `<format>` will use this format if no other
-	format is given.
-+
-The "tar.gz" and "tgz" formats are defined automatically and default to
-`gzip -cn`.  You may override them with custom commands.
-
-tar.<format>.remote::
-	If true, enable `<format>` for use by remote clients via
-	linkgit:git-upload-archive[1]. Defaults to false for
-	user-defined formats, but true for the "tar.gz" and "tgz"
-	formats.
-
 ATTRIBUTES
 ----------
 
@@ -142,46 +123,32 @@ while archiving any tree in your `$GIT_DIR/info/attributes` file.
 
 EXAMPLES
 --------
-`git archive --format=tar --prefix=junk/ HEAD | (cd /var/tmp/ && tar xf -)`::
+git archive --format=tar --prefix=junk/ HEAD | (cd /var/tmp/ && tar xf -)::
 
 	Create a tar archive that contains the contents of the
 	latest commit on the current branch, and extract it in the
 	`/var/tmp/junk` directory.
 
-`git archive --format=tar --prefix=git-1.4.0/ v1.4.0 | gzip >git-1.4.0.tar.gz`::
+git archive --format=tar --prefix=git-1.4.0/ v1.4.0 | gzip >git-1.4.0.tar.gz::
 
 	Create a compressed tarball for v1.4.0 release.
 
-`git archive --format=tar.gz --prefix=git-1.4.0/ v1.4.0 >git-1.4.0.tar.gz`::
-
-	Same as above, but using the builtin tar.gz handling.
-
-`git archive --prefix=git-1.4.0/ -o git-1.4.0.tar.gz v1.4.0`::
-
-	Same as above, but the format is inferred from the output file.
-
-`git archive --format=tar --prefix=git-1.4.0/ v1.4.0{caret}\{tree\} | gzip >git-1.4.0.tar.gz`::
+git archive --format=tar --prefix=git-1.4.0/ v1.4.0{caret}\{tree\} | gzip >git-1.4.0.tar.gz::
 
 	Create a compressed tarball for v1.4.0 release, but without a
 	global extended pax header.
 
-`git archive --format=zip --prefix=git-docs/ HEAD:Documentation/ > git-1.4.0-docs.zip`::
+git archive --format=zip --prefix=git-docs/ HEAD:Documentation/ > git-1.4.0-docs.zip::
 
 	Put everything in the current head's Documentation/ directory
 	into 'git-1.4.0-docs.zip', with the prefix 'git-docs/'.
 
-`git archive -o latest.zip HEAD`::
+git archive -o latest.zip HEAD::
 
 	Create a Zip archive that contains the contents of the latest
 	commit on the current branch.  Note that the output format is
 	inferred by the extension of the output file.
 
-`git config tar.tar.xz.command "xz -c"`::
-
-	Configure a "tar.xz" format for making LZMA-compressed tarfiles.
-	You can use it specifying `--format=tar.xz`, or by creating an
-	output file like `-o foo.tar.xz`.
-
 
 SEE ALSO
 --------
@@ -17,7 +17,7 @@ The command takes various subcommands, and different options depending
 on the subcommand:
 
 git bisect help
-git bisect start [--no-checkout] [<bad> [<good>...]] [--] [<paths>...]
+git bisect start [<bad> [<good>...]] [--] [<paths>...]
 git bisect bad [<rev>]
 git bisect good [<rev>...]
 git bisect skip [(<rev>|<range>)...]
@@ -263,19 +263,6 @@ rewind the tree to the pristine state.  Finally the script should exit
 with the status of the real test to let the "git bisect run" command loop
 determine the eventual outcome of the bisect session.
 
-OPTIONS
--------
---no-checkout::
-+
-Do not checkout the new working tree at each iteration of the bisection
-process. Instead just update a special reference named 'BISECT_HEAD' to make
-it point to the commit that should be tested.
-+
-This option may be useful when the test you would perform in each step
-does not require a checked out tree.
-+
-If the repository is bare, `--no-checkout` is assumed.
-
 EXAMPLES
 --------
 
@@ -356,25 +343,6 @@ $ git bisect run sh -c "make || exit 125; ~/check_test_case.sh"
 This shows that you can do without a run script if you write the test
 on a single line.
 
-* Locate a good region of the object graph in a damaged repository
-+
-------------
-$ git bisect start HEAD <known-good-commit> [ <boundary-commit> ... ] --no-checkout
-$ git bisect run sh -c '
-	GOOD=$(git for-each-ref "--format=%(objectname)" refs/bisect/good-*) &&
-	git rev-list --objects BISECT_HEAD --not $GOOD >tmp.$$ &&
-	git pack-objects --stdout >/dev/null <tmp.$$
-	rc=$?
-	rm -f tmp.$$
-	test $rc = 0'
-
-------------
-+
-In this case, when 'git bisect run' finishes, bisect/bad will refer to a commit that
-has at least one parent whose reachable graph is fully traversable in the sense
-required by 'git pack objects'.
-
 SEE ALSO
 --------
 link:git-bisect-lk2009.html[Fighting regressions with git bisect],
@@ -9,8 +9,8 @@ git-check-attr - Display gitattributes information
 SYNOPSIS
 --------
 [verse]
-'git check-attr' [-a | --all | attr...] [--] pathname...
-'git check-attr' --stdin [-z] [-a | --all | attr...] < <list-of-paths>
+'git check-attr' attr... [--] pathname...
+'git check-attr' --stdin [-z] attr... < <list-of-paths>
 
 DESCRIPTION
 -----------
@@ -19,11 +19,6 @@ For every pathname, this command will list if each attribute is 'unspecified',
 
 OPTIONS
 -------
--a, --all::
-	List all attributes that are associated with the specified
-	paths.  If this option is used, then 'unspecified' attributes
-	will not be included in the output.
-
 --stdin::
 	Read file names from stdin instead of from the command-line.
 
@@ -33,11 +28,8 @@ OPTIONS
 
 \--::
 	Interpret all preceding arguments as attributes and all following
-	arguments as path names.
-
-If none of `--stdin`, `--all`, or `--` is used, the first argument
-will be treated as an attribute and the rest of the arguments as
-pathnames.
+	arguments as path names. If not supplied, only the first argument will
+	be treated as an attribute.
 
 OUTPUT
 ------
@@ -77,13 +69,6 @@ org/example/MyClass.java: diff: java
 org/example/MyClass.java: myAttr: set
 ---------------
 
-* Listing all attributes for a file:
----------------
-$ git check-attr --all -- org/example/MyClass.java
-org/example/MyClass.java: diff: java
-org/example/MyClass.java: myAttr: set
----------------
-
 * Listing an attribute for multiple files:
 ---------------
 $ git check-attr myAttr -- org/example/MyClass.java org/example/NoMyAttr.java
@@ -112,31 +112,31 @@ effect to your index in a row.
 
 EXAMPLES
 --------
-`git cherry-pick master`::
+git cherry-pick master::
 
 	Apply the change introduced by the commit at the tip of the
 	master branch and create a new commit with this change.
 
-`git cherry-pick ..master`::
-`git cherry-pick ^HEAD master`::
+git cherry-pick ..master::
+git cherry-pick ^HEAD master::
 
 	Apply the changes introduced by all commits that are ancestors
 	of master but not of HEAD to produce new commits.
 
-`git cherry-pick master{tilde}4 master{tilde}2`::
+git cherry-pick master{tilde}4 master{tilde}2::
 
 	Apply the changes introduced by the fifth and third last
 	commits pointed to by master and create 2 new commits with
 	these changes.
 
-`git cherry-pick -n master~1 next`::
+git cherry-pick -n master~1 next::
 
 	Apply to the working tree and the index the changes introduced
 	by the second last commit pointed to by master and by the last
 	commit pointed to by next, but do not create any commit with
 	these changes.
 
-`git cherry-pick --ff ..next`::
+git cherry-pick --ff ..next::
 
 	If history is linear and HEAD is an ancestor of next, update
 	the working tree and advance the HEAD pointer to match next.
@@ -144,7 +144,7 @@ EXAMPLES
 	are in next but not HEAD to the current branch, creating a new
 	commit for each new change.
 
-`git rev-list --reverse master \-- README | git cherry-pick -n --stdin`::
+git rev-list --reverse master \-- README | git cherry-pick -n --stdin::
 
 	Apply the changes introduced by all commits on the master
 	branch that touched README to the working tree and index,
@@ -159,17 +159,6 @@ objects from the source repository into a pack in the cloned repository.
 	Specify the directory from which templates will be used;
 	(See the "TEMPLATE DIRECTORY" section of linkgit:git-init[1].)
 
---config <key>=<value>::
--c <key>=<value>::
-	Set a configuration variable in the newly-created repository;
-	this takes effect immediately after the repository is
-	initialized, but before the remote history is fetched or any
-	files checked out. The key is in the same format as expected by
-	linkgit:git-config[1] (e.g., `core.eol=true`). If multiple
-	values are given for the same key, each value will be written to
-	the config file. This makes it safe, for example, to add
-	additional fetch refspecs to the origin remote.
-
 --depth <depth>::
 	Create a 'shallow' clone with a history truncated to the
 	specified number of revisions. A shallow repository has a
@@ -83,10 +83,6 @@ marks the same across runs.
 	allow that. So fake a tagger to be able to fast-import the
 	output.
 
---use-done-feature::
-	Start the stream with a 'feature done' stanza, and terminate
-	it with a 'done' command.
-
 --no-data::
 	Skip output of blob objects and instead refer to blobs via
 	their original SHA-1 hash. This is useful when rewriting the
@@ -102,12 +102,6 @@ OPTIONS
 	when the `cat-blob` command is encountered in the stream.
 	The default behaviour is to write to `stdout`.
 
---done::
-	Require a `done` command at the end of the stream.
-	This option might be useful for detecting errors that
-	cause the frontend to terminate before it has started to
-	write a stream.
-
 --export-pack-edges=<file>::
 	After creating a packfile, print a line of data to
 	<file> listing the filename of the packfile and the last
@@ -337,11 +331,6 @@ and control the current import process. More detailed discussion
 	standard output. This command is optional and is not needed
 	to perform an import.
 
-`done`::
-	Marks the end of the stream. This command is optional
-	unless the `done` feature was requested using the
-	`--done` command line option or `feature done` command.
-
 `cat-blob`::
 	Causes fast-import to print a blob in 'cat-file --batch'
 	format to the file descriptor set with `--cat-blob-fd` or
@@ -1012,14 +1001,10 @@ force::
 	(see OPTIONS, above).
 
 import-marks::
-import-marks-if-exists::
 	Like --import-marks except in two respects: first, only one
-	"feature import-marks" or "feature import-marks-if-exists"
-	command is allowed per stream; second, an --import-marks=
-	or --import-marks-if-exists command-line option overrides
-	any of these "feature" commands in the stream; third,
-	"feature import-marks-if-exists" like a corresponding
-	command-line option silently skips a nonexistent file.
+	"feature import-marks" command is allowed per stream;
+	second, an --import-marks= command-line option overrides
+	any "feature import-marks" command in the stream.
 
 cat-blob::
 ls::
@@ -1036,11 +1021,6 @@ notes::
 	Versions of fast-import not supporting notes will exit
 	with a message indicating so.
 
-done::
-	Error out if the stream ends without a 'done' command.
-	Without this feature, errors causing the frontend to end
-	abruptly at a convenient point in the stream can go
-	undetected.
-
 `option`
 ~~~~~~~~
@@ -1070,15 +1050,6 @@ not be passed as option:
 * cat-blob-fd
 * force
 
-`done`
-~~~~~~
-If the `done` feature is not in use, treated as if EOF was read.
-This can be used to tell fast-import to finish early.
-
-If the `--done` command line option or `feature done` command is
-in use, the `done` command is mandatory and marks the end of the
-stream.
-
 Crash Reports
 -------------
 If fast-import is supplied invalid input it will terminate with a
@@ -148,12 +148,14 @@ OPTIONS
 	gives the default to color output.
 	Same as `--color=never`.
 
---break::
-	Print an empty line between matches from different files.
+-[ABC] <context>::
+	Show `context` trailing (`A` -- after), or leading (`B`
+	-- before), or both (`C` -- context) lines, and place a
+	line containing `--` between contiguous groups of
+	matches.
 
---heading::
-	Show the filename above the matches in that file instead of
-	at the start of each shown line.
+-<num>::
+	A shortcut for specifying `-C<num>`.
 
 -p::
 --show-function::
@@ -163,29 +165,6 @@ OPTIONS
 	patch hunk headers (see 'Defining a custom hunk-header' in
 	linkgit:gitattributes[5]).
 
--<num>::
--C <num>::
---context <num>::
-	Show <num> leading and trailing lines, and place a line
-	containing `--` between contiguous groups of matches.
-
--A <num>::
---after-context <num>::
-	Show <num> trailing lines, and place a line containing
-	`--` between contiguous groups of matches.
-
--B <num>::
---before-context <num>::
-	Show <num> leading lines, and place a line containing
-	`--` between contiguous groups of matches.
-
--W::
---function-context::
-	Show the surrounding text from the previous line containing a
-	function name up to the one before the next function name,
-	effectively showing the whole function in which the match was
-	found.
-
 -f <file>::
 	Read patterns from <file>, one per line.
 
@@ -229,15 +208,15 @@ OPTIONS
 Examples
 --------
 
-`git grep {apostrophe}time_t{apostrophe} \-- {apostrophe}*.[ch]{apostrophe}`::
+git grep {apostrophe}time_t{apostrophe} \-- {apostrophe}*.[ch]{apostrophe}::
 	Looks for `time_t` in all tracked .c and .h files in the working
 	directory and its subdirectories.
 
-`git grep -e {apostrophe}#define{apostrophe} --and \( -e MAX_PATH -e PATH_MAX \)`::
+git grep -e {apostrophe}#define{apostrophe} --and \( -e MAX_PATH -e PATH_MAX \)::
 	Looks for a line that has `#define` and either `MAX_PATH` or
 	`PATH_MAX`.
 
-`git grep --all-match -e NODE -e Unexpected`::
+git grep --all-match -e NODE -e Unexpected::
 	Looks for a line that has `NODE` or `Unexpected` in
 	files that have lines that match both.
@@ -50,7 +50,7 @@ version::
 
 Examples
 --------
-`git gui blame Makefile`::
+git gui blame Makefile::
 
 	Show the contents of the file 'Makefile' in the current
 	working directory, and provide annotations for both the
@@ -59,41 +59,41 @@ Examples
 	uncommitted changes (if any) are explicitly attributed to
 	'Not Yet Committed'.
 
-`git gui blame v0.99.8 Makefile`::
+git gui blame v0.99.8 Makefile::
 
 	Show the contents of 'Makefile' in revision 'v0.99.8'
 	and provide annotations for each line. Unlike the above
 	example the file is read from the object database and not
 	the working directory.
 
-`git gui blame --line=100 Makefile`::
+git gui blame --line=100 Makefile::
 
 	Loads annotations as described above and automatically
 	scrolls the view to center on line '100'.
 
-`git gui citool`::
+git gui citool::
 
 	Make one commit and return to the shell when it is complete.
 	This command returns a non-zero exit code if the window was
 	closed in any way other than by making a commit.
 
-`git gui citool --amend`::
+git gui citool --amend::
 
 	Automatically enter the 'Amend Last Commit' mode of
 	the interface.
 
-`git gui citool --nocommit`::
+git gui citool --nocommit::
 
 	Behave as normal citool, but instead of making a commit
 	simply terminate with a zero exit code. It still checks
 	that the index does not contain any unmerged entries, so
 	you can use it as a GUI version of linkgit:git-mergetool[1]
 
-`git citool`::
+git citool::
 
 	Same as `git gui citool` (above).
 
-`git gui browser maint`::
+git gui browser maint::
 
 	Show a browser for the tree of the 'maint' branch. Files
 	selected in the browser can be viewed with the internal
@@ -119,14 +119,6 @@ ScriptAliasMatch \
 
 ScriptAlias /git/ /var/www/cgi-bin/gitweb.cgi/
 ----------------------------------------------------------------
-+
-To serve multiple repositories from different linkgit:gitnamespaces[7] in a
-single repository:
-+
-----------------------------------------------------------------
-SetEnvIf Request_URI "^/git/([^/]*)" GIT_NAMESPACE=$1
-ScriptAliasMatch ^/git/[^/]*(.*) /usr/libexec/git-core/git-http-backend/storage.git$1
-----------------------------------------------------------------
 
 Accelerated static Apache 2.x::
 	Similar to the above, but Apache can be used to return static
@@ -51,8 +51,8 @@ OPTIONS
 
 start::
 --start::
-	Start the httpd instance and exit.  Regenerate configuration files
-	as necessary for spawning a new instance.
+	Start the httpd instance and exit.  This does not generate
+	any of the configuration files for spawning a new instance.
 
 stop::
 --stop::
@@ -62,8 +62,8 @@ stop::
 
 restart::
 --restart::
-	Restart the httpd instance and exit.  Regenerate configuration files
-	as necessary for spawning a new instance.
+	Restart the httpd instance and exit.  This does not generate
+	any of the configuration files for spawning a new instance.
 
 CONFIGURATION
 -------------
@@ -69,10 +69,13 @@ produced by --stat etc.
its size is not included.

[\--] <path>...::
Show only commits that affect any of the specified paths. To
prevent confusion with options and branch names, paths may need
to be prefixed with "\-- " to separate them from options or
refnames.
Show only commits that are enough to explain how the files
that match the specified paths came to be. See "History
Simplification" below for details and other simplification
modes.
+
To prevent confusion with options and branch names, paths may need to
be prefixed with "\-- " to separate them from options or refnames.

include::rev-list-options.txt[]
@@ -88,45 +91,45 @@ include::diff-generate-patch.txt[]

Examples
--------
`git log --no-merges`::
git log --no-merges::

Show the whole commit history, but skip any merges

`git log v2.6.12.. include/scsi drivers/scsi`::
git log v2.6.12.. include/scsi drivers/scsi::

Show all commits since version 'v2.6.12' that changed any file
in the include/scsi or drivers/scsi subdirectories

`git log --since="2 weeks ago" \-- gitk`::
git log --since="2 weeks ago" \-- gitk::

Show the changes during the last two weeks to the file 'gitk'.
The "--" is necessary to avoid confusion with the *branch* named
'gitk'

`git log --name-status release..test`::
git log --name-status release..test::

Show the commits that are in the "test" branch but not yet
in the "release" branch, along with the list of paths
each commit modifies.

`git log --follow builtin-rev-list.c`::
git log --follow builtin-rev-list.c::

Shows the commits that changed builtin-rev-list.c, including
those commits that occurred before the file was given its
present name.

`git log --branches --not --remotes=origin`::
git log --branches --not --remotes=origin::

Shows all commits that are in any of local branches but not in
any of remote-tracking branches for 'origin' (what you have that
origin doesn't).

`git log master --not --remotes=*/master`::
git log master --not --remotes=*/master::

Shows all commits that are in local master but not in any remote
repository master branches.

`git log -p -m --first-parent`::
git log -p -m --first-parent::

Shows the history including change diffs, but only from the
"main branch" perspective, skipping commits that come from merged
@@ -76,12 +76,12 @@ OPTIONS
EXAMPLES
--------

`git merge-file README.my README README.upstream`::
git merge-file README.my README README.upstream::

combines the changes of README.my and README.upstream since README,
tries to merge them and writes the result into README.my.

`git merge-file -L a -L b -L c tmp/a123 tmp/b234 tmp/c345`::
git merge-file -L a -L b -L c tmp/a123 tmp/b234 tmp/c345::

merges tmp/a123 and tmp/c345 with the base tmp/b234, but uses labels
`a` and `c` instead of `tmp/a123` and `tmp/c345`.
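The three-way merge above is easy to try outside any repository; `git merge-file` only needs the three files. A minimal sketch (the file names and contents are invented for the demo; it assumes `git` is on PATH):

```shell
# Three-way merge with git merge-file: base, ours, theirs.
set -e
dir=$(mktemp -d)
cd "$dir"
printf 'line one\nline two\nline three\n' > README.base
sed 's/line one/line ONE/'  README.base > README.my        # our edit
sed 's/line three/line 3/'  README.base > README.upstream  # their edit
# Non-overlapping edits merge cleanly; the result lands in README.my.
git merge-file README.my README.base README.upstream
cat README.my
```

Because the two edits touch different lines, the merge succeeds with exit status 0 and no conflict markers.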
@@ -327,12 +327,12 @@ a case where you do mean to lose history.
Examples
--------

`git push`::
git push::
Works like `git push <remote>`, where <remote> is the
current branch's remote (or `origin`, if no remote is
configured for the current branch).

`git push origin`::
git push origin::
Without additional configuration, works like
`git push origin :`.
+
@@ -344,45 +344,45 @@ use `git config remote.origin.push HEAD`. Any valid <refspec> (like
the ones in the examples below) can be configured as the default for
`git push origin`.

`git push origin :`::
git push origin :::
Push "matching" branches to `origin`. See
<refspec> in the <<OPTIONS,OPTIONS>> section above for a
description of "matching" branches.

`git push origin master`::
git push origin master::
Find a ref that matches `master` in the source repository
(most likely, it would find `refs/heads/master`), and update
the same ref (e.g. `refs/heads/master`) in `origin` repository
with it. If `master` did not exist remotely, it would be
created.

`git push origin HEAD`::
git push origin HEAD::
A handy way to push the current branch to the same name on the
remote.

`git push origin master:satellite/master dev:satellite/dev`::
git push origin master:satellite/master dev:satellite/dev::
Use the source ref that matches `master` (e.g. `refs/heads/master`)
to update the ref that matches `satellite/master` (most probably
`refs/remotes/satellite/master`) in the `origin` repository, then
do the same for `dev` and `satellite/dev`.

`git push origin HEAD:master`::
git push origin HEAD:master::
Push the current branch to the remote ref matching `master` in the
`origin` repository. This form is convenient to push the current
branch without thinking about its local name.

`git push origin master:refs/heads/experimental`::
git push origin master:refs/heads/experimental::
Create the branch `experimental` in the `origin` repository
by copying the current `master` branch. This form is only
needed to create a new branch or tag in the remote repository when
the local name and the remote name are different; otherwise,
the ref name on its own will work.

`git push origin :experimental`::
git push origin :experimental::
Find a ref that matches `experimental` in the `origin` repository
(e.g. `refs/heads/experimental`), and delete it.

`git push origin {plus}dev:master`::
git push origin {plus}dev:master::
Update the origin repository's master branch with the dev branch,
allowing non-fast-forward updates. *This can leave unreferenced
commits dangling in the origin repository.* Consider the
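The `<src>:<dst>` and empty-`<src>` refspec forms from the examples above can be exercised against a throwaway local "origin" (all paths and identities below are invented for the demo; assumes `git` is on PATH):

```shell
# Refspec demo: push-to-create, then push-to-delete, on a local bare remote.
set -e
dir=$(mktemp -d)
git init -q --bare "$dir/origin.git"
git init -q "$dir/work"
cd "$dir/work"
git config user.email you@example.com
git config user.name You
echo hello > file && git add file && git commit -qm initial
git remote add origin "$dir/origin.git"
# <src>:<dst> -- create refs/heads/experimental on origin from our HEAD.
git push -q origin HEAD:refs/heads/experimental
# Empty <src> deletes the remote ref, as in "git push origin :experimental".
git push -q origin :experimental
git ls-remote origin
```

Using the fully spelled `refs/heads/...` destination sidesteps any dependence on the local branch's name, which matches the `HEAD:master` advice in the text.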
@@ -9,7 +9,7 @@ git-receive-pack - Receive what is pushed into the repository
SYNOPSIS
--------
[verse]
'git-receive-pack' [--quiet] <directory>
'git-receive-pack' <directory>

DESCRIPTION
-----------
@@ -35,9 +35,6 @@ are not fast-forwards.

OPTIONS
-------
--quiet::
Print only error messages.

<directory>::
The repository to sync into.

@@ -153,7 +150,7 @@ if the repository is packed and is served via a dumb transport.

SEE ALSO
--------
linkgit:git-send-pack[1], linkgit:gitnamespaces[7]
linkgit:git-send-pack[1]

GIT
---
@@ -35,19 +35,19 @@ GIT_TRANSLOOP_DEBUG::

EXAMPLES
--------
`git fetch fd::17 master`::
git fetch fd::17 master::
Fetch master, using file descriptor #17 to communicate with
git-upload-pack.

`git fetch fd::17/foo master`::
git fetch fd::17/foo master::
Same as above.

`git push fd::7,8 master (as URL)`::
git push fd::7,8 master (as URL)::
Push master, using file descriptor #7 to read data from
git-receive-pack and file descriptor #8 to write data to
same service.

`git push fd::7,8/bar master`::
git push fd::7,8/bar master::
Same as above.

Documentation
@@ -48,9 +48,6 @@ arguments. The first argument specifies a remote repository as in git;
it is either the name of a configured remote or a URL. The second
argument specifies a URL; it is usually of the form
'<transport>://<address>', but any arbitrary string is possible.
The 'GIT_DIR' environment variable is set up for the remote helper
and can be used to determine where to store additional data or from
which directory to invoke auxiliary git commands.

When git encounters a URL of the form '<transport>://<address>', where
'<transport>' is a protocol that it cannot handle natively, it
@@ -93,12 +93,12 @@ effect to your index in a row.

EXAMPLES
--------
`git revert HEAD~3`::
git revert HEAD~3::

Revert the changes specified by the fourth last commit in HEAD
and create a new commit with the reverted changes.

`git revert -n master{tilde}5..master{tilde}2`::
git revert -n master{tilde}5..master{tilde}2::

Revert the changes done by commits from the fifth last commit
in master (included) to the third last commit in master
@@ -137,7 +137,7 @@ git diff --name-only --diff-filter=D -z | xargs -0 git rm --cached

EXAMPLES
--------
`git rm Documentation/\*.txt`::
git rm Documentation/\*.txt::
Removes all `*.txt` files from the index that are under the
`Documentation` directory and any of its subdirectories.
+
@@ -145,7 +145,7 @@ Note that the asterisk `*` is quoted from the shell in this
example; this lets git, and not the shell, expand the pathnames
of files and subdirectories under the `Documentation/` directory.

`git rm -f git-*.sh`::
git rm -f git-*.sh::
Because this example lets the shell expand the asterisk
(i.e. you are listing the files explicitly), it
does not remove `subdir/git-foo.sh`.
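The quoted-versus-unquoted asterisk distinction in the two examples above can be seen side by side in a scratch repository (file names and identities below are invented for the demo; assumes `git` is on PATH):

```shell
# Who expands the asterisk decides which files git rm sees.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email you@example.com
git config user.name You
mkdir -p Documentation/tech subdir
touch Documentation/a.txt Documentation/tech/b.txt subdir/git-foo.sh git-bar.sh
git add . && git commit -qm initial
# Quoted: git expands the pattern, so it also matches in subdirectories.
git rm -q 'Documentation/*.txt'
# Unquoted: the shell expands git-*.sh in the current directory only,
# so subdir/git-foo.sh survives.
git rm -q git-*.sh
git ls-files
```

After both commands, the only tracked file left is `subdir/git-foo.sh`.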
@@ -9,7 +9,7 @@ git-send-pack - Push objects over git protocol to another repository
SYNOPSIS
--------
[verse]
'git send-pack' [--all] [--dry-run] [--force] [--receive-pack=<git-receive-pack>] [--quiet] [--verbose] [--thin] [<host>:]<directory> [<ref>...]
'git send-pack' [--all] [--dry-run] [--force] [--receive-pack=<git-receive-pack>] [--verbose] [--thin] [<host>:]<directory> [<ref>...]

DESCRIPTION
-----------
@@ -45,9 +45,6 @@ OPTIONS
the remote repository can lose commits; use it with
care.

--quiet::
Print only error messages.

--verbose::
Run verbosely.
@@ -48,23 +48,23 @@ include::pretty-formats.txt[]
EXAMPLES
--------

`git show v1.0.0`::
git show v1.0.0::
Shows the tag `v1.0.0`, along with the object the tags
points at.

`git show v1.0.0^\{tree\}`::
git show v1.0.0^\{tree\}::
Shows the tree pointed to by the tag `v1.0.0`.

`git show -s --format=%s v1.0.0^\{commit\}`::
git show -s --format=%s v1.0.0^\{commit\}::
Shows the subject of the commit pointed to by the
tag `v1.0.0`.

`git show next~10:Documentation/README`::
git show next~10:Documentation/README::
Shows the contents of the file `Documentation/README` as
they were current in the 10th last commit of the branch
`next`.

`git show master:Makefile master:t/Makefile`::
git show master:Makefile master:t/Makefile::
Concatenates the contents of said Makefiles in the head
of the branch `master`.
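Two of the object-naming forms above, `^{commit}` peeling and `rev:path`, can be reproduced in a scratch repository (the tag name, file name, and identities are invented for the demo; assumes `git` is on PATH):

```shell
# git show with peeled tags and rev:path object names.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email you@example.com
git config user.name You
echo one > Makefile && git add Makefile && git commit -qm 'first subject'
git tag -a v1.0.0 -m 'release tag'
git show -s --format=%s 'v1.0.0^{commit}'   # subject of the tagged commit
git show v1.0.0:Makefile                    # file content at the tag
```

Note the quoting of `^{commit}`: the braces must be protected from the shell, just as the AsciiDoc source escapes them as `^\{commit\}`.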
@@ -13,8 +13,7 @@ SYNOPSIS
'git stash' drop [-q|--quiet] [<stash>]
'git stash' ( pop | apply ) [--index] [-q|--quiet] [<stash>]
'git stash' branch <branchname> [<stash>]
'git stash' [save [--patch] [-k|--[no-]keep-index] [-q|--quiet]
[-u|--include-untracked] [-a|--all] [<message>]]
'git stash' [save [-p|--patch] [-k|--[no-]keep-index] [-q|--quiet] [<message>]]
'git stash' clear
'git stash' create

@@ -43,7 +42,7 @@ is also possible).
OPTIONS
-------

save [-p|--patch] [--[no-]keep-index] [-u|--include-untracked] [-a|--all] [-q|--quiet] [<message>]::
save [-p|--patch] [--[no-]keep-index] [-q|--quiet] [<message>]::

Save your local modifications to a new 'stash', and run `git reset
--hard` to revert them. The <message> part is optional and gives
@@ -55,11 +54,6 @@ save [-p|--patch] [--[no-]keep-index] [-u|--include-untracked] [-a|--all] [-q|--
If the `--keep-index` option is used, all changes already added to the
index are left intact.
+
If the `--include-untracked` option is used, all untracked files are also
stashed and then cleaned up with `git clean`, leaving the working directory
in a very clean state. If the `--all` option is used instead then the
ignored files are stashed and cleaned in addition to the untracked files.
+
With `--patch`, you can interactively select hunks from the diff
between HEAD and the working tree to be stashed. The stash entry is
constructed such that its index state is the same as the index state
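The save-then-restore cycle described for `git stash save` can be walked through in a throwaway repository (file contents, message, and identities invented for the demo; assumes `git` is on PATH):

```shell
# git stash save reverts the working tree to HEAD; pop brings the edit back.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email you@example.com
git config user.name You
echo v1 > file && git add file && git commit -qm initial
echo v2 > file                 # local modification
git stash save -q 'wip: demo'  # working tree is back at HEAD (v1)
cat file
git stash pop -q               # modification restored (v2)
cat file
```

After `save` the file reads `v1`; after `pop` it reads `v2` again.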
@@ -15,8 +15,7 @@ SYNOPSIS
'git submodule' [--quiet] init [--] [<path>...]
'git submodule' [--quiet] update [--init] [-N|--no-fetch] [--rebase]
[--reference <repository>] [--merge] [--recursive] [--] [<path>...]
'git submodule' [--quiet] summary [--cached|--files] [(-n|--summary-limit) <n>]
[commit] [--] [<path>...]
'git submodule' [--quiet] summary [--cached|--files] [--summary-limit <n>] [commit] [--] [<path>...]
'git submodule' [--quiet] foreach [--recursive] <command>
'git submodule' [--quiet] sync [--] [<path>...]

@@ -109,13 +108,8 @@ status::
repository and `U` if the submodule has merge conflicts.
This command is the default command for 'git submodule'.
+
If `--recursive` is specified, this command will recurse into nested
If '--recursive' is specified, this command will recurse into nested
submodules, and show their status as well.
+
If you are only interested in changes of the currently initialized
submodules with respect to the commit recorded in the index or the HEAD,
linkgit:git-status[1] and linkgit:git-diff[1] will provide that information
too (and can also report changes to a submodule's work tree).

init::
Initialize the submodules, i.e. register each submodule name
@@ -131,29 +125,26 @@ init::
update::
Update the registered submodules, i.e. clone missing submodules and
checkout the commit specified in the index of the containing repository.
This will make the submodules HEAD be detached unless `--rebase` or
`--merge` is specified or the key `submodule.$name.update` is set to
This will make the submodules HEAD be detached unless '--rebase' or
'--merge' is specified or the key `submodule.$name.update` is set to
`rebase` or `merge`.
+
If the submodule is not yet initialized, and you just want to use the
setting as stored in .gitmodules, you can automatically initialize the
submodule with the `--init` option.
submodule with the --init option.
+
If `--recursive` is specified, this command will recurse into the
If '--recursive' is specified, this command will recurse into the
registered submodules, and update any nested submodules within.

summary::
Show commit summary between the given commit (defaults to HEAD) and
working tree/index. For a submodule in question, a series of commits
in the submodule between the given super project commit and the
index or working tree (switched by `--cached`) are shown. If the option
`--files` is given, show the series of commits in the submodule between
index or working tree (switched by --cached) are shown. If the option
--files is given, show the series of commits in the submodule between
the index of the super project and the working tree of the submodule
(this option doesn't allow to use the `--cached` option or to provide an
(this option doesn't allow to use the --cached option or to provide an
explicit commit).
+
Using the `--submodule=log` option with linkgit:git-diff[1] will provide that
information too.

foreach::
Evaluates an arbitrary shell command in each checked out submodule.
@@ -164,9 +155,9 @@ foreach::
superproject, $sha1 is the commit as recorded in the superproject,
and $toplevel is the absolute path to the top-level of the superproject.
Any submodules defined in the superproject but not checked out are
ignored by this command. Unless given `--quiet`, foreach prints the name
ignored by this command. Unless given --quiet, foreach prints the name
of each submodule before evaluating the command.
If `--recursive` is given, submodules are traversed recursively (i.e.
If --recursive is given, submodules are traversed recursively (i.e.
the given shell command is evaluated in nested submodules as well).
A non-zero return from the command in any submodule causes
the processing to terminate. This can be overridden by adding '|| :'
@@ -246,18 +237,13 @@ OPTIONS
If the key `submodule.$name.update` is set to `rebase`, this option is
implicit.

--init::
This option is only valid for the update command.
Initialize all submodules for which "git submodule init" has not been
called so far before updating.

--reference <repository>::
This option is only valid for add and update commands. These
commands sometimes need to clone a remote repository. In this case,
this option will be passed to the linkgit:git-clone[1] command.
+
*NOTE*: Do *not* use this option unless you have read the note
for linkgit:git-clone[1]'s `--reference` and `--shared` options carefully.
for linkgit:git-clone[1]'s --reference and --shared options carefully.

--recursive::
This option is only valid for foreach, update and status commands.
@@ -53,26 +53,26 @@ tar.umask::

EXAMPLES
--------
`git tar-tree HEAD junk | (cd /var/tmp/ && tar xf -)`::
git tar-tree HEAD junk | (cd /var/tmp/ && tar xf -)::

Create a tar archive that contains the contents of the
latest commit on the current branch, and extracts it in
`/var/tmp/junk` directory.

`git tar-tree v1.4.0 git-1.4.0 | gzip >git-1.4.0.tar.gz`::
git tar-tree v1.4.0 git-1.4.0 | gzip >git-1.4.0.tar.gz::

Create a tarball for v1.4.0 release.

`git tar-tree v1.4.0{caret}\{tree\} git-1.4.0 | gzip >git-1.4.0.tar.gz`::
git tar-tree v1.4.0{caret}\{tree\} git-1.4.0 | gzip >git-1.4.0.tar.gz::

Create a tarball for v1.4.0 release, but without a
global extended pax header.

`git tar-tree --remote=example.com:git.git v1.4.0 >git-1.4.0.tar`::
git tar-tree --remote=example.com:git.git v1.4.0 >git-1.4.0.tar::

Get a tarball v1.4.0 from example.com.

`git tar-tree HEAD:Documentation/ git-docs > git-1.4.0-docs.tar`::
git tar-tree HEAD:Documentation/ git-docs > git-1.4.0-docs.tar::

Put everything in the current head's Documentation/ directory
into 'git-1.4.0-docs.tar', with the prefix 'git-docs/'.
@@ -34,10 +34,6 @@ OPTIONS
<directory>::
The repository to sync from.

SEE ALSO
--------
linkgit:gitnamespaces[7]

GIT
---
Part of the linkgit:git[1] suite
@@ -53,12 +53,12 @@ include::pretty-formats.txt[]

Examples
--------
`git whatchanged -p v2.6.12.. include/scsi drivers/scsi`::
git whatchanged -p v2.6.12.. include/scsi drivers/scsi::

Show as patches the commits since version 'v2.6.12' that changed
any file in the include/scsi or drivers/scsi subdirectories

`git whatchanged --since="2 weeks ago" \-- gitk`::
git whatchanged --since="2 weeks ago" \-- gitk::

Show the changes during the last two weeks to the file 'gitk'.
The "--" is necessary to avoid confusion with the *branch* named
@@ -10,8 +10,8 @@ SYNOPSIS
--------
[verse]
'git' [--version] [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
[-p|--paginate|--no-pager] [--no-replace-objects]
[--bare] [--git-dir=<path>] [--work-tree=<path>]
[-c <name>=<value>]
[--help] <command> [<args>]
||||
@ -44,10 +44,13 @@ unreleased) version of git, that is available from 'master'
|
||||
branch of the `git.git` repository.
|
||||
Documentation for older releases are available here:
|
||||
|
||||
* link:v1.7.6.1/git.html[documentation for release 1.7.6.1]
|
||||
* link:v1.7.6.4/git.html[documentation for release 1.7.6.4]
|
||||
|
||||
* release notes for
|
||||
link:RelNotes/1.7.6.1.txt[1.7.6.1].
|
||||
link:RelNotes/1.7.6.4.txt[1.7.6.4],
|
||||
link:RelNotes/1.7.6.3.txt[1.7.6.3],
|
||||
link:RelNotes/1.7.6.2.txt[1.7.6.2],
|
||||
link:RelNotes/1.7.6.1.txt[1.7.6.1],
|
||||
link:RelNotes/1.7.6.txt[1.7.6].
|
||||
|
||||
* link:v1.7.5.4/git.html[documentation for release 1.7.5.4]
|
||||
@ -331,11 +334,6 @@ help ...`.
|
||||
variable (see core.worktree in linkgit:git-config[1] for a
|
||||
more detailed discussion).
|
||||
|
||||
--namespace=<path>::
|
||||
Set the git namespace. See linkgit:gitnamespaces[7] for more
|
||||
details. Equivalent to setting the `GIT_NAMESPACE` environment
|
||||
variable.
|
||||
|
||||
--bare::
|
||||
Treat the repository as a bare repository. If GIT_DIR
|
||||
environment is not set, it is set to the current working
|
||||
@ -599,10 +597,6 @@ git so take care if using Cogito etc.
|
||||
This can also be controlled by the '--work-tree' command line
|
||||
option and the core.worktree configuration variable.
|
||||
|
||||
'GIT_NAMESPACE'::
|
||||
Set the git namespace; see linkgit:gitnamespaces[7] for details.
|
||||
The '--namespace' command-line option also sets this value.
|
||||
|
||||
'GIT_CEILING_DIRECTORIES'::
|
||||
This should be a colon-separated list of absolute paths.
|
||||
If set, it is a list of directories that git should not chdir
|
||||
|
@@ -955,9 +955,6 @@ frotz unspecified
----------------------------------------------------------------


SEE ALSO
--------
linkgit:git-check-attr[1].

GIT
---
@@ -1,75 +0,0 @@
gitnamespaces(7)
================

NAME
----
gitnamespaces - Git namespaces

DESCRIPTION
-----------

Git supports dividing the refs of a single repository into multiple
namespaces, each of which has its own branches, tags, and HEAD. Git can
expose each namespace as an independent repository to pull from and push
to, while sharing the object store, and exposing all the refs to
operations such as linkgit:git-gc[1].

Storing multiple repositories as namespaces of a single repository
avoids storing duplicate copies of the same objects, such as when
storing multiple branches of the same source. The alternates mechanism
provides similar support for avoiding duplicates, but alternates do not
prevent duplication between new objects added to the repositories
without ongoing maintenance, while namespaces do.

To specify a namespace, set the `GIT_NAMESPACE` environment variable to
the namespace. For each ref namespace, git stores the corresponding
refs in a directory under `refs/namespaces/`. For example,
`GIT_NAMESPACE=foo` will store refs under `refs/namespaces/foo/`. You
can also specify namespaces via the `--namespace` option to
linkgit:git[1].

Note that namespaces which include a `/` will expand to a hierarchy of
namespaces; for example, `GIT_NAMESPACE=foo/bar` will store refs under
`refs/namespaces/foo/refs/namespaces/bar/`. This makes paths in
`GIT_NAMESPACE` behave hierarchically, so that cloning with
`GIT_NAMESPACE=foo/bar` produces the same result as cloning with
`GIT_NAMESPACE=foo` and cloning from that repo with `GIT_NAMESPACE=bar`. It
also avoids ambiguity with strange namespace paths such as `foo/refs/heads/`,
which could otherwise generate directory/file conflicts within the `refs`
directory.

linkgit:git-upload-pack[1] and linkgit:git-receive-pack[1] rewrite the
names of refs as specified by `GIT_NAMESPACE`. git-upload-pack and
git-receive-pack will ignore all references outside the specified
namespace.

The smart HTTP server, linkgit:git-http-backend[1], will pass
GIT_NAMESPACE through to the backend programs; see
linkgit:git-http-backend[1] for sample configuration to expose
repository namespaces as repositories.

For a simple local test, you can use linkgit:git-remote-ext[1]:

----------
git clone ext::'git --namespace=foo %s /tmp/prefixed.git'
----------

SECURITY
--------

Anyone with access to any namespace within a repository can potentially
access objects from any other namespace stored in the same repository.
You can't directly say "give me object ABCD" if you don't have a ref to
it, but you can do some other sneaky things like:

. Claiming to push ABCD, at which point the server will optimize out the
need for you to actually send it. Now you have a ref to ABCD and can
fetch it (claiming not to have it, of course).

. Requesting other refs, claiming that you have ABCD, at which point the
server may generate deltas against ABCD.

None of this causes a problem if you only host public repositories, or
if everyone who may read one namespace may also read everything in every
other namespace (for instance, if everyone in an organization has read
permission to every repository).
@@ -11,15 +11,27 @@ Data Structure
`struct git_attr`::

An attribute is an opaque object that is identified by its name.
Pass the name to `git_attr()` function to obtain the object of
this type. The internal representation of this structure is
of no interest to the calling programs. The name of the
attribute can be retrieved by calling `git_attr_name()`.
Pass the name and its length to `git_attr()` function to obtain
the object of this type. The internal representation of this
structure is of no interest to the calling programs.

`struct git_attr_check`::

This structure represents a set of attributes to check in a call
to `git_check_attr()` function, and receives the results.
to `git_checkattr()` function, and receives the results.


Calling Sequence
----------------

* Prepare an array of `struct git_attr_check` to define the list of
attributes you would want to check. To populate this array, you would
need to define necessary attributes by calling `git_attr()` function.

* Call git_checkattr() to check the attributes for the path.

* Inspect `git_attr_check` structure to see how each of the attribute in
the array is defined for the path.


Attribute Values
@@ -45,19 +57,6 @@ If none of the above returns true, `.value` member points at a string
value of the attribute for the path.


Querying Specific Attributes
----------------------------

* Prepare an array of `struct git_attr_check` to define the list of
attributes you would want to check. To populate this array, you would
need to define necessary attributes by calling `git_attr()` function.

* Call `git_check_attr()` to check the attributes for the path.

* Inspect `git_attr_check` structure to see how each of the attribute in
the array is defined for the path.


Example
-------
@@ -73,18 +72,18 @@ static void setup_check(void)
{
	if (check[0].attr)
		return; /* already done */
	check[0].attr = git_attr("crlf");
	check[1].attr = git_attr("ident");
	check[0].attr = git_attr("crlf", 4);
	check[1].attr = git_attr("ident", 5);
}
------------

. Call `git_check_attr()` with the prepared array of `struct git_attr_check`:
. Call `git_checkattr()` with the prepared array of `struct git_attr_check`:

------------
const char *path;

setup_check();
git_check_attr(path, ARRAY_SIZE(check), check);
git_checkattr(path, ARRAY_SIZE(check), check);
------------

. Act on `.value` member of the result, left in `check[]`:
@@ -109,20 +108,4 @@ static void setup_check(void)
}
------------


Querying All Attributes
-----------------------

To get the values of all attributes associated with a file:

* Call `git_all_attrs()`, which returns an array of `git_attr_check`
structures.

* Iterate over the `git_attr_check` array to examine the attribute
names and values. The name of the attribute described by a
`git_attr_check` object can be retrieved via
`git_attr_name(check[i].attr)`. (Please note that no items will be
returned for unset attributes, so `ATTR_UNSET()` will return false
for all returned `git_array_check` objects.)

* Free the `git_array_check` array.
(JC)
@@ -1,7 +1,7 @@
#!/bin/sh

GVF=GIT-VERSION-FILE
DEF_VER=v1.7.7-rc0
DEF_VER=v1.7.6.4

LF='
'
13
INSTALL
@@ -25,19 +25,6 @@ set up install paths (via config.mak.autogen), so you can write instead
	$ make all doc ;# as yourself
	# make install install-doc install-html;# as root

If you're willing to trade off (much) longer build time for a later
faster git you can also do a profile feedback build with

	$ make profile-all
	# make prefix=... install

This will run the complete test suite as training workload and then
rebuild git with the generated profile feedback. This results in a git
which is a few percent faster on CPU intensive workloads. This
may be a good tradeoff for distribution packagers.

Note that the profile feedback build stage currently generates
a lot of additional compiler warnings.

Issues of note:
82
Makefile
@@ -30,15 +30,15 @@ all::
# Define LIBPCREDIR=/foo/bar if your libpcre header and library files are in
# /foo/bar/include and /foo/bar/lib directories.
#
# Define NO_CURL if you do not have libcurl installed. git-http-fetch and
# Define NO_CURL if you do not have libcurl installed. git-http-pull and
# git-http-push are not built, and you cannot use http:// and https://
# transports (neither smart nor dumb).
# transports.
#
# Define CURLDIR=/foo/bar if your curl header and library files are in
# /foo/bar/include and /foo/bar/lib directories.
#
# Define NO_EXPAT if you do not have expat installed. git-http-push is
# not built, and you cannot push using http:// and https:// transports (dumb).
# not built, and you cannot push using http:// and https:// transports.
#
# Define EXPATDIR=/foo/bar if your expat header and library files are in
# /foo/bar/include and /foo/bar/lib directories.
@ -115,10 +115,6 @@ all::
|
||||
#
|
||||
# Define NEEDS_SSL_WITH_CRYPTO if you need -lssl when using -lcrypto (Darwin).
|
||||
#
|
||||
# Define NEEDS_SSL_WITH_CURL if you need -lssl with -lcurl (Minix).
|
||||
#
|
||||
# Define NEEDS_IDN_WITH_CURL if you need -lidn when using -lcurl (Minix).
|
||||
#
|
||||
# Define NEEDS_LIBICONV if linking with libc is not enough (Darwin).
|
||||
#
|
||||
# Define NEEDS_SOCKET if linking with libc is not enough (SunOS,
|
||||
@ -157,9 +153,6 @@ all::
|
||||
# that tells runtime paths to dynamic libraries;
|
||||
# "-Wl,-rpath=/path/lib" is used instead.
|
||||
#
|
||||
# Define NO_NORETURN if using buggy versions of gcc 4.6+ and profile feedback,
|
||||
# as the compiler can crash (http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49299)
|
||||
#
|
||||
# Define USE_NSEC below if you want git to care about sub-second file mtimes
|
||||
# and ctimes. Note that you need recent glibc (at least 2.2.4) for this, and
|
||||
# it will BREAK YOUR LOCAL DIFFS! show-diff and anything using it will likely
|
||||
@ -302,7 +295,6 @@ bindir = $(prefix)/$(bindir_relative)
|
||||
mandir = share/man
|
||||
infodir = share/info
|
||||
gitexecdir = libexec/git-core
|
||||
mergetoolsdir = $(gitexecdir)/mergetools
|
||||
sharedir = $(prefix)/share
|
||||
gitwebdir = $(sharedir)/gitweb
|
||||
template_dir = share/git-core/templates
|
||||
@ -564,7 +556,6 @@ LIB_H += sha1-lookup.h
|
||||
LIB_H += sideband.h
|
||||
LIB_H += sigchain.h
|
||||
LIB_H += strbuf.h
|
||||
LIB_H += streaming.h
|
||||
LIB_H += string-list.h
|
||||
LIB_H += submodule.h
|
||||
LIB_H += tag.h
|
||||
@ -643,7 +634,6 @@ LIB_OBJS += pack-revindex.o
|
||||
LIB_OBJS += pack-write.o
|
||||
LIB_OBJS += pager.o
|
||||
LIB_OBJS += parse-options.o
|
||||
LIB_OBJS += parse-options-cb.o
|
||||
LIB_OBJS += patch-delta.o
|
||||
LIB_OBJS += patch-ids.o
|
||||
LIB_OBJS += path.o
|
||||
@ -672,7 +662,6 @@ LIB_OBJS += shallow.o
|
||||
LIB_OBJS += sideband.o
|
||||
LIB_OBJS += sigchain.o
|
||||
LIB_OBJS += strbuf.o
|
||||
LIB_OBJS += streaming.o
|
||||
LIB_OBJS += string-list.o
|
||||
LIB_OBJS += submodule.o
|
||||
LIB_OBJS += symlinks.o
|
||||
@ -1135,6 +1124,8 @@ endif
|
||||
X = .exe
|
||||
endif
|
||||
ifeq ($(uname_S),Interix)
|
||||
NO_SYS_POLL_H = YesPlease
|
||||
NO_INTTYPES_H = YesPlease
|
||||
NO_INITGROUPS = YesPlease
|
||||
NO_IPV6 = YesPlease
|
||||
NO_MEMMEM = YesPlease
|
||||
@ -1145,30 +1136,12 @@ ifeq ($(uname_S),Interix)
|
||||
ifeq ($(uname_R),3.5)
|
||||
NO_INET_NTOP = YesPlease
|
||||
NO_INET_PTON = YesPlease
|
||||
NO_SOCKADDR_STORAGE = YesPlease
|
||||
NO_FNMATCH_CASEFOLD = YesPlease
|
||||
endif
|
||||
ifeq ($(uname_R),5.2)
|
||||
NO_INET_NTOP = YesPlease
|
||||
NO_INET_PTON = YesPlease
|
||||
NO_SOCKADDR_STORAGE = YesPlease
|
||||
NO_FNMATCH_CASEFOLD = YesPlease
|
||||
endif
|
||||
endif
|
||||
ifeq ($(uname_S),Minix)
|
||||
NO_IPV6 = YesPlease
|
||||
NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease
|
||||
NO_NSEC = YesPlease
|
||||
NEEDS_LIBGEN =
|
||||
NEEDS_CRYPTO_WITH_SSL = YesPlease
|
||||
NEEDS_IDN_WITH_CURL = YesPlease
|
||||
NEEDS_SSL_WITH_CURL = YesPlease
|
||||
NEEDS_RESOLV =
|
||||
NO_HSTRERROR = YesPlease
|
||||
NO_MMAP = YesPlease
|
||||
NO_CURL =
|
||||
NO_EXPAT =
|
||||
endif
|
||||
ifneq (,$(findstring MINGW,$(uname_S)))
|
||||
pathsep = ;
|
||||
NO_PREAD = YesPlease
|
||||
@ -1313,16 +1286,6 @@ else
|
||||
else
|
||||
CURL_LIBCURL = -lcurl
|
||||
endif
|
||||
ifdef NEEDS_SSL_WITH_CURL
|
||||
CURL_LIBCURL += -lssl
|
||||
ifdef NEEDS_CRYPTO_WITH_SSL
|
||||
CURL_LIBCURL += -lcrypto
|
||||
endif
|
||||
endif
|
||||
ifdef NEEDS_IDN_WITH_CURL
|
||||
CURL_LIBCURL += -lidn
|
||||
endif
|
||||
|
||||
REMOTE_CURL_PRIMARY = git-remote-http$X
|
||||
REMOTE_CURL_ALIASES = git-remote-https$X git-remote-ftp$X git-remote-ftps$X
|
||||
REMOTE_CURL_NAMES = $(REMOTE_CURL_PRIMARY) $(REMOTE_CURL_ALIASES)
|
||||
@ -1359,7 +1322,7 @@ ifndef NO_OPENSSL
|
||||
OPENSSL_LINK =
|
||||
endif
|
||||
ifdef NEEDS_CRYPTO_WITH_SSL
|
||||
OPENSSL_LIBSSL += -lcrypto
|
||||
OPENSSL_LINK += -lcrypto
|
||||
endif
|
||||
else
|
||||
BASIC_CFLAGS += -DNO_OPENSSL
|
||||
@ -1411,9 +1374,6 @@ endif
|
||||
ifdef USE_ST_TIMESPEC
|
||||
BASIC_CFLAGS += -DUSE_ST_TIMESPEC
|
||||
endif
|
||||
ifdef NO_NORETURN
|
||||
BASIC_CFLAGS += -DNO_NORETURN
|
||||
endif
|
||||
ifdef NO_NSEC
|
||||
BASIC_CFLAGS += -DNO_NSEC
|
||||
endif
|
||||
@ -1878,7 +1838,7 @@ ifndef NO_CURL
|
||||
GIT_OBJS += http.o http-walker.o remote-curl.o
|
||||
endif
|
||||
XDIFF_OBJS = xdiff/xdiffi.o xdiff/xprepare.o xdiff/xutils.o xdiff/xemit.o \
|
||||
xdiff/xmerge.o xdiff/xpatience.o xdiff/xhistogram.o
|
||||
xdiff/xmerge.o xdiff/xpatience.o
|
||||
VCSSVN_OBJS = vcs-svn/string_pool.o vcs-svn/line_buffer.o \
|
||||
vcs-svn/repo_tree.o vcs-svn/fast_export.o vcs-svn/svndump.o
|
||||
VCSSVN_TEST_OBJS = test-obj-pool.o test-string-pool.o \
|
||||
@ -2206,7 +2166,7 @@ test-delta$X: diff-delta.o patch-delta.o
|
||||
|
||||
test-line-buffer$X: vcs-svn/lib.a
|
||||
|
||||
test-parse-options$X: parse-options.o parse-options-cb.o
|
||||
test-parse-options$X: parse-options.o
|
||||
|
||||
test-string-pool$X: vcs-svn/lib.a
|
||||
|
||||
@ -2259,13 +2219,6 @@ endif
|
||||
gitexec_instdir_SQ = $(subst ','\'',$(gitexec_instdir))
|
||||
export gitexec_instdir
|
||||
|
||||
ifneq ($(filter /%,$(firstword $(mergetoolsdir))),)
|
||||
mergetools_instdir = $(mergetoolsdir)
|
||||
else
|
||||
mergetools_instdir = $(prefix)/$(mergetoolsdir)
|
||||
endif
|
||||
mergetools_instdir_SQ = $(subst ','\'',$(mergetools_instdir))
|
||||
|
||||
install_bindir_programs := $(patsubst %,%$X,$(BINDIR_PROGRAMS_NEED_X)) $(BINDIR_PROGRAMS_NO_X)
|
||||
|
||||
install: all
|
||||
@ -2275,9 +2228,6 @@ install: all
|
||||
$(INSTALL) -m 644 $(SCRIPT_LIB) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)'
|
||||
$(INSTALL) $(install_bindir_programs) '$(DESTDIR_SQ)$(bindir_SQ)'
|
||||
$(MAKE) -C templates DESTDIR='$(DESTDIR_SQ)' install
|
||||
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(mergetools_instdir_SQ)'
|
||||
(cd mergetools && $(TAR) cf - .) | \
|
||||
(cd '$(DESTDIR_SQ)$(mergetools_instdir_SQ)' && umask 022 && $(TAR) xof -)
|
||||
ifndef NO_PERL
|
||||
$(MAKE) -C perl prefix='$(prefix_SQ)' DESTDIR='$(DESTDIR_SQ)' install
|
||||
$(MAKE) -C gitweb install
|
||||
@ -2545,19 +2495,3 @@ cover_db: coverage-report
|
||||
|
||||
cover_db_html: cover_db
|
||||
cover -report html -outputdir cover_db_html cover_db
|
||||
|
||||
### profile feedback build
|
||||
#
|
||||
.PHONY: profile-all profile-clean
|
||||
|
||||
PROFILE_GEN_CFLAGS := $(CFLAGS) -fprofile-generate -DNO_NORETURN=1
|
||||
PROFILE_USE_CFLAGS := $(CFLAGS) -fprofile-use -fprofile-correction -DNO_NORETURN=1
|
||||
|
||||
profile-clean:
|
||||
$(RM) $(addsuffix *.gcda,$(object_dirs))
|
||||
$(RM) $(addsuffix *.gcno,$(object_dirs))
|
||||
|
||||
profile-all: profile-clean
|
||||
$(MAKE) CFLAGS="$(PROFILE_GEN_CFLAGS)" all
|
||||
$(MAKE) CFLAGS="$(PROFILE_GEN_CFLAGS)" -j1 test
|
||||
$(MAKE) CFLAGS="$(PROFILE_USE_CFLAGS)" all
|
||||
|
abspath.c | 32
@@ -40,7 +40,7 @@ const char *real_path(const char *path)
 
 	while (depth--) {
 		if (!is_directory(buf)) {
-			char *last_slash = find_last_dir_sep(buf);
+			char *last_slash = strrchr(buf, '/');
 			if (last_slash) {
 				*last_slash = '\0';
 				last_elem = xstrdup(last_slash + 1);
@@ -65,7 +65,7 @@ const char *real_path(const char *path)
 		if (len + strlen(last_elem) + 2 > PATH_MAX)
 			die ("Too long path name: '%s/%s'",
 			    buf, last_elem);
-		if (len && !is_dir_sep(buf[len-1]))
+		if (len && buf[len-1] != '/')
 			buf[len++] = '/';
 		strcpy(buf + len, last_elem);
 		free(last_elem);
@@ -139,31 +139,3 @@ const char *absolute_path(const char *path)
 	}
 	return buf;
 }
-
-/*
- * Unlike prefix_path, this should be used if the named file does
- * not have to interact with index entry; i.e. name of a random file
- * on the filesystem.
- */
-const char *prefix_filename(const char *pfx, int pfx_len, const char *arg)
-{
-	static char path[PATH_MAX];
-#ifndef WIN32
-	if (!pfx_len || is_absolute_path(arg))
-		return arg;
-	memcpy(path, pfx, pfx_len);
-	strcpy(path + pfx_len, arg);
-#else
-	char *p;
-	/* don't add prefix to absolute paths, but still replace '\' by '/' */
-	if (is_absolute_path(arg))
-		pfx_len = 0;
-	else if (pfx_len)
-		memcpy(path, pfx, pfx_len);
-	strcpy(path + pfx_len, arg);
-	for (p = path + pfx_len; *p; p++)
-		if (*p == '\\')
-			*p = '/';
-#endif
-	return path;
-}
archive-tar.c | 135
@@ -4,7 +4,6 @@
 #include "cache.h"
 #include "tar.h"
 #include "archive.h"
-#include "run-command.h"
 
 #define RECORDSIZE	(512)
 #define BLOCKSIZE	(RECORDSIZE * 20)
@@ -14,9 +13,6 @@ static unsigned long offset;
 
 static int tar_umask = 002;
 
-static int write_tar_filter_archive(const struct archiver *ar,
-				    struct archiver_args *args);
-
 /* writes out the whole block, but only if it is full */
 static void write_if_needed(void)
 {
@@ -224,67 +220,6 @@ static int write_global_extended_header(struct archiver_args *args)
 	return err;
 }
 
-static struct archiver **tar_filters;
-static int nr_tar_filters;
-static int alloc_tar_filters;
-
-static struct archiver *find_tar_filter(const char *name, int len)
-{
-	int i;
-	for (i = 0; i < nr_tar_filters; i++) {
-		struct archiver *ar = tar_filters[i];
-		if (!strncmp(ar->name, name, len) && !ar->name[len])
-			return ar;
-	}
-	return NULL;
-}
-
-static int tar_filter_config(const char *var, const char *value, void *data)
-{
-	struct archiver *ar;
-	const char *dot;
-	const char *name;
-	const char *type;
-	int namelen;
-
-	if (prefixcmp(var, "tar."))
-		return 0;
-	dot = strrchr(var, '.');
-	if (dot == var + 9)
-		return 0;
-
-	name = var + 4;
-	namelen = dot - name;
-	type = dot + 1;
-
-	ar = find_tar_filter(name, namelen);
-	if (!ar) {
-		ar = xcalloc(1, sizeof(*ar));
-		ar->name = xmemdupz(name, namelen);
-		ar->write_archive = write_tar_filter_archive;
-		ar->flags = ARCHIVER_WANT_COMPRESSION_LEVELS;
-		ALLOC_GROW(tar_filters, nr_tar_filters + 1, alloc_tar_filters);
-		tar_filters[nr_tar_filters++] = ar;
-	}
-
-	if (!strcmp(type, "command")) {
-		if (!value)
-			return config_error_nonbool(var);
-		free(ar->data);
-		ar->data = xstrdup(value);
-		return 0;
-	}
-	if (!strcmp(type, "remote")) {
-		if (git_config_bool(var, value))
-			ar->flags |= ARCHIVER_REMOTE;
-		else
-			ar->flags &= ~ARCHIVER_REMOTE;
-		return 0;
-	}
-
-	return 0;
-}
-
 static int git_tar_config(const char *var, const char *value, void *cb)
 {
 	if (!strcmp(var, "tar.umask")) {
@@ -296,15 +231,15 @@ static int git_tar_config(const char *var, const char *value, void *cb)
 	}
 	return 0;
 }
 
-	return tar_filter_config(var, value, cb);
+	return git_default_config(var, value, cb);
 }
 
-static int write_tar_archive(const struct archiver *ar,
-			     struct archiver_args *args)
+int write_tar_archive(struct archiver_args *args)
 {
 	int err = 0;
 
+	git_config(git_tar_config, NULL);
 	if (args->commit_sha1)
 		err = write_global_extended_header(args);
 	if (!err)
@@ -313,65 +248,3 @@ static int write_tar_archive(const struct archiver *ar,
 	write_trailer();
 	return err;
 }
-
-static int write_tar_filter_archive(const struct archiver *ar,
-				    struct archiver_args *args)
-{
-	struct strbuf cmd = STRBUF_INIT;
-	struct child_process filter;
-	const char *argv[2];
-	int r;
-
-	if (!ar->data)
-		die("BUG: tar-filter archiver called with no filter defined");
-
-	strbuf_addstr(&cmd, ar->data);
-	if (args->compression_level >= 0)
-		strbuf_addf(&cmd, " -%d", args->compression_level);
-
-	memset(&filter, 0, sizeof(filter));
-	argv[0] = cmd.buf;
-	argv[1] = NULL;
-	filter.argv = argv;
-	filter.use_shell = 1;
-	filter.in = -1;
-
-	if (start_command(&filter) < 0)
-		die_errno("unable to start '%s' filter", argv[0]);
-	close(1);
-	if (dup2(filter.in, 1) < 0)
-		die_errno("unable to redirect descriptor");
-	close(filter.in);
-
-	r = write_tar_archive(ar, args);
-
-	close(1);
-	if (finish_command(&filter) != 0)
-		die("'%s' filter reported error", argv[0]);
-
-	strbuf_release(&cmd);
-	return r;
-}
-
-static struct archiver tar_archiver = {
-	"tar",
-	write_tar_archive,
-	ARCHIVER_REMOTE
-};
-
-void init_tar_archiver(void)
-{
-	int i;
-	register_archiver(&tar_archiver);
-
-	tar_filter_config("tar.tgz.command", "gzip -cn", NULL);
-	tar_filter_config("tar.tgz.remote", "true", NULL);
-	tar_filter_config("tar.tar.gz.command", "gzip -cn", NULL);
-	tar_filter_config("tar.tar.gz.remote", "true", NULL);
-	git_config(git_tar_config, NULL);
-	for (i = 0; i < nr_tar_filters; i++) {
-		/* omit any filters that never had a command configured */
-		if (tar_filters[i]->data)
-			register_archiver(tar_filters[i]);
-	}
-}
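For context, the tar_filter_config() machinery removed above is what lets v1.7.7 users define extra archive formats via `tar.<name>.command` and `tar.<name>.remote` configuration. A filter would be configured like this (the `tar.xz` name and `xz -c` command are an illustrative choice, not something this diff sets up):

```
# .gitconfig fragment: "git archive --format=tar.xz" pipes the tar
# stream through the configured command; "remote = true" also offers
# the format to "git archive --remote" clients.
[tar "tar.xz"]
	command = xz -c
	remote = true
```

The built-in `tgz` and `tar.gz` entries seen in init_tar_archiver() above are pre-registered through exactly the same code path.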
@@ -261,8 +261,7 @@ static void dos_time(time_t *time, int *dos_date, int *dos_time)
 	*dos_time = t->tm_sec / 2 + t->tm_min * 32 + t->tm_hour * 2048;
 }
 
-static int write_zip_archive(const struct archiver *ar,
-			     struct archiver_args *args)
+int write_zip_archive(struct archiver_args *args)
 {
 	int err;
 
@@ -279,14 +278,3 @@ static int write_zip_archive(const struct archiver *ar,
 
 	return err;
 }
-
-static struct archiver zip_archiver = {
-	"zip",
-	write_zip_archive,
-	ARCHIVER_WANT_COMPRESSION_LEVELS|ARCHIVER_REMOTE
-};
-
-void init_zip_archiver(void)
-{
-	register_archiver(&zip_archiver);
-}
archive.c | 92
@@ -14,15 +14,16 @@ static char const * const archive_usage[] = {
 	NULL
 };
 
-static const struct archiver **archivers;
-static int nr_archivers;
-static int alloc_archivers;
-
-void register_archiver(struct archiver *ar)
-{
-	ALLOC_GROW(archivers, nr_archivers + 1, alloc_archivers);
-	archivers[nr_archivers++] = ar;
-}
+#define USES_ZLIB_COMPRESSION 1
+
+static const struct archiver {
+	const char *name;
+	write_archive_fn_t write_archive;
+	unsigned int flags;
+} archivers[] = {
+	{ "tar", write_tar_archive },
+	{ "zip", write_zip_archive, USES_ZLIB_COMPRESSION },
+};
 
 static void format_subst(const struct commit *commit,
 		const char *src, size_t len,
@@ -123,7 +124,7 @@ static int write_archive_entry(const unsigned char *sha1, const char *base,
 	path_without_prefix = path.buf + args->baselen;
 
 	setup_archive_check(check);
-	if (!git_check_attr(path_without_prefix, ARRAY_SIZE(check), check)) {
+	if (!git_checkattr(path_without_prefix, ARRAY_SIZE(check), check)) {
 		if (ATTR_TRUE(check[0].value))
 			return 0;
 		convert = ATTR_TRUE(check[1].value);
@@ -207,9 +208,9 @@ static const struct archiver *lookup_archiver(const char *name)
 	if (!name)
 		return NULL;
 
-	for (i = 0; i < nr_archivers; i++) {
-		if (!strcmp(name, archivers[i]->name))
-			return archivers[i];
+	for (i = 0; i < ARRAY_SIZE(archivers); i++) {
+		if (!strcmp(name, archivers[i].name))
+			return &archivers[i];
 	}
 	return NULL;
 }
@@ -298,10 +299,9 @@ static void parse_treeish_arg(const char **argv,
 	PARSE_OPT_NOARG | PARSE_OPT_NONEG | PARSE_OPT_HIDDEN, NULL, (p) }
 
 static int parse_archive_args(int argc, const char **argv,
-		const struct archiver **ar, struct archiver_args *args,
-		const char *name_hint, int is_remote)
+		const struct archiver **ar, struct archiver_args *args)
 {
-	const char *format = NULL;
+	const char *format = "tar";
 	const char *base = NULL;
 	const char *remote = NULL;
 	const char *exec = NULL;
@@ -355,27 +355,21 @@ static int parse_archive_args(int argc, const char **argv,
 		base = "";
 
 	if (list) {
-		for (i = 0; i < nr_archivers; i++)
-			if (!is_remote || archivers[i]->flags & ARCHIVER_REMOTE)
-				printf("%s\n", archivers[i]->name);
+		for (i = 0; i < ARRAY_SIZE(archivers); i++)
+			printf("%s\n", archivers[i].name);
 		exit(0);
 	}
 
-	if (!format && name_hint)
-		format = archive_format_from_filename(name_hint);
-	if (!format)
-		format = "tar";
-
 	/* We need at least one parameter -- tree-ish */
 	if (argc < 1)
 		usage_with_options(archive_usage, opts);
 	*ar = lookup_archiver(format);
-	if (!*ar || (is_remote && !((*ar)->flags & ARCHIVER_REMOTE)))
+	if (!*ar)
 		die("Unknown archive format '%s'", format);
 
 	args->compression_level = Z_DEFAULT_COMPRESSION;
 	if (compression_level != -1) {
-		if ((*ar)->flags & ARCHIVER_WANT_COMPRESSION_LEVELS)
+		if ((*ar)->flags & USES_ZLIB_COMPRESSION)
 			args->compression_level = compression_level;
 		else {
 			die("Argument not supported for format '%s': -%d",
@@ -391,55 +385,19 @@ static int parse_archive_args(int argc, const char **argv,
 }
 
 int write_archive(int argc, const char **argv, const char *prefix,
-		  int setup_prefix, const char *name_hint, int remote)
+		  int setup_prefix)
 {
-	int nongit = 0;
 	const struct archiver *ar = NULL;
 	struct archiver_args args;
 
+	argc = parse_archive_args(argc, argv, &ar, &args);
 	if (setup_prefix && prefix == NULL)
-		prefix = setup_git_directory_gently(&nongit);
-
-	git_config(git_default_config, NULL);
-	init_tar_archiver();
-	init_zip_archiver();
-
-	argc = parse_archive_args(argc, argv, &ar, &args, name_hint, remote);
-	if (nongit) {
-		/*
-		 * We know this will die() with an error, so we could just
-		 * die ourselves; but its error message will be more specific
-		 * than what we could write here.
-		 */
-		setup_git_directory();
-	}
+		prefix = setup_git_directory();
 
 	parse_treeish_arg(argv, &args, prefix);
 	parse_pathspec_arg(argv + 1, &args);
 
-	return ar->write_archive(ar, &args);
-}
-
-static int match_extension(const char *filename, const char *ext)
-{
-	int prefixlen = strlen(filename) - strlen(ext);
-
-	/*
-	 * We need 1 character for the '.', and 1 character to ensure that the
-	 * prefix is non-empty (k.e., we don't match .tar.gz with no actual
-	 * filename).
-	 */
-	if (prefixlen < 2 || filename[prefixlen-1] != '.')
-		return 0;
-	return !strcmp(filename + prefixlen, ext);
-}
-
-const char *archive_format_from_filename(const char *filename)
-{
-	int i;
-
-	for (i = 0; i < nr_archivers; i++)
-		if (match_extension(filename, archivers[i]->name))
-			return archivers[i]->name;
-	return NULL;
+	git_config(git_default_config, NULL);
+
+	return ar->write_archive(&args);
 }
archive.h | 23
@@ -14,24 +14,17 @@ struct archiver_args {
 	int compression_level;
 };
 
-#define ARCHIVER_WANT_COMPRESSION_LEVELS 1
-#define ARCHIVER_REMOTE 2
-struct archiver {
-	const char *name;
-	int (*write_archive)(const struct archiver *, struct archiver_args *);
-	unsigned flags;
-	void *data;
-};
-extern void register_archiver(struct archiver *);
+typedef int (*write_archive_fn_t)(struct archiver_args *);
 
-extern void init_tar_archiver(void);
-extern void init_zip_archiver(void);
 
 typedef int (*write_archive_entry_fn_t)(struct archiver_args *args, const unsigned char *sha1, const char *path, size_t pathlen, unsigned int mode, void *buffer, unsigned long size);
 
-extern int write_archive_entries(struct archiver_args *args, write_archive_entry_fn_t write_entry);
-extern int write_archive(int argc, const char **argv, const char *prefix, int setup_prefix, const char *name_hint, int remote);
+/*
+ * Archive-format specific backends.
+ */
+extern int write_tar_archive(struct archiver_args *);
+extern int write_zip_archive(struct archiver_args *);
 
-const char *archive_format_from_filename(const char *filename);
+extern int write_archive_entries(struct archiver_args *args, write_archive_entry_fn_t write_entry);
+extern int write_archive(int argc, const char **argv, const char *prefix, int setup_prefix);
 
 #endif /* ARCHIVE_H */
attr.c | 79
@@ -36,11 +36,6 @@ static int attr_nr;
 static struct git_attr_check *check_all_attr;
 static struct git_attr *(git_attr_hash[HASHSIZE]);
 
-char *git_attr_name(struct git_attr *attr)
-{
-	return attr->name;
-}
-
 static unsigned hash_name(const char *name, int namelen)
 {
 	unsigned val = 0, c;
@@ -55,10 +50,12 @@ static unsigned hash_name(const char *name, int namelen)
 static int invalid_attr_name(const char *name, int namelen)
 {
 	/*
-	 * Attribute name cannot begin with '-' and must consist of
-	 * characters from [-A-Za-z0-9_.].
+	 * Attribute name cannot begin with '-' and from
+	 * [-A-Za-z0-9_.].  We'd specifically exclude '=' for now,
+	 * as we might later want to allow non-binary value for
+	 * attributes, e.g. "*.svg merge=special-merge-program-for-svg"
 	 */
-	if (namelen <= 0 || *name == '-')
+	if (*name == '-')
 		return -1;
 	while (namelen--) {
 		char ch = *name++;
@@ -535,18 +532,11 @@ static void bootstrap_attr_stack(void)
 	}
 }
 
-static void prepare_attr_stack(const char *path)
+static void prepare_attr_stack(const char *path, int dirlen)
 {
 	struct attr_stack *elem, *info;
-	int dirlen, len;
+	int len;
 	struct strbuf pathbuf;
-	const char *cp;
-
-	cp = strrchr(path, '/');
-	if (!cp)
-		dirlen = 0;
-	else
-		dirlen = cp - path;
 
 	strbuf_init(&pathbuf, dirlen+2+strlen(GITATTRIBUTES_FILE));
 
@@ -565,7 +555,8 @@ static void prepare_attr_stack(const char *path)
 	 * .gitattributes in deeper directories to shallower ones,
 	 * and finally use the built-in set as the default.
 	 */
-	bootstrap_attr_stack();
+	if (!attr_stack)
+		bootstrap_attr_stack();
 
 	/*
 	 * Pop the "info" one that is always at the top of the stack.
@@ -712,30 +703,26 @@ static int macroexpand_one(int attr_nr, int rem)
 	return rem;
 }
 
-/*
- * Collect all attributes for path into the array pointed to by
- * check_all_attr.
- */
-static void collect_all_attrs(const char *path)
+int git_checkattr(const char *path, int num, struct git_attr_check *check)
 {
 	struct attr_stack *stk;
-	int i, pathlen, rem;
+	const char *cp;
+	int dirlen, pathlen, i, rem;
 
-	prepare_attr_stack(path);
+	bootstrap_attr_stack();
 	for (i = 0; i < attr_nr; i++)
 		check_all_attr[i].value = ATTR__UNKNOWN;
 
 	pathlen = strlen(path);
+	cp = strrchr(path, '/');
+	if (!cp)
+		dirlen = 0;
+	else
+		dirlen = cp - path;
+	prepare_attr_stack(path, dirlen);
 	rem = attr_nr;
 	for (stk = attr_stack; 0 < rem && stk; stk = stk->prev)
 		rem = fill(path, pathlen, stk, rem);
-}
-
-int git_check_attr(const char *path, int num, struct git_attr_check *check)
-{
-	int i;
-
-	collect_all_attrs(path);
 
 	for (i = 0; i < num; i++) {
 		const char *value = check_all_attr[check[i].attr->attr_nr].value;
@@ -747,34 +734,6 @@ int git_check_attr(const char *path, int num, struct git_attr_check *check)
 	return 0;
 }
 
-int git_all_attrs(const char *path, int *num, struct git_attr_check **check)
-{
-	int i, count, j;
-
-	collect_all_attrs(path);
-
-	/* Count the number of attributes that are set. */
-	count = 0;
-	for (i = 0; i < attr_nr; i++) {
-		const char *value = check_all_attr[i].value;
-		if (value != ATTR__UNSET && value != ATTR__UNKNOWN)
-			++count;
-	}
-	*num = count;
-	*check = xmalloc(sizeof(**check) * count);
-	j = 0;
-	for (i = 0; i < attr_nr; i++) {
-		const char *value = check_all_attr[i].value;
-		if (value != ATTR__UNSET && value != ATTR__UNKNOWN) {
-			(*check)[j].attr = check_all_attr[i].attr;
-			(*check)[j].value = value;
-			++j;
-		}
-	}
-
-	return 0;
-}
-
 void git_attr_set_direction(enum git_attr_direction new, struct index_state *istate)
 {
 	enum git_attr_direction old = direction;
attr.h | 20
@@ -20,7 +20,7 @@ extern const char git_attr__false[];
 #define ATTR_UNSET(v) ((v) == NULL)
 
 /*
- * Send one or more git_attr_check to git_check_attr(), and
+ * Send one or more git_attr_check to git_checkattr(), and
  * each 'value' member tells what its value is.
  * Unset one is returned as NULL.
  */
@@ -29,23 +29,7 @@ struct git_attr_check {
 	const char *value;
 };
 
-/*
- * Return the name of the attribute represented by the argument.  The
- * return value is a pointer to a null-delimited string that is part
- * of the internal data structure; it should not be modified or freed.
- */
-char *git_attr_name(struct git_attr *);
-
-int git_check_attr(const char *path, int, struct git_attr_check *);
-
-/*
- * Retrieve all attributes that apply to the specified path.  *num
- * will be set the the number of attributes on the path; **check will
- * be set to point at a newly-allocated array of git_attr_check
- * objects describing the attributes and their values.  *check must be
- * free()ed by the caller.
- */
-int git_all_attrs(const char *path, int *num, struct git_attr_check **check);
+int git_checkattr(const char *path, int, struct git_attr_check *);
 
 enum git_attr_direction {
 	GIT_ATTR_CHECKIN,
bisect.c | 33
@@ -24,7 +24,6 @@ struct argv_array {
 
 static const char *argv_checkout[] = {"checkout", "-q", NULL, "--", NULL};
 static const char *argv_show_branch[] = {"show-branch", NULL, NULL};
-static const char *argv_update_ref[] = {"update-ref", "--no-deref", "BISECT_HEAD", NULL, NULL};
 
 /* bits #0-15 in revision.h */
 
@@ -708,23 +707,16 @@ static void mark_expected_rev(char *bisect_rev_hex)
 	die("closing file %s: %s", filename, strerror(errno));
 }
 
-static int bisect_checkout(char *bisect_rev_hex, int no_checkout)
+static int bisect_checkout(char *bisect_rev_hex)
 {
 	int res;
 
 	mark_expected_rev(bisect_rev_hex);
 
 	argv_checkout[2] = bisect_rev_hex;
-	if (no_checkout) {
-		argv_update_ref[3] = bisect_rev_hex;
-		if (run_command_v_opt(argv_update_ref, RUN_GIT_CMD))
-			die("update-ref --no-deref HEAD failed on %s",
-			    bisect_rev_hex);
-	} else {
-		res = run_command_v_opt(argv_checkout, RUN_GIT_CMD);
-		if (res)
-			exit(res);
-	}
+	res = run_command_v_opt(argv_checkout, RUN_GIT_CMD);
+	if (res)
+		exit(res);
 
 	argv_show_branch[1] = bisect_rev_hex;
 	return run_command_v_opt(argv_show_branch, RUN_GIT_CMD);
@@ -796,7 +788,7 @@ static void handle_skipped_merge_base(const unsigned char *mb)
  * - If one is "skipped", we can't know but we should warn.
  * - If we don't know, we should check it out and ask the user to test.
  */
-static void check_merge_bases(int no_checkout)
+static void check_merge_bases(void)
 {
 	struct commit_list *result;
 	int rev_nr;
@@ -814,7 +806,7 @@ static void check_merge_bases(int no_checkout)
 		handle_skipped_merge_base(mb);
 	} else {
 		printf("Bisecting: a merge base must be tested\n");
-		exit(bisect_checkout(sha1_to_hex(mb), no_checkout));
+		exit(bisect_checkout(sha1_to_hex(mb)));
 	}
 }
 
@@ -857,7 +849,7 @@ static int check_ancestors(const char *prefix)
 * If a merge base must be tested by the user, its source code will be
 * checked out to be tested by the user and we will exit.
 */
-static void check_good_are_ancestors_of_bad(const char *prefix, int no_checkout)
+static void check_good_are_ancestors_of_bad(const char *prefix)
 {
 	const char *filename = git_path("BISECT_ANCESTORS_OK");
 	struct stat st;
@@ -876,7 +868,7 @@ static void check_good_are_ancestors_of_bad(const char *prefix, int no_checkout)
 
 	/* Check if all good revs are ancestor of the bad rev. */
 	if (check_ancestors(prefix))
-		check_merge_bases(no_checkout);
+		check_merge_bases();
 
 	/* Create file BISECT_ANCESTORS_OK. */
 	fd = open(filename, O_CREAT | O_TRUNC | O_WRONLY, 0600);
@@ -916,11 +908,8 @@ static void show_diff_tree(const char *prefix, struct commit *commit)
 * We use the convention that exiting with an exit code 10 means that
 * the bisection process finished successfully.
 * In this case the calling shell script should exit 0.
- *
- * If no_checkout is non-zero, the bisection process does not
- * checkout the trial commit but instead simply updates BISECT_HEAD.
 */
-int bisect_next_all(const char *prefix, int no_checkout)
+int bisect_next_all(const char *prefix)
 {
 	struct rev_info revs;
 	struct commit_list *tried;
@@ -931,7 +920,7 @@ int bisect_next_all(const char *prefix, int no_checkout)
 	if (read_bisect_refs())
 		die("reading bisect refs failed");
 
-	check_good_are_ancestors_of_bad(prefix, no_checkout);
+	check_good_are_ancestors_of_bad(prefix);
 
 	bisect_rev_setup(&revs, prefix, "%s", "^%s", 1);
 	revs.limited = 1;
@@ -977,6 +966,6 @@ int bisect_next_all(const char *prefix, int no_checkout)
 	       "(roughly %d step%s)\n", nr, (nr == 1 ? "" : "s"),
 	       steps, (steps == 1 ? "" : "s"));
 
-	return bisect_checkout(bisect_rev_hex, no_checkout);
+	return bisect_checkout(bisect_rev_hex);
 }
2	bisect.h
@ -27,7 +27,7 @@ struct rev_list_info {
        const char *header_prefix;
};

extern int bisect_next_all(const char *prefix, int no_checkout);
extern int bisect_next_all(const char *prefix);

extern int estimate_bisect_steps(int all);

@ -24,8 +24,7 @@ static void create_output_file(const char *output_file)
}

static int run_remote_archiver(int argc, const char **argv,
                               const char *remote, const char *exec,
                               const char *name_hint)
                               const char *remote, const char *exec)
{
        char buf[LARGE_PACKET_MAX];
        int fd[2], i, len, rv;
@ -38,17 +37,6 @@ static int run_remote_archiver(int argc, const char **argv,
        transport = transport_get(_remote, _remote->url[0]);
        transport_connect(transport, "git-upload-archive", exec, fd);

        /*
         * Inject a fake --format field at the beginning of the
         * arguments, with the format inferred from our output
         * filename. This way explicit --format options can override
         * it.
         */
        if (name_hint) {
                const char *format = archive_format_from_filename(name_hint);
                if (format)
                        packet_write(fd[1], "argument --format=%s\n", format);
        }
        for (i = 1; i < argc; i++)
                packet_write(fd[1], "argument %s\n", argv[i]);
        packet_flush(fd[1]);
@ -75,6 +63,17 @@ static int run_remote_archiver(int argc, const char **argv,
        return !!rv;
}

static const char *format_from_name(const char *filename)
{
        const char *ext = strrchr(filename, '.');
        if (!ext)
                return NULL;
        ext++;
        if (!strcasecmp(ext, "zip"))
                return "--format=zip";
        return NULL;
}
#define PARSE_OPT_KEEP_ALL ( PARSE_OPT_KEEP_DASHDASH | \
                             PARSE_OPT_KEEP_ARGV0 | \
                             PARSE_OPT_KEEP_UNKNOWN | \
@ -85,6 +84,7 @@ int cmd_archive(int argc, const char **argv, const char *prefix)
        const char *exec = "git-upload-archive";
        const char *output = NULL;
        const char *remote = NULL;
        const char *format_option = NULL;
        struct option local_opts[] = {
                OPT_STRING('o', "output", &output, "file",
                        "write the archive to this file"),
@ -98,13 +98,32 @@ int cmd_archive(int argc, const char **argv, const char *prefix)
        argc = parse_options(argc, argv, prefix, local_opts, NULL,
                             PARSE_OPT_KEEP_ALL);

        if (output)
        if (output) {
                create_output_file(output);
                format_option = format_from_name(output);
        }

        /*
         * We have enough room in argv[] to muck it in place, because
         * --output must have been given on the original command line
         * if we get to this point, and parse_options() must have eaten
         * it, i.e. we can add back one element to the array.
         *
         * We add a fake --format option at the beginning, with the
         * format inferred from our output filename. This way explicit
         * --format options can override it, and the fake option is
         * inserted before any "--" that might have been given.
         */
        if (format_option) {
                memmove(argv + 2, argv + 1, sizeof(*argv) * argc);
                argv[1] = format_option;
                argv[++argc] = NULL;
        }

        if (remote)
                return run_remote_archiver(argc, argv, remote, exec, output);
                return run_remote_archiver(argc, argv, remote, exec);

        setvbuf(stderr, NULL, _IOLBF, BUFSIZ);

        return write_archive(argc, argv, prefix, 1, output, 0);
        return write_archive(argc, argv, prefix, 1);
}

@ -4,19 +4,16 @@
#include "bisect.h"

static const char * const git_bisect_helper_usage[] = {
        "git bisect--helper --next-all [--no-checkout]",
        "git bisect--helper --next-all",
        NULL
};

int cmd_bisect__helper(int argc, const char **argv, const char *prefix)
{
        int next_all = 0;
        int no_checkout = 0;
        struct option options[] = {
                OPT_BOOLEAN(0, "next-all", &next_all,
                        "perform 'git bisect next'"),
                OPT_BOOLEAN(0, "no-checkout", &no_checkout,
                        "update BISECT_HEAD instead of checking out the current commit"),
                OPT_END()
        };

@ -27,5 +24,5 @@ int cmd_bisect__helper(int argc, const char **argv, const char *prefix)
                usage_with_options(git_bisect_helper_usage, options);

        /* next-all */
        return bisect_next_all(prefix, no_checkout);
        return bisect_next_all(prefix);
}

@ -4,28 +4,28 @@
#include "quote.h"
#include "parse-options.h"

static int all_attrs;
static int stdin_paths;
static const char * const check_attr_usage[] = {
"git check-attr [-a | --all | attr...] [--] pathname...",
"git check-attr --stdin [-a | --all | attr...] < <list-of-paths>",
"git check-attr attr... [--] pathname...",
"git check-attr --stdin attr... < <list-of-paths>",
NULL
};

static int null_term_line;

static const struct option check_attr_options[] = {
        OPT_BOOLEAN('a', "all", &all_attrs, "report all attributes set on file"),
        OPT_BOOLEAN(0 , "stdin", &stdin_paths, "read file names from stdin"),
        OPT_BOOLEAN('z', NULL, &null_term_line,
                "input paths are terminated by a null character"),
        OPT_END()
};

static void output_attr(int cnt, struct git_attr_check *check,
                        const char *file)
static void check_attr(int cnt, struct git_attr_check *check,
                       const char **name, const char *file)
{
        int j;
        if (git_checkattr(file, cnt, check))
                die("git_checkattr died");
        for (j = 0; j < cnt; j++) {
                const char *value = check[j].value;

@ -37,30 +37,12 @@ static void output_attr(int cnt, struct git_attr_check *check,
                        value = "unspecified";

                quote_c_style(file, NULL, stdout, 0);
                printf(": %s: %s\n", git_attr_name(check[j].attr), value);
                printf(": %s: %s\n", name[j], value);
        }
}

static void check_attr(const char *prefix, int cnt,
                       struct git_attr_check *check, const char *file)
{
        char *full_path =
                prefix_path(prefix, prefix ? strlen(prefix) : 0, file);
        if (check != NULL) {
                if (git_check_attr(full_path, cnt, check))
                        die("git_check_attr died");
                output_attr(cnt, check, file);
        } else {
                if (git_all_attrs(full_path, &cnt, &check))
                        die("git_all_attrs died");
                output_attr(cnt, check, file);
                free(check);
        }
        free(full_path);
}

static void check_attr_stdin_paths(const char *prefix, int cnt,
                                   struct git_attr_check *check)
static void check_attr_stdin_paths(int cnt, struct git_attr_check *check,
                                   const char **name)
{
        struct strbuf buf, nbuf;
        int line_termination = null_term_line ? 0 : '\n';
@ -74,26 +56,23 @@ static void check_attr_stdin_paths(const char *prefix, int cnt,
                                die("line is badly quoted");
                        strbuf_swap(&buf, &nbuf);
                }
                check_attr(prefix, cnt, check, buf.buf);
                check_attr(cnt, check, name, buf.buf);
                maybe_flush_or_die(stdout, "attribute to stdout");
        }
        strbuf_release(&buf);
        strbuf_release(&nbuf);
}

static NORETURN void error_with_usage(const char *msg)
{
        error("%s", msg);
        usage_with_options(check_attr_usage, check_attr_options);
}

int cmd_check_attr(int argc, const char **argv, const char *prefix)
{
        struct git_attr_check *check;
        int cnt, i, doubledash, filei;
        int cnt, i, doubledash;
        const char *errstr = NULL;

        argc = parse_options(argc, argv, prefix, check_attr_options,
                             check_attr_usage, PARSE_OPT_KEEP_DASHDASH);
        if (!argc)
                usage_with_options(check_attr_usage, check_attr_options);

        if (read_cache() < 0) {
                die("invalid cache");
@ -105,63 +84,39 @@ int cmd_check_attr(int argc, const char **argv, const char *prefix)
                        doubledash = i;
        }

        /* Process --all and/or attribute arguments: */
        if (all_attrs) {
                if (doubledash >= 1)
                        error_with_usage("Attributes and --all both specified");

                cnt = 0;
                filei = doubledash + 1;
        } else if (doubledash == 0) {
                error_with_usage("No attribute specified");
        } else if (doubledash < 0) {
                if (!argc)
                        error_with_usage("No attribute specified");

                if (stdin_paths) {
                        /* Treat all arguments as attribute names. */
                        cnt = argc;
                        filei = argc;
                } else {
                        /* Treat exactly one argument as an attribute name. */
                        cnt = 1;
                        filei = 1;
                }
        } else {
        /* If there is no double dash, we handle only one attribute */
        if (doubledash < 0) {
                cnt = 1;
                doubledash = 0;
        } else
                cnt = doubledash;
                filei = doubledash + 1;
        doubledash++;

        if (cnt <= 0)
                errstr = "No attribute specified";
        else if (stdin_paths && doubledash < argc)
                errstr = "Can't specify files with --stdin";
        if (errstr) {
                error("%s", errstr);
                usage_with_options(check_attr_usage, check_attr_options);
        }

        /* Check file argument(s): */
        if (stdin_paths) {
                if (filei < argc)
                        error_with_usage("Can't specify files with --stdin");
        } else {
                if (filei >= argc)
                        error_with_usage("No file specified");
        }

        if (all_attrs) {
                check = NULL;
        } else {
                check = xcalloc(cnt, sizeof(*check));
                for (i = 0; i < cnt; i++) {
                        const char *name;
                        struct git_attr *a;
                        name = argv[i];
                        a = git_attr(name);
                        if (!a)
                                return error("%s: not a valid attribute name",
                                        name);
                        check[i].attr = a;
                }
        check = xcalloc(cnt, sizeof(*check));
        for (i = 0; i < cnt; i++) {
                const char *name;
                struct git_attr *a;
                name = argv[i];
                a = git_attr(name);
                if (!a)
                        return error("%s: not a valid attribute name", name);
                check[i].attr = a;
        }

        if (stdin_paths)
                check_attr_stdin_paths(prefix, cnt, check);
                check_attr_stdin_paths(cnt, check, argv);
        else {
                for (i = filei; i < argc; i++)
                        check_attr(prefix, cnt, check, argv[i]);
                for (i = doubledash; i < argc; i++)
                        check_attr(cnt, check, argv, argv[i]);
                maybe_flush_or_die(stdout, "attribute to stdout");
        }
        return 0;
@ -12,8 +12,8 @@ static const char builtin_check_ref_format_usage[] =
"   or: git check-ref-format --branch <branchname-shorthand>";

/*
 * Replace each run of adjacent slashes in src with a single slash,
 * and write the result to dst.
 * Remove leading slashes and replace each run of adjacent slashes in
 * src with a single slash, and write the result to dst.
 *
 * This function is similar to normalize_path_copy(), but stripped down
 * to meet check_ref_format's simpler needs.
@ -21,7 +21,7 @@ static const char builtin_check_ref_format_usage[] =
static void collapse_slashes(char *dst, const char *src)
{
        char ch;
        char prev = '\0';
        char prev = '/';

        while ((ch = *src++) != '\0') {
                if (prev == '/' && ch == prev)

@ -657,25 +657,24 @@ static void suggest_reattach(struct commit *commit, struct rev_info *revs)
                        "Warning: you are leaving %d commit behind, "
                        "not connected to\n"
                        "any of your branches:\n\n"
                        "%s\n",
                        "%s\n"
                        "If you want to keep it by creating a new branch, "
                        "this may be a good time\nto do so with:\n\n"
                        " git branch new_branch_name %s\n\n",
                        /* The plural version */
                        "Warning: you are leaving %d commits behind, "
                        "not connected to\n"
                        "any of your branches:\n\n"
                        "%s\n",
                        "%s\n"
                        "If you want to keep them by creating a new branch, "
                        "this may be a good time\nto do so with:\n\n"
                        " git branch new_branch_name %s\n\n",
                        /* Give ngettext() the count */
                        lost),
                lost,
                sb.buf);
                sb.buf,
                sha1_to_hex(commit->object.sha1));
        strbuf_release(&sb);

        if (advice_detached_head)
                fprintf(stderr,
                        _(
                        "If you want to keep them by creating a new branch, "
                        "this may be a good time\nto do so with:\n\n"
                        " git branch new_branch_name %s\n\n"),
                        sha1_to_hex(commit->object.sha1));
}

/*

171	builtin/clone.c
@ -39,14 +39,23 @@ static const char * const builtin_clone_usage[] = {

static int option_no_checkout, option_bare, option_mirror;
static int option_local, option_no_hardlinks, option_shared, option_recursive;
static char *option_template, *option_reference, *option_depth;
static char *option_template, *option_depth;
static char *option_origin = NULL;
static char *option_branch = NULL;
static const char *real_git_dir;
static char *option_upload_pack = "git-upload-pack";
static int option_verbosity;
static int option_progress;
static struct string_list option_config;
static struct string_list option_reference;

static int opt_parse_reference(const struct option *opt, const char *arg, int unset)
{
        struct string_list *option_reference = opt->value;
        if (!arg)
                return -1;
        string_list_append(option_reference, arg);
        return 0;
}

static struct option builtin_clone_options[] = {
        OPT__VERBOSITY(&option_verbosity),
@ -72,8 +81,8 @@ static struct option builtin_clone_options[] = {
                "initialize submodules in the clone"),
        OPT_STRING(0, "template", &option_template, "template-directory",
                "directory from which templates will be used"),
        OPT_STRING(0, "reference", &option_reference, "repo",
                "reference repository"),
        OPT_CALLBACK(0 , "reference", &option_reference, "repo",
                "reference repository", &opt_parse_reference),
        OPT_STRING('o', "origin", &option_origin, "branch",
                "use <branch> instead of 'origin' to track upstream"),
        OPT_STRING('b', "branch", &option_branch, "branch",
@ -84,8 +93,7 @@ static struct option builtin_clone_options[] = {
                "create a shallow clone of that depth"),
        OPT_STRING(0, "separate-git-dir", &real_git_dir, "gitdir",
                "separate git dir from working tree"),
        OPT_STRING_LIST('c', "config", &option_config, "key=value",
                "set config inside the new repository"),

        OPT_END()
};

@ -103,9 +111,26 @@ static char *get_repo_path(const char *repo, int *is_bundle)
        for (i = 0; i < ARRAY_SIZE(suffix); i++) {
                const char *path;
                path = mkpath("%s%s", repo, suffix[i]);
                if (is_directory(path)) {
                if (stat(path, &st))
                        continue;
                if (S_ISDIR(st.st_mode)) {
                        *is_bundle = 0;
                        return xstrdup(absolute_path(path));
                } else if (S_ISREG(st.st_mode) && st.st_size > 8) {
                        /* Is it a "gitfile"? */
                        char signature[8];
                        int len, fd = open(path, O_RDONLY);
                        if (fd < 0)
                                continue;
                        len = read_in_full(fd, signature, 8);
                        close(fd);
                        if (len != 8 || strncmp(signature, "gitdir: ", 8))
                                continue;
                        path = read_gitfile(path);
                        if (path) {
                                *is_bundle = 0;
                                return xstrdup(absolute_path(path));
                        }
                }
        }
@ -199,39 +224,80 @@ static void strip_trailing_slashes(char *dir)
        *end = '\0';
}

static void setup_reference(const char *repo)
static int add_one_reference(struct string_list_item *item, void *cb_data)
{
        const char *ref_git;
        char *ref_git_copy;

        char *ref_git;
        struct strbuf alternate = STRBUF_INIT;
        struct remote *remote;
        struct transport *transport;
        const struct ref *extra;

        ref_git = real_path(option_reference);

        if (is_directory(mkpath("%s/.git/objects", ref_git)))
                ref_git = mkpath("%s/.git", ref_git);
        else if (!is_directory(mkpath("%s/objects", ref_git)))
        /* Beware: real_path() and mkpath() return static buffer */
        ref_git = xstrdup(real_path(item->string));
        if (is_directory(mkpath("%s/.git/objects", ref_git))) {
                char *ref_git_git = xstrdup(mkpath("%s/.git", ref_git));
                free(ref_git);
                ref_git = ref_git_git;
        } else if (!is_directory(mkpath("%s/objects", ref_git)))
                die(_("reference repository '%s' is not a local directory."),
                        option_reference);
                        item->string);

        ref_git_copy = xstrdup(ref_git);
        strbuf_addf(&alternate, "%s/objects", ref_git);
        add_to_alternates_file(alternate.buf);
        strbuf_release(&alternate);

        add_to_alternates_file(ref_git_copy);

        remote = remote_get(ref_git_copy);
        transport = transport_get(remote, ref_git_copy);
        remote = remote_get(ref_git);
        transport = transport_get(remote, ref_git);
        for (extra = transport_get_remote_refs(transport); extra;
             extra = extra->next)
                add_extra_ref(extra->name, extra->old_sha1, 0);

        transport_disconnect(transport);

        free(ref_git_copy);
        free(ref_git);
        return 0;
}

static void copy_or_link_directory(struct strbuf *src, struct strbuf *dest)
static void setup_reference(void)
{
        for_each_string_list(&option_reference, add_one_reference, NULL);
}

static void copy_alternates(struct strbuf *src, struct strbuf *dst,
                            const char *src_repo)
{
        /*
         * Read from the source objects/info/alternates file
         * and copy the entries to corresponding file in the
         * destination repository with add_to_alternates_file().
         * Both src and dst have "$path/objects/info/alternates".
         *
         * Instead of copying bit-for-bit from the original,
         * we need to append to existing one so that the already
         * created entry via "clone -s" is not lost, and also
         * to turn entries with paths relative to the original
         * absolute, so that they can be used in the new repository.
         */
        FILE *in = fopen(src->buf, "r");
        struct strbuf line = STRBUF_INIT;

        while (strbuf_getline(&line, in, '\n') != EOF) {
                char *abs_path, abs_buf[PATH_MAX];
                if (!line.len || line.buf[0] == '#')
                        continue;
                if (is_absolute_path(line.buf)) {
                        add_to_alternates_file(line.buf);
                        continue;
                }
                abs_path = mkpath("%s/objects/%s", src_repo, line.buf);
                normalize_path_copy(abs_buf, abs_path);
                add_to_alternates_file(abs_buf);
        }
        strbuf_release(&line);
        fclose(in);
}
static void copy_or_link_directory(struct strbuf *src, struct strbuf *dest,
                                   const char *src_repo, int src_baselen)
{
        struct dirent *de;
        struct stat buf;
@ -267,7 +333,14 @@ static void copy_or_link_directory(struct strbuf *src, struct strbuf *dest)
                }
                if (S_ISDIR(buf.st_mode)) {
                        if (de->d_name[0] != '.')
                                copy_or_link_directory(src, dest);
                                copy_or_link_directory(src, dest,
                                                       src_repo, src_baselen);
                        continue;
                }

                /* Files that cannot be copied bit-for-bit... */
                if (!strcmp(src->buf + src_baselen, "/info/alternates")) {
                        copy_alternates(src, dest, src_repo);
                        continue;
                }

@ -290,17 +363,20 @@ static const struct ref *clone_local(const char *src_repo,
                                     const char *dest_repo)
{
        const struct ref *ret;
        struct strbuf src = STRBUF_INIT;
        struct strbuf dest = STRBUF_INIT;
        struct remote *remote;
        struct transport *transport;

        if (option_shared)
                add_to_alternates_file(src_repo);
        else {
        if (option_shared) {
                struct strbuf alt = STRBUF_INIT;
                strbuf_addf(&alt, "%s/objects", src_repo);
                add_to_alternates_file(alt.buf);
                strbuf_release(&alt);
        } else {
                struct strbuf src = STRBUF_INIT;
                struct strbuf dest = STRBUF_INIT;
                strbuf_addf(&src, "%s/objects", src_repo);
                strbuf_addf(&dest, "%s/objects", dest_repo);
                copy_or_link_directory(&src, &dest);
                copy_or_link_directory(&src, &dest, src_repo, src.len);
                strbuf_release(&src);
                strbuf_release(&dest);
        }
@ -345,9 +421,8 @@ static void remove_junk_on_signal(int signo)
static struct ref *wanted_peer_refs(const struct ref *refs,
                                    struct refspec *refspec)
{
        struct ref *head = copy_ref(find_ref_by_name(refs, "HEAD"));
        struct ref *local_refs = head;
        struct ref **tail = head ? &head->next : &local_refs;
        struct ref *local_refs = NULL;
        struct ref **tail = &local_refs;

        get_fetch_map(refs, refspec, &tail, 0);
        if (!option_mirror)
@ -360,32 +435,13 @@ static void write_remote_refs(const struct ref *local_refs)
{
        const struct ref *r;

        for (r = local_refs; r; r = r->next) {
                if (!r->peer_ref)
                        continue;
        for (r = local_refs; r; r = r->next)
                add_extra_ref(r->peer_ref->name, r->old_sha1, 0);
        }

        pack_refs(PACK_REFS_ALL);
        clear_extra_refs();
}

static int write_one_config(const char *key, const char *value, void *data)
{
        return git_config_set_multivar(key, value ? value : "true", "^$", 0);
}

static void write_config(struct string_list *config)
{
        int i;

        for (i = 0; i < config->nr; i++) {
                if (git_config_parse_parameter(config->items[i].string,
                                               write_one_config, NULL) < 0)
                        die("unable to write parameters to config file");
        }
}

int cmd_clone(int argc, const char **argv, const char *prefix)
{
        int is_bundle = 0, is_local;
@ -504,7 +560,6 @@ int cmd_clone(int argc, const char **argv, const char *prefix)
                printf(_("Cloning into %s...\n"), dir);
        }
        init_db(option_template, INIT_DB_QUIET);
        write_config(&option_config);

        /*
         * At this point, the config exists, so we do not need the
@ -544,8 +599,8 @@ int cmd_clone(int argc, const char **argv, const char *prefix)
        git_config_set(key.buf, repo);
        strbuf_reset(&key);

        if (option_reference)
                setup_reference(git_dir);
        if (option_reference.nr)
                setup_reference();

        fetch_pattern = value.buf;
        refspec = parse_fetch_refspec(1, &fetch_pattern);

@ -62,6 +62,8 @@ N_("The previous cherry-pick is now empty, possibly due to conflict resolution.\
"\n"
"Otherwise, please use 'git reset'\n");

static unsigned char head_sha1[20];

static const char *use_message_buffer;
static const char commit_editmsg[] = "COMMIT_EDITMSG";
static struct lock_file index_lock; /* real index */
@ -100,7 +102,7 @@ static enum {
static char *cleanup_arg;

static enum commit_whence whence;
static int use_editor = 1, include_status = 1;
static int use_editor = 1, initial_commit, include_status = 1;
static int show_ignored_in_status;
static const char *only_include_assumed;
static struct strbuf message;
@ -254,10 +256,8 @@ static int list_paths(struct string_list *list, const char *with_tree,
                ;
        m = xcalloc(1, i);

        if (with_tree) {
                const char *max_prefix = pathspec_prefix(prefix, pattern);
                overlay_tree_on_cache(with_tree, max_prefix);
        }
        if (with_tree)
                overlay_tree_on_cache(with_tree, prefix);

        for (i = 0; i < active_nr; i++) {
                struct cache_entry *ce = active_cache[i];
@ -294,13 +294,13 @@ static void add_remove_files(struct string_list *list)
        }
}

static void create_base_index(const struct commit *current_head)
static void create_base_index(void)
{
        struct tree *tree;
        struct unpack_trees_options opts;
        struct tree_desc t;

        if (!current_head) {
        if (initial_commit) {
                discard_cache();
                return;
        }
@ -313,7 +313,7 @@ static void create_base_index(const struct commit *current_head)
        opts.dst_index = &the_index;

        opts.fn = oneway_merge;
        tree = parse_tree_indirect(current_head->object.sha1);
        tree = parse_tree_indirect(head_sha1);
        if (!tree)
                die(_("failed to unpack HEAD tree object"));
        parse_tree(tree);
@ -332,8 +332,7 @@ static void refresh_cache_or_die(int refresh_flags)
                die_resolve_conflict("commit");
}

static char *prepare_index(int argc, const char **argv, const char *prefix,
                           const struct commit *current_head, int is_status)
static char *prepare_index(int argc, const char **argv, const char *prefix, int is_status)
{
        int fd;
        struct string_list partial;
@ -449,7 +448,7 @@ static char *prepare_index(int argc, const char **argv, const char *prefix,

        memset(&partial, 0, sizeof(partial));
        partial.strdup_strings = 1;
        if (list_paths(&partial, !current_head ? NULL : "HEAD", prefix, pathspec))
        if (list_paths(&partial, initial_commit ? NULL : "HEAD", prefix, pathspec))
                exit(1);

        discard_cache();
@ -468,7 +467,7 @@ static char *prepare_index(int argc, const char **argv, const char *prefix,
                       (uintmax_t) getpid()),
                       LOCK_DIE_ON_ERROR);

        create_base_index(current_head);
        create_base_index();
        add_remove_files(&partial);
        refresh_cache(REFRESH_QUIET);

@ -517,9 +516,12 @@ static int run_status(FILE *fp, const char *index_file, const char *prefix, int
        return s->commitable;
}

static int is_a_merge(const struct commit *current_head)
static int is_a_merge(const unsigned char *sha1)
{
        return !!(current_head->parents && current_head->parents->next);
        struct commit *commit = lookup_commit(sha1);
        if (!commit || parse_commit(commit))
                die(_("could not parse HEAD commit"));
        return !!(commit->parents && commit->parents->next);
}

static const char sign_off_header[] = "Signed-off-by: ";
@ -623,7 +625,6 @@ static char *cut_ident_timestamp_part(char *string)
}

static int prepare_to_commit(const char *index_file, const char *prefix,
                             struct commit *current_head,
                             struct wt_status *s,
                             struct strbuf *author_ident)
{
@ -845,7 +846,7 @@ static int prepare_to_commit(const char *index_file, const char *prefix,
         * empty due to conflict resolution, which the user should okay.
         */
        if (!commitable && whence != FROM_MERGE && !allow_empty &&
            !(amend && is_a_merge(current_head))) {
            !(amend && is_a_merge(head_sha1))) {
                run_status(stdout, index_file, prefix, 0, s);
                if (amend)
                        fputs(_(empty_amend_advice), stderr);
@ -1003,7 +1004,6 @@ static const char *read_commit_message(const char *name)
static int parse_and_validate_options(int argc, const char *argv[],
                                      const char * const usage[],
                                      const char *prefix,
                                      struct commit *current_head,
                                      struct wt_status *s)
{
        int f = 0;
@ -1024,8 +1024,11 @@ static int parse_and_validate_options(int argc, const char *argv[],
        if (!use_editor)
                setenv("GIT_EDITOR", ":", 1);

        if (get_sha1("HEAD", head_sha1))
                initial_commit = 1;

        /* Sanity check options */
        if (amend && !current_head)
        if (amend && initial_commit)
                die(_("You have nothing to amend."));
        if (amend && whence != FROM_COMMIT)
                die(_("You are in the middle of a %s -- cannot amend."), whence_s());
@ -1097,12 +1100,12 @@ static int parse_and_validate_options(int argc, const char *argv[],
}

static int dry_run_commit(int argc, const char **argv, const char *prefix,
                          const struct commit *current_head, struct wt_status *s)
                          struct wt_status *s)
{
        int commitable;
        const char *index_file;

        index_file = prepare_index(argc, argv, prefix, current_head, 1);
        index_file = prepare_index(argc, argv, prefix, 1);
        commitable = run_status(stdout, index_file, prefix, 0, s);
        rollback_index_files();

@ -1255,8 +1258,7 @@ int cmd_status(int argc, const char **argv, const char *prefix)
        return 0;
}

static void print_summary(const char *prefix, const unsigned char *sha1,
                          int initial_commit)
static void print_summary(const char *prefix, const unsigned char *sha1)
{
        struct rev_info rev;
        struct commit *commit;
@ -1378,13 +1380,12 @@ int cmd_commit(int argc, const char **argv, const char *prefix)
        struct strbuf author_ident = STRBUF_INIT;
        const char *index_file, *reflog_msg;
        char *nl, *p;
        unsigned char sha1[20];
        unsigned char commit_sha1[20];
        struct ref_lock *ref_lock;
        struct commit_list *parents = NULL, **pptr = &parents;
        struct stat statbuf;
        int allow_fast_forward = 1;
        struct wt_status s;
        struct commit *current_head = NULL;

        if (argc == 2 && !strcmp(argv[1], "-h"))
                usage_with_options(builtin_commit_usage, builtin_commit_options);
@ -1395,41 +1396,38 @@ int cmd_commit(int argc, const char **argv, const char *prefix)

        if (s.use_color == -1)
                s.use_color = git_use_color_default;
        if (get_sha1("HEAD", sha1))
                current_head = NULL;
        else {
                current_head = lookup_commit(sha1);
                if (!current_head || parse_commit(current_head))
                        die(_("could not parse HEAD commit"));
        }
        argc = parse_and_validate_options(argc, argv, builtin_commit_usage,
                                          prefix, current_head, &s);
                                          prefix, &s);
        if (dry_run) {
                if (diff_use_color_default == -1)
                        diff_use_color_default = git_use_color_default;
                return dry_run_commit(argc, argv, prefix, current_head, &s);
                return dry_run_commit(argc, argv, prefix, &s);
        }
        index_file = prepare_index(argc, argv, prefix, current_head, 0);
        index_file = prepare_index(argc, argv, prefix, 0);

        /* Set up everything for writing the commit object.  This includes
           running hooks, writing the trees, and interacting with the user.  */
        if (!prepare_to_commit(index_file, prefix,
                               current_head, &s, &author_ident)) {
        if (!prepare_to_commit(index_file, prefix, &s, &author_ident)) {
                rollback_index_files();
                return 1;
        }

        /* Determine parents */
        reflog_msg = getenv("GIT_REFLOG_ACTION");
        if (!current_head) {
        if (initial_commit) {
                if (!reflog_msg)
                        reflog_msg = "commit (initial)";
        } else if (amend) {
                struct commit_list *c;
                struct commit *commit;

                if (!reflog_msg)
                        reflog_msg = "commit (amend)";
                for (c = current_head->parents; c; c = c->next)
                commit = lookup_commit(head_sha1);
                if (!commit || parse_commit(commit))
                        die(_("could not parse HEAD commit"));

                for (c = commit->parents; c; c = c->next)
                        pptr = &commit_list_insert(c->item, pptr)->next;
        } else if (whence == FROM_MERGE) {
                struct strbuf m = STRBUF_INIT;
@ -1437,7 +1435,7 @@ int cmd_commit(int argc, const char **argv, const char *prefix)

                if (!reflog_msg)
                        reflog_msg = "commit (merge)";
                pptr = &commit_list_insert(current_head, pptr)->next;
                pptr = &commit_list_insert(lookup_commit(head_sha1), pptr)->next;
                fp = fopen(git_path("MERGE_HEAD"), "r");
                if (fp == NULL)
                        die_errno(_("could not open '%s' for reading"),
@ -1463,7 +1461,7 @@ int cmd_commit(int argc, const char **argv, const char *prefix)
                reflog_msg = (whence == FROM_CHERRY_PICK)
                        ? "commit (cherry-pick)"
                        : "commit";
                pptr = &commit_list_insert(current_head, pptr)->next;
                pptr = &commit_list_insert(lookup_commit(head_sha1), pptr)->next;
        }

        /* Finally, get the commit message */
@ -1489,7 +1487,7 @@ int cmd_commit(int argc, const char **argv, const char *prefix)
                exit(1);
        }

        if (commit_tree(sb.buf, active_cache_tree->sha1, parents, sha1,
        if (commit_tree(sb.buf, active_cache_tree->sha1, parents, commit_sha1,
                        author_ident.buf)) {
                rollback_index_files();
                die(_("failed to write commit object"));
@ -1497,9 +1495,7 @@ int cmd_commit(int argc, const char **argv, const char *prefix)
        strbuf_release(&author_ident);

        ref_lock = lock_any_ref_for_update("HEAD",
                                           !current_head
                                           ? NULL
                                           : current_head->object.sha1,
                                           initial_commit ? NULL : head_sha1,
                                           0);

        nl = strchr(sb.buf, '\n');
@ -1514,7 +1510,7 @@ int cmd_commit(int argc, const char **argv, const char *prefix)
                rollback_index_files();
                die(_("cannot lock HEAD ref"));
|
||||
}
|
||||
if (write_ref_sha1(ref_lock, sha1, sb.buf) < 0) {
|
||||
if (write_ref_sha1(ref_lock, commit_sha1, sb.buf) < 0) {
|
||||
rollback_index_files();
|
||||
die(_("cannot update HEAD ref"));
|
||||
}
|
||||
@ -1536,14 +1532,13 @@ int cmd_commit(int argc, const char **argv, const char *prefix)
|
||||
struct notes_rewrite_cfg *cfg;
|
||||
cfg = init_copy_notes_for_rewrite("amend");
|
||||
if (cfg) {
|
||||
/* we are amending, so current_head is not NULL */
|
||||
copy_note_for_rewrite(cfg, current_head->object.sha1, sha1);
|
||||
copy_note_for_rewrite(cfg, head_sha1, commit_sha1);
|
||||
finish_copy_notes_for_rewrite(cfg);
|
||||
}
|
||||
run_rewrite_hook(current_head->object.sha1, sha1);
|
||||
run_rewrite_hook(head_sha1, commit_sha1);
|
||||
}
|
||||
if (!quiet)
|
||||
print_summary(prefix, sha1, !current_head);
|
||||
print_summary(prefix, commit_sha1);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@@ -27,7 +27,6 @@ static int progress;
 static enum { ABORT, VERBATIM, WARN, STRIP } signed_tag_mode = ABORT;
 static enum { ERROR, DROP, REWRITE } tag_of_filtered_mode = ABORT;
 static int fake_missing_tagger;
-static int use_done_feature;
 static int no_data;
 static int full_tree;
 
@@ -645,8 +644,6 @@ int cmd_fast_export(int argc, const char **argv, const char *prefix)
 			     "Fake a tagger when tags lack one"),
 		OPT_BOOLEAN(0, "full-tree", &full_tree,
 			     "Output full tree for each commit"),
-		OPT_BOOLEAN(0, "use-done-feature", &use_done_feature,
-			     "Use the done feature to terminate the stream"),
 		{ OPTION_NEGBIT, 0, "data", &no_data, NULL,
 			"Skip output of blob data",
 			PARSE_OPT_NOARG | PARSE_OPT_NEGHELP, NULL, 1 },
@@ -668,9 +665,6 @@ int cmd_fast_export(int argc, const char **argv, const char *prefix)
 	if (argc > 1)
 		usage_with_options (fast_export_usage, options);
 
-	if (use_done_feature)
-		printf("feature done\n");
-
 	if (import_filename)
 		import_marks(import_filename);
 
@@ -698,8 +692,5 @@ int cmd_fast_export(int argc, const char **argv, const char *prefix)
 	if (export_filename)
 		export_marks(export_filename);
 
-	if (use_done_feature)
-		printf("done\n");
-
 	return 0;
 }
@@ -941,6 +941,15 @@ int cmd_fetch(int argc, const char **argv, const char *prefix)
 	argc = parse_options(argc, argv, prefix,
 			     builtin_fetch_options, builtin_fetch_usage, 0);
 
+	if (recurse_submodules != RECURSE_SUBMODULES_OFF) {
+		if (recurse_submodules_default) {
+			int arg = parse_fetch_recurse_submodules_arg("--recurse-submodules-default", recurse_submodules_default);
+			set_config_fetch_recurse_submodules(arg);
+		}
+		gitmodules_config();
+		git_config(submodule_config, NULL);
+	}
+
 	if (all) {
 		if (argc == 1)
 			die(_("fetch --all does not take a repository argument"));
@@ -976,12 +985,6 @@ int cmd_fetch(int argc, const char **argv, const char *prefix)
 	if (!result && (recurse_submodules != RECURSE_SUBMODULES_OFF)) {
 		const char *options[10];
 		int num_options = 0;
-		if (recurse_submodules_default) {
-			int arg = parse_fetch_recurse_submodules_arg("--recurse-submodules-default", recurse_submodules_default);
-			set_config_fetch_recurse_submodules(arg);
-		}
-		gitmodules_config();
-		git_config(submodule_config, NULL);
 		add_options_to_argv(&num_options, options);
 		result = fetch_populated_submodules(num_options, options,
 						    submodule_prefix,
@@ -93,7 +93,8 @@ static pthread_cond_t cond_write;
 /* Signalled when we are finished with everything. */
 static pthread_cond_t cond_result;
 
-static int skip_first_line;
+static int print_hunk_marks_between_files;
+static int printed_something;
 
 static void add_work(enum work_type type, char *name, void *id)
 {
@@ -159,20 +160,10 @@ static void work_done(struct work_item *w)
 	     todo_done = (todo_done+1) % ARRAY_SIZE(todo)) {
 		w = &todo[todo_done];
 		if (w->out.len) {
-			const char *p = w->out.buf;
-			size_t len = w->out.len;
-
-			/* Skip the leading hunk mark of the first file. */
-			if (skip_first_line) {
-				while (len) {
-					len--;
-					if (*p++ == '\n')
-						break;
-				}
-				skip_first_line = 0;
-			}
-
-			write_or_die(1, p, len);
+			if (print_hunk_marks_between_files && printed_something)
+				write_or_die(1, "--\n", 3);
+			write_or_die(1, w->out.buf, w->out.len);
+			printed_something = 1;
 		}
 		free(w->name);
 		free(w->identifier);
@@ -822,24 +813,18 @@ int cmd_grep(int argc, const char **argv, const char *prefix)
 		OPT_BOOLEAN('c', "count", &opt.count,
 			"show the number of matches instead of matching lines"),
 		OPT__COLOR(&opt.color, "highlight matches"),
-		OPT_BOOLEAN(0, "break", &opt.file_break,
-			"print empty line between matches from different files"),
-		OPT_BOOLEAN(0, "heading", &opt.heading,
-			"show filename only once above matches from same file"),
 		OPT_GROUP(""),
-		OPT_CALLBACK('C', "context", &opt, "n",
+		OPT_CALLBACK('C', NULL, &opt, "n",
 			"show <n> context lines before and after matches",
 			context_callback),
-		OPT_INTEGER('B', "before-context", &opt.pre_context,
+		OPT_INTEGER('B', NULL, &opt.pre_context,
 			"show <n> context lines before matches"),
-		OPT_INTEGER('A', "after-context", &opt.post_context,
+		OPT_INTEGER('A', NULL, &opt.post_context,
 			"show <n> context lines after matches"),
 		OPT_NUMBER_CALLBACK(&opt, "shortcut for -C NUM",
 			context_callback),
 		OPT_BOOLEAN('p', "show-function", &opt.funcname,
 			"show a line with the function name before matches"),
-		OPT_BOOLEAN('W', "function-context", &opt.funcbody,
-			"show the surrounding function"),
 		OPT_GROUP(""),
 		OPT_CALLBACK('f', NULL, &opt, "file",
 			"read patterns from file", file_callback),
@@ -982,9 +967,8 @@ int cmd_grep(int argc, const char **argv, const char *prefix)
 		use_threads = 0;
 
 	if (use_threads) {
-		if (opt.pre_context || opt.post_context || opt.file_break ||
-		    opt.funcbody)
-			skip_first_line = 1;
+		if (opt.pre_context || opt.post_context)
+			print_hunk_marks_between_files = 1;
 		start_threads(&opt);
 	}
 #else
@@ -11,7 +11,7 @@
 #include "exec_cmd.h"
 
 static const char index_pack_usage[] =
-"git index-pack [-v] [-o <index-file>] [--keep | --keep=<msg>] [--verify] [--strict] (<pack-file> | --stdin [--fix-thin] [<pack-file>])";
+"git index-pack [-v] [-o <index-file>] [ --keep | --keep=<msg> ] [--strict] (<pack-file> | --stdin [--fix-thin] [<pack-file>])";
 
 struct object_entry {
 	struct pack_idx_entry idx;
@@ -19,8 +19,6 @@ struct object_entry {
 	unsigned int hdr_size;
 	enum object_type type;
 	enum object_type real_type;
-	unsigned delta_depth;
-	int base_object_no;
 };
 
 union delta_base {
@@ -68,7 +66,6 @@ static struct progress *progress;
 static unsigned char input_buffer[4096];
 static unsigned int input_offset, input_len;
 static off_t consumed_bytes;
-static unsigned deepest_delta;
 static git_SHA_CTX input_ctx;
 static uint32_t input_crc32;
 static int input_fd, output_fd, pack_fd;
@@ -392,18 +389,7 @@ static void *get_data_from_pack(struct object_entry *obj)
 	return data;
 }
 
-static int compare_delta_bases(const union delta_base *base1,
-			       const union delta_base *base2,
-			       enum object_type type1,
-			       enum object_type type2)
-{
-	int cmp = type1 - type2;
-	if (cmp)
-		return cmp;
-	return memcmp(base1, base2, UNION_BASE_SZ);
-}
-
-static int find_delta(const union delta_base *base, enum object_type type)
+static int find_delta(const union delta_base *base)
 {
 	int first = 0, last = nr_deltas;
 
@@ -412,8 +398,7 @@ static int find_delta(const union delta_base *base, enum object_type type)
 		struct delta_entry *delta = &deltas[next];
 		int cmp;
 
-		cmp = compare_delta_bases(base, &delta->base,
-					  type, objects[delta->obj_no].type);
+		cmp = memcmp(base, &delta->base, UNION_BASE_SZ);
 		if (!cmp)
 			return next;
 		if (cmp < 0) {
@@ -426,10 +411,9 @@ static int find_delta(const union delta_base *base, enum object_type type)
 }
 
 static void find_delta_children(const union delta_base *base,
-				int *first_index, int *last_index,
-				enum object_type type)
+				int *first_index, int *last_index)
 {
-	int first = find_delta(base, type);
+	int first = find_delta(base);
 	int last = first;
 	int end = nr_deltas - 1;
 
@@ -499,17 +483,12 @@ static void sha1_object(const void *data, unsigned long size,
 	}
 }
 
-static int is_delta_type(enum object_type type)
-{
-	return (type == OBJ_REF_DELTA || type == OBJ_OFS_DELTA);
-}
-
 static void *get_base_data(struct base_data *c)
 {
 	if (!c->data) {
 		struct object_entry *obj = c->obj;
 
-		if (is_delta_type(obj->type)) {
+		if (obj->type == OBJ_REF_DELTA || obj->type == OBJ_OFS_DELTA) {
 			void *base = get_base_data(c->base);
 			void *raw = get_data_from_pack(obj);
 			c->data = patch_delta(
@@ -536,10 +515,6 @@ static void resolve_delta(struct object_entry *delta_obj,
 	void *base_data, *delta_data;
 
 	delta_obj->real_type = base->obj->real_type;
-	delta_obj->delta_depth = base->obj->delta_depth + 1;
-	if (deepest_delta < delta_obj->delta_depth)
-		deepest_delta = delta_obj->delta_depth;
-	delta_obj->base_object_no = base->obj - objects;
 	delta_data = get_data_from_pack(delta_obj);
 	base_data = get_base_data(base);
 	result->obj = delta_obj;
@@ -566,13 +541,11 @@ static void find_unresolved_deltas(struct base_data *base,
 		union delta_base base_spec;
 
 		hashcpy(base_spec.sha1, base->obj->idx.sha1);
-		find_delta_children(&base_spec,
-				    &ref_first, &ref_last, OBJ_REF_DELTA);
+		find_delta_children(&base_spec, &ref_first, &ref_last);
 
 		memset(&base_spec, 0, sizeof(base_spec));
 		base_spec.offset = base->obj->idx.offset;
-		find_delta_children(&base_spec,
-				    &ofs_first, &ofs_last, OBJ_OFS_DELTA);
+		find_delta_children(&base_spec, &ofs_first, &ofs_last);
 	}
 
 	if (ref_last == -1 && ofs_last == -1) {
@@ -584,24 +557,24 @@ static void find_unresolved_deltas(struct base_data *base,
 
 	for (i = ref_first; i <= ref_last; i++) {
 		struct object_entry *child = objects + deltas[i].obj_no;
-		struct base_data result;
-
-		assert(child->real_type == OBJ_REF_DELTA);
-		resolve_delta(child, base, &result);
-		if (i == ref_last && ofs_last == -1)
-			free_base_data(base);
-		find_unresolved_deltas(&result, base);
+		if (child->real_type == OBJ_REF_DELTA) {
+			struct base_data result;
+			resolve_delta(child, base, &result);
+			if (i == ref_last && ofs_last == -1)
+				free_base_data(base);
+			find_unresolved_deltas(&result, base);
+		}
 	}
 
 	for (i = ofs_first; i <= ofs_last; i++) {
 		struct object_entry *child = objects + deltas[i].obj_no;
-		struct base_data result;
-
-		assert(child->real_type == OBJ_OFS_DELTA);
-		resolve_delta(child, base, &result);
-		if (i == ofs_last)
-			free_base_data(base);
-		find_unresolved_deltas(&result, base);
+		if (child->real_type == OBJ_OFS_DELTA) {
+			struct base_data result;
+			resolve_delta(child, base, &result);
+			if (i == ofs_last)
+				free_base_data(base);
+			find_unresolved_deltas(&result, base);
+		}
 	}
 
 	unlink_base_data(base);
@@ -611,11 +584,7 @@ static int compare_delta_entry(const void *a, const void *b)
 {
 	const struct delta_entry *delta_a = a;
 	const struct delta_entry *delta_b = b;
 
-	/* group by type (ref vs ofs) and then by value (sha-1 or offset) */
-	return compare_delta_bases(&delta_a->base, &delta_b->base,
-				   objects[delta_a->obj_no].type,
-				   objects[delta_b->obj_no].type);
+	return memcmp(&delta_a->base, &delta_b->base, UNION_BASE_SZ);
 }
 
 /* Parse all objects and return the pack content SHA1 hash */
@@ -639,7 +608,7 @@ static void parse_pack_objects(unsigned char *sha1)
 		struct object_entry *obj = &objects[i];
 		void *data = unpack_raw_entry(obj, &delta->base);
 		obj->real_type = obj->type;
-		if (is_delta_type(obj->type)) {
+		if (obj->type == OBJ_REF_DELTA || obj->type == OBJ_OFS_DELTA) {
 			nr_deltas++;
 			delta->obj_no = i;
 			delta++;
@@ -686,7 +655,7 @@ static void parse_pack_objects(unsigned char *sha1)
 		struct object_entry *obj = &objects[i];
 		struct base_data base_obj;
 
-		if (is_delta_type(obj->type))
+		if (obj->type == OBJ_REF_DELTA || obj->type == OBJ_OFS_DELTA)
 			continue;
 		base_obj.obj = obj;
 		base_obj.data = NULL;
@@ -890,137 +859,24 @@ static void final(const char *final_pack_name, const char *curr_pack_name,
 
 static int git_index_pack_config(const char *k, const char *v, void *cb)
 {
-	struct pack_idx_option *opts = cb;
-
 	if (!strcmp(k, "pack.indexversion")) {
-		opts->version = git_config_int(k, v);
-		if (opts->version > 2)
-			die("bad pack.indexversion=%"PRIu32, opts->version);
+		pack_idx_default_version = git_config_int(k, v);
+		if (pack_idx_default_version > 2)
+			die("bad pack.indexversion=%"PRIu32,
+				pack_idx_default_version);
 		return 0;
 	}
 	return git_default_config(k, v, cb);
 }
 
-static int cmp_uint32(const void *a_, const void *b_)
-{
-	uint32_t a = *((uint32_t *)a_);
-	uint32_t b = *((uint32_t *)b_);
-
-	return (a < b) ? -1 : (a != b);
-}
-
-static void read_v2_anomalous_offsets(struct packed_git *p,
-				      struct pack_idx_option *opts)
-{
-	const uint32_t *idx1, *idx2;
-	uint32_t i;
-
-	/* The address of the 4-byte offset table */
-	idx1 = (((const uint32_t *)p->index_data)
-		+ 2 /* 8-byte header */
-		+ 256 /* fan out */
-		+ 5 * p->num_objects /* 20-byte SHA-1 table */
-		+ p->num_objects /* CRC32 table */
-		);
-
-	/* The address of the 8-byte offset table */
-	idx2 = idx1 + p->num_objects;
-
-	for (i = 0; i < p->num_objects; i++) {
-		uint32_t off = ntohl(idx1[i]);
-		if (!(off & 0x80000000))
-			continue;
-		off = off & 0x7fffffff;
-		if (idx2[off * 2])
-			continue;
-		/*
-		 * The real offset is ntohl(idx2[off * 2]) in high 4
-		 * octets, and ntohl(idx2[off * 2 + 1]) in low 4
-		 * octets.  But idx2[off * 2] is Zero!!!
-		 */
-		ALLOC_GROW(opts->anomaly, opts->anomaly_nr + 1, opts->anomaly_alloc);
-		opts->anomaly[opts->anomaly_nr++] = ntohl(idx2[off * 2 + 1]);
-	}
-
-	if (1 < opts->anomaly_nr)
-		qsort(opts->anomaly, opts->anomaly_nr, sizeof(uint32_t), cmp_uint32);
-}
-
-static void read_idx_option(struct pack_idx_option *opts, const char *pack_name)
-{
-	struct packed_git *p = add_packed_git(pack_name, strlen(pack_name), 1);
-
-	if (!p)
-		die("Cannot open existing pack file '%s'", pack_name);
-	if (open_pack_index(p))
-		die("Cannot open existing pack idx file for '%s'", pack_name);
-
-	/* Read the attributes from the existing idx file */
-	opts->version = p->index_version;
-
-	if (opts->version == 2)
-		read_v2_anomalous_offsets(p, opts);
-
-	/*
-	 * Get rid of the idx file as we do not need it anymore.
-	 * NEEDSWORK: extract this bit from free_pack_by_name() in
-	 * sha1_file.c, perhaps?  It shouldn't matter very much as we
-	 * know we haven't installed this pack (hence we never have
-	 * read anything from it).
-	 */
-	close_pack_index(p);
-	free(p);
-}
-
-static void show_pack_info(int stat_only)
-{
-	int i, baseobjects = nr_objects - nr_deltas;
-	unsigned long *chain_histogram = NULL;
-
-	if (deepest_delta)
-		chain_histogram = xcalloc(deepest_delta, sizeof(unsigned long));
-
-	for (i = 0; i < nr_objects; i++) {
-		struct object_entry *obj = &objects[i];
-
-		if (is_delta_type(obj->type))
-			chain_histogram[obj->delta_depth - 1]++;
-		if (stat_only)
-			continue;
-		printf("%s %-6s %lu %lu %"PRIuMAX,
-		       sha1_to_hex(obj->idx.sha1),
-		       typename(obj->real_type), obj->size,
-		       (unsigned long)(obj[1].idx.offset - obj->idx.offset),
-		       (uintmax_t)obj->idx.offset);
-		if (is_delta_type(obj->type)) {
-			struct object_entry *bobj = &objects[obj->base_object_no];
-			printf(" %u %s", obj->delta_depth, sha1_to_hex(bobj->idx.sha1));
-		}
-		putchar('\n');
-	}
-
-	if (baseobjects)
-		printf("non delta: %d object%s\n",
-		       baseobjects, baseobjects > 1 ? "s" : "");
-	for (i = 0; i < deepest_delta; i++) {
-		if (!chain_histogram[i])
-			continue;
-		printf("chain length = %d: %lu object%s\n",
-		       i + 1,
-		       chain_histogram[i],
-		       chain_histogram[i] > 1 ? "s" : "");
-	}
-}
-
 int cmd_index_pack(int argc, const char **argv, const char *prefix)
 {
-	int i, fix_thin_pack = 0, verify = 0, stat_only = 0, stat = 0;
+	int i, fix_thin_pack = 0;
 	const char *curr_pack, *curr_index;
 	const char *index_name = NULL, *pack_name = NULL;
 	const char *keep_name = NULL, *keep_msg = NULL;
 	char *index_name_buf = NULL, *keep_name_buf = NULL;
 	struct pack_idx_entry **idx_objects;
-	struct pack_idx_option opts;
 	unsigned char pack_sha1[20];
 
 	if (argc == 2 && !strcmp(argv[1], "-h"))
@@ -1028,8 +884,7 @@ int cmd_index_pack(int argc, const char **argv, const char *prefix)
 
 	read_replace_refs = 0;
 
-	reset_pack_idx_option(&opts);
-	git_config(git_index_pack_config, &opts);
+	git_config(git_index_pack_config, NULL);
 	if (prefix && chdir(prefix))
 		die("Cannot come back to cwd");
 
@@ -1043,15 +898,6 @@ int cmd_index_pack(int argc, const char **argv, const char *prefix)
 			fix_thin_pack = 1;
 		} else if (!strcmp(arg, "--strict")) {
 			strict = 1;
-		} else if (!strcmp(arg, "--verify")) {
-			verify = 1;
-		} else if (!strcmp(arg, "--verify-stat")) {
-			verify = 1;
-			stat = 1;
-		} else if (!strcmp(arg, "--verify-stat-only")) {
-			verify = 1;
-			stat = 1;
-			stat_only = 1;
 		} else if (!strcmp(arg, "--keep")) {
 			keep_msg = "";
 		} else if (!prefixcmp(arg, "--keep=")) {
@@ -1077,12 +923,12 @@ int cmd_index_pack(int argc, const char **argv, const char *prefix)
 			index_name = argv[++i];
 		} else if (!prefixcmp(arg, "--index-version=")) {
 			char *c;
-			opts.version = strtoul(arg + 16, &c, 10);
-			if (opts.version > 2)
+			pack_idx_default_version = strtoul(arg + 16, &c, 10);
+			if (pack_idx_default_version > 2)
 				die("bad %s", arg);
 			if (*c == ',')
-				opts.off32_limit = strtoul(c+1, &c, 0);
-			if (*c || opts.off32_limit & 0x80000000)
+				pack_idx_off32_limit = strtoul(c+1, &c, 0);
+			if (*c || pack_idx_off32_limit & 0x80000000)
 				die("bad %s", arg);
 		} else
 			usage(index_pack_usage);
@@ -1118,17 +964,11 @@ int cmd_index_pack(int argc, const char **argv, const char *prefix)
 		strcpy(keep_name_buf + len - 5, ".keep");
 		keep_name = keep_name_buf;
 	}
-	if (verify) {
-		if (!index_name)
-			die("--verify with no packfile name given");
-		read_idx_option(&opts, index_name);
-		opts.flags |= WRITE_IDX_VERIFY;
-	}
-
 	curr_pack = open_pack_file(pack_name);
 	parse_pack_header();
-	objects = xcalloc(nr_objects + 1, sizeof(struct object_entry));
-	deltas = xcalloc(nr_objects, sizeof(struct delta_entry));
+	objects = xmalloc((nr_objects + 1) * sizeof(struct object_entry));
+	deltas = xmalloc(nr_objects * sizeof(struct delta_entry));
 	parse_pack_objects(pack_sha1);
 	if (nr_deltas == nr_resolved_deltas) {
 		stop_progress(&progress);
@@ -1168,22 +1008,16 @@ int cmd_index_pack(int argc, const char **argv, const char *prefix)
 	if (strict)
 		check_objects();
 
-	if (stat)
-		show_pack_info(stat_only);
-
 	idx_objects = xmalloc((nr_objects) * sizeof(struct pack_idx_entry *));
 	for (i = 0; i < nr_objects; i++)
 		idx_objects[i] = &objects[i].idx;
-	curr_index = write_idx_file(index_name, idx_objects, nr_objects, &opts, pack_sha1);
+	curr_index = write_idx_file(index_name, idx_objects, nr_objects, pack_sha1);
 	free(idx_objects);
 
-	if (!verify)
-		final(pack_name, curr_pack,
-		      index_name, curr_index,
-		      keep_name, keep_msg,
-		      pack_sha1);
-	else
-		close(input_fd);
+	final(pack_name, curr_pack,
+	      index_name, curr_index,
+	      keep_name, keep_msg,
+	      pack_sha1);
 	free(objects);
 	free(index_name_buf);
 	free(keep_name_buf);
@@ -347,7 +347,7 @@ static void separate_git_dir(const char *git_dir)
 		const char *src;
 
 		if (S_ISREG(st.st_mode))
-			src = read_gitfile_gently(git_link);
+			src = read_gitfile(git_link);
 		else if (S_ISDIR(st.st_mode))
 			src = git_link;
 		else
@@ -276,6 +276,41 @@ static void prune_cache(const char *prefix)
 	active_nr = last;
 }
 
+static const char *pathspec_prefix(const char *prefix)
+{
+	const char **p, *n, *prev;
+	unsigned long max;
+
+	if (!pathspec) {
+		max_prefix_len = prefix ? strlen(prefix) : 0;
+		return prefix;
+	}
+
+	prev = NULL;
+	max = PATH_MAX;
+	for (p = pathspec; (n = *p) != NULL; p++) {
+		int i, len = 0;
+		for (i = 0; i < max; i++) {
+			char c = n[i];
+			if (prev && prev[i] != c)
+				break;
+			if (!c || c == '*' || c == '?')
+				break;
+			if (c == '/')
+				len = i+1;
+		}
+		prev = n;
+		if (len < max) {
+			max = len;
+			if (!max)
+				break;
+		}
+	}
+
+	max_prefix_len = max;
+	return max ? xmemdupz(prev, max) : NULL;
+}
+
 static void strip_trailing_slash_from_submodules(void)
 {
 	const char **p;
@@ -545,8 +580,7 @@ int cmd_ls_files(int argc, const char **argv, const char *cmd_prefix)
 		strip_trailing_slash_from_submodules();
 
 	/* Find common prefix for all pathspec's */
-	max_prefix = pathspec_prefix(prefix, pathspec);
-	max_prefix_len = max_prefix ? strlen(max_prefix) : 0;
+	max_prefix = pathspec_prefix(prefix);
 
 	/* Treat unmatching pathspec elements as errors */
 	if (pathspec && error_unmatch) {
@@ -903,7 +903,7 @@ static int finish_automerge(struct commit_list *common,
 	strbuf_addch(&merge_msg, '\n');
 	run_prepare_commit_msg();
 	commit_tree(merge_msg.buf, result_tree, parents, result_commit, NULL);
-	strbuf_addf(&buf, "Merge made by the '%s' strategy.", wt_strategy);
+	strbuf_addf(&buf, "Merge made by %s.", wt_strategy);
 	finish(result_commit, buf.buf);
 	strbuf_release(&buf);
 	drop_save();
@@ -51,8 +51,6 @@ struct object_entry {
 	 * objects against.
 	 */
 	unsigned char no_try_delta;
-	unsigned char tagged; /* near the very tip of refs */
-	unsigned char filled; /* assigned write-order */
 };
 
 /*
@@ -72,7 +70,6 @@ static int local;
 static int incremental;
 static int ignore_packed_keep;
 static int allow_ofs_delta;
-static struct pack_idx_option pack_idx_opts;
 static const char *base_name;
 static int progress = 1;
 static int window = 10;
@@ -98,7 +95,6 @@ static unsigned long window_memory_limit = 0;
  */
 static int *object_ix;
 static int object_ix_hashsz;
-static struct object_entry *locate_object_entry(const unsigned char *sha1);
 
 /*
  * stats
@@ -203,7 +199,6 @@ static void copy_pack_data(struct sha1file *f,
 	}
 }
 
-/* Return 0 if we will bust the pack-size limit */
 static unsigned long write_object(struct sha1file *f,
 				  struct object_entry *entry,
 				  off_t write_offset)
@@ -438,134 +433,6 @@ static int write_one(struct sha1file *f,
 	return 1;
 }
 
-static int mark_tagged(const char *path, const unsigned char *sha1, int flag,
-		       void *cb_data)
-{
-	unsigned char peeled[20];
-	struct object_entry *entry = locate_object_entry(sha1);
-
-	if (entry)
-		entry->tagged = 1;
-	if (!peel_ref(path, peeled)) {
-		entry = locate_object_entry(peeled);
-		if (entry)
-			entry->tagged = 1;
-	}
-	return 0;
-}
-
-static void add_to_write_order(struct object_entry **wo,
-			       int *endp,
-			       struct object_entry *e)
-{
-	if (e->filled)
-		return;
-	wo[(*endp)++] = e;
-	e->filled = 1;
-}
-
-static void add_descendants_to_write_order(struct object_entry **wo,
-					   int *endp,
-					   struct object_entry *e)
-{
-	struct object_entry *child;
-
-	for (child = e->delta_child; child; child = child->delta_sibling)
-		add_to_write_order(wo, endp, child);
-	for (child = e->delta_child; child; child = child->delta_sibling)
-		add_descendants_to_write_order(wo, endp, child);
-}
-
-static void add_family_to_write_order(struct object_entry **wo,
-				      int *endp,
-				      struct object_entry *e)
-{
-	struct object_entry *root;
-
-	for (root = e; root->delta; root = root->delta)
-		; /* nothing */
-	add_to_write_order(wo, endp, root);
-	add_descendants_to_write_order(wo, endp, root);
-}
-
-static struct object_entry **compute_write_order(void)
-{
-	int i, wo_end;
-
-	struct object_entry **wo = xmalloc(nr_objects * sizeof(*wo));
-
-	for (i = 0; i < nr_objects; i++) {
-		objects[i].tagged = 0;
-		objects[i].filled = 0;
-		objects[i].delta_child = NULL;
-		objects[i].delta_sibling = NULL;
-	}
-
-	/*
-	 * Fully connect delta_child/delta_sibling network.
-	 * Make sure delta_sibling is sorted in the original
-	 * recency order.
-	 */
-	for (i = nr_objects - 1; 0 <= i; i--) {
-		struct object_entry *e = &objects[i];
-		if (!e->delta)
-			continue;
-		/* Mark me as the first child */
-		e->delta_sibling = e->delta->delta_child;
-		e->delta->delta_child = e;
-	}
-
-	/*
-	 * Mark objects that are at the tip of tags.
-	 */
-	for_each_tag_ref(mark_tagged, NULL);
-
-	/*
-	 * Give the commits in the original recency order until
-	 * we see a tagged tip.
-	 */
-	for (i = wo_end = 0; i < nr_objects; i++) {
-		if (objects[i].tagged)
-			break;
-		add_to_write_order(wo, &wo_end, &objects[i]);
-	}
-
-	/*
-	 * Then fill all the tagged tips.
-	 */
-	for (; i < nr_objects; i++) {
-		if (objects[i].tagged)
-			add_to_write_order(wo, &wo_end, &objects[i]);
-	}
-
-	/*
-	 * And then all remaining commits and tags.
-	 */
-	for (i = 0; i < nr_objects; i++) {
-		if (objects[i].type != OBJ_COMMIT &&
-		    objects[i].type != OBJ_TAG)
-			continue;
-		add_to_write_order(wo, &wo_end, &objects[i]);
-	}
-
-	/*
-	 * And then all the trees.
-	 */
-	for (i = 0; i < nr_objects; i++) {
-		if (objects[i].type != OBJ_TREE)
-			continue;
-		add_to_write_order(wo, &wo_end, &objects[i]);
-	}
-
-	/*
-	 * Finally all the rest in really tight order
-	 */
-	for (i = 0; i < nr_objects; i++)
-		add_family_to_write_order(wo, &wo_end, &objects[i]);
-
-	return wo;
-}
-
 static void write_pack_file(void)
 {
 	uint32_t i = 0, j;
@@ -574,12 +441,10 @@ static void write_pack_file(void)
 	struct pack_header hdr;
 	uint32_t nr_remaining = nr_result;
 	time_t last_mtime = 0;
-	struct object_entry **write_order;
 
 	if (progress > pack_to_stdout)
 		progress_state = start_progress("Writing objects", nr_result);
 	written_list = xmalloc(nr_objects * sizeof(*written_list));
-	write_order = compute_write_order();
 
 	do {
 		unsigned char sha1[20];
@@ -603,8 +468,7 @@ static void write_pack_file(void)
 		offset = sizeof(hdr);
 		nr_written = 0;
 		for (; i < nr_objects; i++) {
-			struct object_entry *e = write_order[i];
-			if (!write_one(f, e, &offset))
+			if (!write_one(f, objects + i, &offset))
 				break;
 			display_progress(progress_state, written);
 		}
@@ -629,8 +493,8 @@ static void write_pack_file(void)
 			const char *idx_tmp_name;
 			char tmpname[PATH_MAX];
 
-			idx_tmp_name = write_idx_file(NULL, written_list, nr_written,
-						      &pack_idx_opts, sha1);
+			idx_tmp_name = write_idx_file(NULL, written_list,
						      nr_written, sha1);
 
 			snprintf(tmpname, sizeof(tmpname), "%s-%s.pack",
 				 base_name, sha1_to_hex(sha1));
@@ -681,7 +545,6 @@ static void write_pack_file(void)
 	} while (nr_remaining && i < nr_objects);
 
 	free(written_list);
-	free(write_order);
 	stop_progress(&progress_state);
 	if (written != nr_result)
 		die("wrote %"PRIu32" objects while expecting %"PRIu32,
@@ -770,7 +633,7 @@ static int no_try_delta(const char *path)
 	struct git_attr_check check[1];
 
 	setup_delta_attr_check(check);
-	if (git_check_attr(path, ARRAY_SIZE(check), check))
+	if (git_checkattr(path, ARRAY_SIZE(check), check))
 		return 0;
 	if (ATTR_FALSE(check->value))
 		return 1;
@@ -2021,10 +1884,10 @@ static int git_pack_config(const char *k, const char *v, void *cb)
 		return 0;
 	}
 	if (!strcmp(k, "pack.indexversion")) {
-		pack_idx_opts.version = git_config_int(k, v);
-		if (pack_idx_opts.version > 2)
+		pack_idx_default_version = git_config_int(k, v);
+		if (pack_idx_default_version > 2)
 			die("bad pack.indexversion=%"PRIu32,
-			    pack_idx_opts.version);
+			    pack_idx_default_version);
 		return 0;
 	}
 	if (!strcmp(k, "pack.packsizelimit")) {
@@ -2271,7 +2134,6 @@ int cmd_pack_objects(int argc, const char **argv, const char *prefix)
 	rp_av[1] = "--objects"; /* --thin will make it --objects-edge */
 	rp_ac = 2;
 
-	reset_pack_idx_option(&pack_idx_opts);
 	git_config(git_pack_config, NULL);
 	if (!pack_compression_seen && core_compression_seen)
 		pack_compression_level = core_compression_level;
@@ -2416,12 +2278,12 @@ int cmd_pack_objects(int argc, const char **argv, const char *prefix)
 	}
 	if (!prefixcmp(arg, "--index-version=")) {
 		char *c;
-		pack_idx_opts.version = strtoul(arg + 16, &c, 10);
-		if (pack_idx_opts.version > 2)
+		pack_idx_default_version = strtoul(arg + 16, &c, 10);
|
||||
if (pack_idx_default_version > 2)
|
||||
die("bad %s", arg);
|
||||
if (*c == ',')
|
||||
pack_idx_opts.off32_limit = strtoul(c+1, &c, 0);
|
||||
if (*c || pack_idx_opts.off32_limit & 0x80000000)
|
||||
pack_idx_off32_limit = strtoul(c+1, &c, 0);
|
||||
if (*c || pack_idx_off32_limit & 0x80000000)
|
||||
die("bad %s", arg);
|
||||
continue;
|
||||
}
|
||||
|
@@ -120,25 +120,9 @@ static int show_ref(const char *path, const unsigned char *sha1, int flag, void
	return 0;
}

static int show_ref_cb(const char *path, const unsigned char *sha1, int flag, void *cb_data)
{
	path = strip_namespace(path);
	/*
	 * Advertise refs outside our current namespace as ".have"
	 * refs, so that the client can use them to minimize data
	 * transfer but will otherwise ignore them. This happens to
	 * cover ".have" that are thrown in by add_one_alternate_ref()
	 * to mark histories that are complete in our alternates as
	 * well.
	 */
	if (!path)
		path = ".have";
	return show_ref(path, sha1, flag, cb_data);
}

static void write_head_info(void)
{
	for_each_ref(show_ref_cb, NULL);
	for_each_ref(show_ref, NULL);
	if (!sent_capabilities)
		show_ref("capabilities^{}", null_sha1, 0, NULL);

@@ -349,8 +333,6 @@ static void refuse_unconfigured_deny_delete_current(void)
static const char *update(struct command *cmd)
{
	const char *name = cmd->ref_name;
	struct strbuf namespaced_name_buf = STRBUF_INIT;
	const char *namespaced_name;
	unsigned char *old_sha1 = cmd->old_sha1;
	unsigned char *new_sha1 = cmd->new_sha1;
	struct ref_lock *lock;
@@ -361,10 +343,7 @@ static const char *update(struct command *cmd)
		return "funny refname";
	}

	strbuf_addf(&namespaced_name_buf, "%s%s", get_git_namespace(), name);
	namespaced_name = strbuf_detach(&namespaced_name_buf, NULL);

	if (is_ref_checked_out(namespaced_name)) {
	if (is_ref_checked_out(name)) {
		switch (deny_current_branch) {
		case DENY_IGNORE:
			break;
@@ -392,7 +371,7 @@ static const char *update(struct command *cmd)
			return "deletion prohibited";
		}

		if (!strcmp(namespaced_name, head_name)) {
		if (!strcmp(name, head_name)) {
			switch (deny_delete_current) {
			case DENY_IGNORE:
				break;
@@ -448,14 +427,14 @@ static const char *update(struct command *cmd)
			rp_warning("Allowing deletion of corrupt ref.");
			old_sha1 = NULL;
		}
		if (delete_ref(namespaced_name, old_sha1, 0)) {
		if (delete_ref(name, old_sha1, 0)) {
			rp_error("failed to delete %s", name);
			return "failed to delete";
		}
		return NULL; /* good */
	}
	else {
		lock = lock_any_ref_for_update(namespaced_name, old_sha1, 0);
		lock = lock_any_ref_for_update(name, old_sha1, 0);
		if (!lock) {
			rp_error("failed to lock %s", name);
			return "failed to lock";
@@ -512,29 +491,17 @@ static void run_update_post_hook(struct command *commands)

static void check_aliased_update(struct command *cmd, struct string_list *list)
{
	struct strbuf buf = STRBUF_INIT;
	const char *dst_name;
	struct string_list_item *item;
	struct command *dst_cmd;
	unsigned char sha1[20];
	char cmd_oldh[41], cmd_newh[41], dst_oldh[41], dst_newh[41];
	int flag;

	strbuf_addf(&buf, "%s%s", get_git_namespace(), cmd->ref_name);
	dst_name = resolve_ref(buf.buf, sha1, 0, &flag);
	strbuf_release(&buf);
	const char *dst_name = resolve_ref(cmd->ref_name, sha1, 0, &flag);

	if (!(flag & REF_ISSYMREF))
		return;

	dst_name = strip_namespace(dst_name);
	if (!dst_name) {
		rp_error("refusing update to broken symref '%s'", cmd->ref_name);
		cmd->skip_update = 1;
		cmd->error_string = "broken symref";
		return;
	}

	if ((item = string_list_lookup(list, dst_name)) == NULL)
		return;

@@ -669,7 +636,7 @@ static const char *parse_pack_header(struct pack_header *hdr)

static const char *pack_lockfile;

static const char *unpack(int quiet)
static const char *unpack(void)
{
	struct pack_header hdr;
	const char *hdr_err;
@@ -684,10 +651,8 @@ static const char *unpack(int quiet)

	if (ntohl(hdr.hdr_entries) < unpack_limit) {
		int code, i = 0;
		const char *unpacker[5];
		const char *unpacker[4];
		unpacker[i++] = "unpack-objects";
		if (quiet)
			unpacker[i++] = "-q";
		if (receive_fsck_objects)
			unpacker[i++] = "--strict";
		unpacker[i++] = hdr_arg;
@@ -788,7 +753,6 @@ static void add_alternate_refs(void)

int cmd_receive_pack(int argc, const char **argv, const char *prefix)
{
	int quiet = 0;
	int advertise_refs = 0;
	int stateless_rpc = 0;
	int i;
@@ -802,11 +766,6 @@ int cmd_receive_pack(int argc, const char **argv, const char *prefix)
		const char *arg = *argv++;

		if (*arg == '-') {
			if (!strcmp(arg, "--quiet")) {
				quiet = 1;
				continue;
			}

			if (!strcmp(arg, "--advertise-refs")) {
				advertise_refs = 1;
				continue;
@@ -855,7 +814,7 @@ int cmd_receive_pack(int argc, const char **argv, const char *prefix)
		const char *unpack_status = NULL;

		if (!delete_only(commands))
			unpack_status = unpack(quiet);
			unpack_status = unpack();
		execute_commands(commands, unpack_status);
		if (pack_lockfile)
			unlink_or_warn(pack_lockfile);
@@ -88,6 +88,16 @@ static inline int postfixcmp(const char *string, const char *postfix)
	return strcmp(string + len1 - len2, postfix);
}

static int opt_parse_track(const struct option *opt, const char *arg, int not)
{
	struct string_list *list = opt->value;
	if (not)
		string_list_clear(list, 0);
	else
		string_list_append(list, arg);
	return 0;
}

static int fetch_remote(const char *name)
{
	const char *argv[] = { "fetch", name, NULL, NULL };
@@ -166,8 +176,8 @@ static int add(int argc, const char **argv)
			    TAGS_SET),
		OPT_SET_INT(0, NULL, &fetch_tags,
			    "or do not fetch any tag at all (--no-tags)", TAGS_UNSET),
		OPT_STRING_LIST('t', "track", &track, "branch",
				"branch(es) to track"),
		OPT_CALLBACK('t', "track", &track, "branch",
			     "branch(es) to track", opt_parse_track),
		OPT_STRING('m', "master", &master, "branch", "master branch"),
		{ OPTION_CALLBACK, 0, "mirror", &mirror, "push|fetch",
			"set up remote as a mirror to push to or fetch from",
@@ -258,7 +258,12 @@ static void write_message(struct strbuf *msgbuf, const char *filename)

static struct tree *empty_tree(void)
{
	return lookup_tree((const unsigned char *)EMPTY_TREE_SHA1_BIN);
	struct tree *tree = xcalloc(1, sizeof(struct tree));

	tree->object.parsed = 1;
	tree->object.type = OBJ_TREE;
	pretend_sha1_file(NULL, 0, OBJ_TREE, tree->object.sha1);
	return tree;
}

static NORETURN void die_dirty_index(const char *me)
@@ -439,10 +439,6 @@ int cmd_send_pack(int argc, const char **argv, const char *prefix)
				args.force_update = 1;
				continue;
			}
			if (!strcmp(arg, "--quiet")) {
				args.quiet = 1;
				continue;
			}
			if (!strcmp(arg, "--verbose")) {
				args.verbose = 1;
				continue;
@@ -492,13 +488,8 @@ int cmd_send_pack(int argc, const char **argv, const char *prefix)
		fd[0] = 0;
		fd[1] = 1;
	} else {
		struct strbuf sb = STRBUF_INIT;
		strbuf_addstr(&sb, receivepack);
		if (args.quiet)
			strbuf_addstr(&sb, " --quiet");
		conn = git_connect(fd, dest, sb.buf,
		conn = git_connect(fd, dest, receivepack,
			args.verbose ? CONNECT_VERBOSE : 0);
		strbuf_release(&sb);
	}

	memset(&extra_have, 0, sizeof(extra_have));
@@ -64,7 +64,7 @@ static int run_upload_archive(int argc, const char **argv, const char *prefix)
	sent_argv[sent_argc] = NULL;

	/* parse all options sent by the client */
	return write_archive(sent_argc, sent_argv, prefix, 0, NULL, 1);
	return write_archive(sent_argc, sent_argv, prefix, 0);
}

__attribute__((format (printf, 1, 2)))
@@ -1,53 +1,134 @@
#include "builtin.h"
#include "cache.h"
#include "run-command.h"
#include "pack.h"
#include "pack-revindex.h"
#include "parse-options.h"

#define MAX_CHAIN 50

#define VERIFY_PACK_VERBOSE 01
#define VERIFY_PACK_STAT_ONLY 02

static void show_pack_info(struct packed_git *p, unsigned int flags)
{
	uint32_t nr_objects, i;
	int cnt;
	int stat_only = flags & VERIFY_PACK_STAT_ONLY;
	unsigned long chain_histogram[MAX_CHAIN+1], baseobjects;

	nr_objects = p->num_objects;
	memset(chain_histogram, 0, sizeof(chain_histogram));
	baseobjects = 0;

	for (i = 0; i < nr_objects; i++) {
		const unsigned char *sha1;
		unsigned char base_sha1[20];
		const char *type;
		unsigned long size;
		unsigned long store_size;
		off_t offset;
		unsigned int delta_chain_length;

		sha1 = nth_packed_object_sha1(p, i);
		if (!sha1)
			die("internal error pack-check nth-packed-object");
		offset = nth_packed_object_offset(p, i);
		type = packed_object_info_detail(p, offset, &size, &store_size,
						 &delta_chain_length,
						 base_sha1);
		if (!stat_only)
			printf("%s ", sha1_to_hex(sha1));
		if (!delta_chain_length) {
			if (!stat_only)
				printf("%-6s %lu %lu %"PRIuMAX"\n",
				       type, size, store_size, (uintmax_t)offset);
			baseobjects++;
		}
		else {
			if (!stat_only)
				printf("%-6s %lu %lu %"PRIuMAX" %u %s\n",
				       type, size, store_size, (uintmax_t)offset,
				       delta_chain_length, sha1_to_hex(base_sha1));
			if (delta_chain_length <= MAX_CHAIN)
				chain_histogram[delta_chain_length]++;
			else
				chain_histogram[0]++;
		}
	}

	if (baseobjects)
		printf("non delta: %lu object%s\n",
		       baseobjects, baseobjects > 1 ? "s" : "");

	for (cnt = 1; cnt <= MAX_CHAIN; cnt++) {
		if (!chain_histogram[cnt])
			continue;
		printf("chain length = %d: %lu object%s\n", cnt,
		       chain_histogram[cnt],
		       chain_histogram[cnt] > 1 ? "s" : "");
	}
	if (chain_histogram[0])
		printf("chain length > %d: %lu object%s\n", MAX_CHAIN,
		       chain_histogram[0],
		       chain_histogram[0] > 1 ? "s" : "");
}

static int verify_one_pack(const char *path, unsigned int flags)
{
	struct child_process index_pack;
	const char *argv[] = {"index-pack", NULL, NULL, NULL };
	struct strbuf arg = STRBUF_INIT;
	char arg[PATH_MAX];
	int len;
	int verbose = flags & VERIFY_PACK_VERBOSE;
	int stat_only = flags & VERIFY_PACK_STAT_ONLY;
	struct packed_git *pack;
	int err;

	if (stat_only)
		argv[1] = "--verify-stat-only";
	else if (verbose)
		argv[1] = "--verify-stat";
	else
		argv[1] = "--verify";
	len = strlcpy(arg, path, PATH_MAX);
	if (len >= PATH_MAX)
		return error("name too long: %s", path);

	/*
	 * In addition to "foo.pack" we accept "foo.idx" and "foo";
	 * normalize these forms to "foo.pack" for "index-pack --verify".
	 * In addition to "foo.idx" we accept "foo.pack" and "foo";
	 * normalize these forms to "foo.idx" for add_packed_git().
	 */
	strbuf_addstr(&arg, path);
	if (has_extension(arg.buf, ".idx"))
		strbuf_splice(&arg, arg.len - 3, 3, "pack", 4);
	else if (!has_extension(arg.buf, ".pack"))
		strbuf_add(&arg, ".pack", 5);
	argv[2] = arg.buf;
	if (has_extension(arg, ".pack")) {
		strcpy(arg + len - 5, ".idx");
		len--;
	} else if (!has_extension(arg, ".idx")) {
		if (len + 4 >= PATH_MAX)
			return error("name too long: %s.idx", arg);
		strcpy(arg + len, ".idx");
		len += 4;
	}

	memset(&index_pack, 0, sizeof(index_pack));
	index_pack.argv = argv;
	index_pack.git_cmd = 1;
	/*
	 * add_packed_git() uses our buffer (containing "foo.idx") to
	 * build the pack filename ("foo.pack"). Make sure it fits.
	 */
	if (len + 1 >= PATH_MAX) {
		arg[len - 4] = '\0';
		return error("name too long: %s.pack", arg);
	}

	err = run_command(&index_pack);
	pack = add_packed_git(arg, len, 1);
	if (!pack)
		return error("packfile %s not found.", arg);

	install_packed_git(pack);

	if (!stat_only)
		err = verify_pack(pack);
	else
		err = open_pack_index(pack);

	if (verbose || stat_only) {
		if (err)
			printf("%s: bad\n", arg.buf);
			printf("%s: bad\n", pack->pack_name);
		else {
			show_pack_info(pack, flags);
			if (!stat_only)
				printf("%s: ok\n", arg.buf);
				printf("%s: ok\n", pack->pack_name);
		}
	}
	strbuf_release(&arg);

	return err;
}
@@ -78,6 +159,7 @@ int cmd_verify_pack(int argc, const char **argv, const char *prefix)
	for (i = 0; i < argc; i++) {
		if (verify_one_pack(argv[i], flags))
			err = 1;
		discard_revindex();
	}

	return err;
84 cache.h
@@ -6,7 +6,6 @@
#include "hash.h"
#include "advice.h"
#include "gettext.h"
#include "convert.h"

#include SHA1_HEADER
#ifndef git_SHA_CTX
@@ -394,7 +393,6 @@ static inline enum object_type object_type(unsigned int mode)
}

#define GIT_DIR_ENVIRONMENT "GIT_DIR"
#define GIT_NAMESPACE_ENVIRONMENT "GIT_NAMESPACE"
#define GIT_WORK_TREE_ENVIRONMENT "GIT_WORK_TREE"
#define DEFAULT_GIT_DIR_ENVIRONMENT ".git"
#define DB_ENVIRONMENT "GIT_OBJECT_DIRECTORY"
@@ -435,16 +433,13 @@ extern char *get_object_directory(void);
extern char *get_index_file(void);
extern char *get_graft_file(void);
extern int set_git_dir(const char *path);
extern const char *get_git_namespace(void);
extern const char *strip_namespace(const char *namespaced_ref);
extern const char *get_git_work_tree(void);
extern const char *read_gitfile_gently(const char *path);
extern const char *read_gitfile(const char *path);
extern void set_git_work_tree(const char *tree);

#define ALTERNATE_DB_ENVIRONMENT "GIT_ALTERNATE_OBJECT_DIRECTORIES"

extern const char **get_pathspec(const char *prefix, const char **pathspec);
extern const char *pathspec_prefix(const char *prefix, const char **pathspec);
extern void setup_work_tree(void);
extern const char *setup_git_directory_gently(int *);
extern const char *setup_git_directory(void);
@@ -601,6 +596,35 @@ extern int fsync_object_files;
extern int core_preload_index;
extern int core_apply_sparse_checkout;

enum safe_crlf {
	SAFE_CRLF_FALSE = 0,
	SAFE_CRLF_FAIL = 1,
	SAFE_CRLF_WARN = 2
};

extern enum safe_crlf safe_crlf;

enum auto_crlf {
	AUTO_CRLF_FALSE = 0,
	AUTO_CRLF_TRUE = 1,
	AUTO_CRLF_INPUT = -1
};

extern enum auto_crlf auto_crlf;

enum eol {
	EOL_UNSET,
	EOL_CRLF,
	EOL_LF,
#ifdef NATIVE_CRLF
	EOL_NATIVE = EOL_CRLF
#else
	EOL_NATIVE = EOL_LF
#endif
};

extern enum eol core_eol;

enum branch_track {
	BRANCH_TRACK_UNSPECIFIED = -1,
	BRANCH_TRACK_NEVER = 0,
@@ -737,7 +761,7 @@ extern char *expand_user_path(const char *path);
char *enter_repo(char *path, int strict);
static inline int is_absolute_path(const char *path)
{
	return is_dir_sep(path[0]) || has_dos_drive_prefix(path);
	return path[0] == '/' || has_dos_drive_prefix(path);
}
int is_directory(const char *);
const char *real_path(const char *path);
@@ -770,16 +794,10 @@ extern int hash_sha1_file(const void *buf, unsigned long len, const char *type,
extern int write_sha1_file(const void *buf, unsigned long len, const char *type, unsigned char *return_sha1);
extern int pretend_sha1_file(void *, unsigned long, enum object_type, unsigned char *);
extern int force_object_loose(const unsigned char *sha1, time_t mtime);
extern void *map_sha1_file(const unsigned char *sha1, unsigned long *size);
extern int unpack_sha1_header(git_zstream *stream, unsigned char *map, unsigned long mapsize, void *buffer, unsigned long bufsiz);
extern int parse_sha1_header(const char *hdr, unsigned long *sizep);

/* global flag to enable extra checks when accessing packed objects */
extern int do_check_packed_object_crc;

/* for development: log offset of pack access */
extern const char *log_pack_access;

extern int check_sha1_signature(const unsigned char *sha1, void *buf, unsigned long size, const char *type);

extern int move_temp_to_file(const char *tmpfile, const char *filename);
@@ -1017,36 +1035,7 @@ extern off_t find_pack_entry_one(const unsigned char *, struct packed_git *);
extern void *unpack_entry(struct packed_git *, off_t, enum object_type *, unsigned long *);
extern unsigned long unpack_object_header_buffer(const unsigned char *buf, unsigned long len, enum object_type *type, unsigned long *sizep);
extern unsigned long get_size_from_delta(struct packed_git *, struct pack_window **, off_t);
extern int unpack_object_header(struct packed_git *, struct pack_window **, off_t *, unsigned long *);

struct object_info {
	/* Request */
	unsigned long *sizep;

	/* Response */
	enum {
		OI_CACHED,
		OI_LOOSE,
		OI_PACKED,
		OI_DBCACHED
	} whence;
	union {
		/*
		 * struct {
		 *	... Nothing to expose in this case
		 * } cached;
		 * struct {
		 *	... Nothing to expose in this case
		 * } loose;
		 */
		struct {
			struct packed_git *pack;
			off_t offset;
			unsigned int is_delta;
		} packed;
	} u;
};
extern int sha1_object_info_extended(const unsigned char *, struct object_info *);
extern const char *packed_object_info_detail(struct packed_git *, off_t, unsigned long *, unsigned long *, unsigned int *, unsigned char *);

/* Dumb servers support */
extern int update_server_info(int);
@@ -1088,8 +1077,6 @@ extern int config_error_nonbool(const char *);
extern const char *get_log_output_encoding(void);
extern const char *get_commit_output_encoding(void);

extern int git_config_parse_parameter(const char *, config_fn_t fn, void *data);

extern const char *config_exclusive_filename;

#define MAX_GITNAME (1000)
@@ -1156,6 +1143,13 @@ extern void trace_strbuf(const char *key, const struct strbuf *buf);

void packet_trace_identity(const char *prog);

/* convert.c */
/* returns 1 if *dst was used */
extern int convert_to_git(const char *path, const char *src, size_t len,
			  struct strbuf *dst, enum safe_crlf checksafe);
extern int convert_to_working_tree(const char *path, const char *src, size_t len, struct strbuf *dst);
extern int renormalize_buffer(const char *path, const char *src, size_t len, struct strbuf *dst);

/* add */
/*
 * return 0 if success, 1 - if addition of a file failed and
@@ -114,7 +114,8 @@ static int git_cygwin_config(const char *var, const char *value, void *cb)

static int init_stat(void)
{
	if (have_git_dir() && git_config(git_cygwin_config,NULL)) {
	if (have_git_dir()) {
		git_config(git_cygwin_config, NULL);
		if (!core_filemode && native_stat) {
			cygwin_stat_fn = cygwin_stat;
			cygwin_lstat_fn = cygwin_lstat;
@@ -178,7 +178,7 @@ static int ask_yes_no_if_possible(const char *format, ...)
	vsnprintf(question, sizeof(question), format, args);
	va_end(args);

	if ((retry_hook[0] = mingw_getenv("GIT_ASK_YESNO"))) {
	if ((retry_hook[0] = getenv("GIT_ASK_YESNO"))) {
		retry_hook[1] = question;
		return !run_command_v_opt(retry_hook, 0);
	}
@@ -599,6 +599,19 @@ char *mingw_getcwd(char *pointer, int len)
	return ret;
}

#undef getenv
char *mingw_getenv(const char *name)
{
	char *result = getenv(name);
	if (!result && !strcmp(name, "TMPDIR")) {
		/* on Windows it is TMP and TEMP */
		result = getenv("TMP");
		if (!result)
			result = getenv("TEMP");
	}
	return result;
}

/*
 * See http://msdn2.microsoft.com/en-us/library/17w5ykft(vs.71).aspx
 * (Parsing C++ Command-Line Arguments)
@@ -698,7 +711,7 @@ static const char *parse_interpreter(const char *cmd)
 */
static char **get_path_split(void)
{
	char *p, **path, *envpath = mingw_getenv("PATH");
	char *p, **path, *envpath = getenv("PATH");
	int i, n = 0;

	if (!envpath || !*envpath)
@@ -1115,36 +1128,6 @@ char **make_augmented_environ(const char *const *vars)
	return env;
}

#undef getenv

/*
 * The system's getenv looks up the name in a case-insensitive manner.
 * This version tries a case-sensitive lookup and falls back to
 * case-insensitive if nothing was found. This is necessary because,
 * as a prominent example, CMD sets 'Path', but not 'PATH'.
 * Warning: not thread-safe.
 */
static char *getenv_cs(const char *name)
{
	size_t len = strlen(name);
	int i = lookup_env(environ, name, len);
	if (i >= 0)
		return environ[i] + len + 1; /* skip past name and '=' */
	return getenv(name);
}

char *mingw_getenv(const char *name)
{
	char *result = getenv_cs(name);
	if (!result && !strcmp(name, "TMPDIR")) {
		/* on Windows it is TMP and TEMP */
		result = getenv_cs("TMP");
		if (!result)
			result = getenv_cs("TEMP");
	}
	return result;
}

/*
 * Note, this isn't a complete replacement for getaddrinfo. It assumes
 * that service contains a numerical port, or that it is null. It
@@ -300,15 +300,6 @@ int winansi_fprintf(FILE *stream, const char *format, ...) __attribute__((format

#define has_dos_drive_prefix(path) (isalpha(*(path)) && (path)[1] == ':')
#define is_dir_sep(c) ((c) == '/' || (c) == '\\')
static inline char *mingw_find_last_dir_sep(const char *path)
{
	char *ret = NULL;
	for (; *path; ++path)
		if (is_dir_sep(*path))
			ret = (char *)path;
	return ret;
}
#define find_last_dir_sep mingw_find_last_dir_sep
#define PATH_SEP ';'
#define PRIuMAX "I64u"
87 config.c
@@ -12,18 +12,10 @@

#define MAXNAME (256)

typedef struct config_file {
	struct config_file *prev;
	FILE *f;
	const char *name;
	int linenr;
	int eof;
	struct strbuf value;
	char var[MAXNAME];
} config_file;

static config_file *cf;

static FILE *config_file;
static const char *config_file_name;
static int config_linenr;
static int config_file_eof;
static int zlib_compression_seen;

const char *config_exclusive_filename = NULL;
@@ -47,8 +39,8 @@ void git_config_push_parameter(const char *text)
	strbuf_release(&env);
}

int git_config_parse_parameter(const char *text,
			       config_fn_t fn, void *data)
static int git_config_parse_parameter(const char *text,
				      config_fn_t fn, void *data)
{
	struct strbuf **pair;
	pair = strbuf_split_str(text, '=', 2);
@@ -107,7 +99,7 @@ static int get_next_char(void)
	FILE *f;

	c = '\n';
	if (cf && ((f = cf->f) != NULL)) {
	if ((f = config_file) != NULL) {
		c = fgetc(f);
		if (c == '\r') {
			/* DOS like systems */
@@ -118,9 +110,9 @@ static int get_next_char(void)
			}
		}
		if (c == '\n')
			cf->linenr++;
			config_linenr++;
		if (c == EOF) {
			cf->eof = 1;
			config_file_eof = 1;
			c = '\n';
		}
	}
@@ -129,20 +121,21 @@ static int get_next_char(void)

static char *parse_value(void)
{
	static struct strbuf value = STRBUF_INIT;
	int quote = 0, comment = 0, space = 0;

	strbuf_reset(&cf->value);
	strbuf_reset(&value);
	for (;;) {
		int c = get_next_char();
		if (c == '\n') {
			if (quote)
				return NULL;
			return cf->value.buf;
			return value.buf;
		}
		if (comment)
			continue;
		if (isspace(c) && !quote) {
			if (cf->value.len)
			if (value.len)
				space++;
			continue;
		}
@@ -153,7 +146,7 @@ static char *parse_value(void)
			}
		}
		for (; space; space--)
			strbuf_addch(&cf->value, ' ');
			strbuf_addch(&value, ' ');
		if (c == '\\') {
			c = get_next_char();
			switch (c) {
@@ -175,14 +168,14 @@ static char *parse_value(void)
			default:
				return NULL;
			}
			strbuf_addch(&cf->value, c);
			strbuf_addch(&value, c);
			continue;
		}
		if (c == '"') {
			quote = 1-quote;
			continue;
		}
		strbuf_addch(&cf->value, c);
		strbuf_addch(&value, c);
	}
}

@@ -199,7 +192,7 @@ static int get_value(config_fn_t fn, void *data, char *name, unsigned int len)
	/* Get the full name */
	for (;;) {
		c = get_next_char();
		if (cf->eof)
		if (config_file_eof)
			break;
		if (!iskeychar(c))
			break;
@@ -263,7 +256,7 @@ static int get_base_var(char *name)

	for (;;) {
		int c = get_next_char();
		if (cf->eof)
		if (config_file_eof)
			return -1;
		if (c == ']')
			return baselen;
@@ -281,7 +274,7 @@ static int git_parse_file(config_fn_t fn, void *data)
{
	int comment = 0;
	int baselen = 0;
	char *var = cf->var;
	static char var[MAXNAME];

	/* U+FEFF Byte Order Mark in UTF8 */
	static const unsigned char *utf8_bom = (unsigned char *) "\xef\xbb\xbf";
@@ -305,7 +298,7 @@ static int git_parse_file(config_fn_t fn, void *data)
			}
		}
		if (c == '\n') {
			if (cf->eof)
			if (config_file_eof)
				return 0;
			comment = 0;
			continue;
@@ -330,7 +323,7 @@ static int git_parse_file(config_fn_t fn, void *data)
			if (get_value(fn, data, var, baselen+1) < 0)
				break;
	}
	die("bad config file line %d in %s", cf->linenr, cf->name);
	die("bad config file line %d in %s", config_linenr, config_file_name);
}

static int parse_unit_factor(const char *end, unsigned long *val)
@@ -381,8 +374,8 @@ int git_parse_ulong(const char *value, unsigned long *ret)

static void die_bad_config(const char *name)
{
	if (cf && cf->name)
		die("bad config value for '%s' in %s", name, cf->name);
	if (config_file_name)
		die("bad config value for '%s' in %s", name, config_file_name);
	die("bad config value for '%s'", name);
}

@@ -576,9 +569,6 @@ static int git_default_core_config(const char *var, const char *value)
		return 0;
	}

	if (!strcmp(var, "core.logpackaccess"))
		return git_config_string(&log_pack_access, var, value);

	if (!strcmp(var, "core.autocrlf")) {
		if (value && !strcasecmp(value, "input")) {
			if (core_eol == EOL_CRLF)
@@ -805,24 +795,13 @@ int git_config_from_file(config_fn_t fn, const char *filename, void *data)

	ret = -1;
	if (f) {
		config_file top;

		/* push config-file parsing state stack */
		top.prev = cf;
		top.f = f;
		top.name = filename;
		top.linenr = 1;
		top.eof = 0;
		strbuf_init(&top.value, 1024);
		cf = &top;

		config_file = f;
		config_file_name = filename;
		config_linenr = 1;
		config_file_eof = 0;
		ret = git_parse_file(fn, data);

		/* pop config-file parsing state stack */
		strbuf_release(&top.value);
		cf = top.prev;

		fclose(f);
		config_file_name = NULL;
	}
	return ret;
}
@@ -930,7 +909,6 @@ static int store_aux(const char *key, const char *value, void *cb)
{
	const char *ep;
	size_t section_len;
	FILE *f = cf->f;

	switch (store.state) {
	case KEY_SEEN:
@@ -942,7 +920,7 @@ static int store_aux(const char *key, const char *value, void *cb)
				return 1;
			}

			store.offset[store.seen] = ftell(f);
			store.offset[store.seen] = ftell(config_file);
			store.seen++;
		}
		break;
@@ -969,19 +947,19 @@ static int store_aux(const char *key, const char *value, void *cb)
			 * Do not increment matches: this is no match, but we
			 * just made sure we are in the desired section.
			 */
			store.offset[store.seen] = ftell(f);
			store.offset[store.seen] = ftell(config_file);
			/* fallthru */
		case SECTION_END_SEEN:
		case START:
			if (matches(key, value)) {
				store.offset[store.seen] = ftell(f);
				store.offset[store.seen] = ftell(config_file);
				store.state = KEY_SEEN;
				store.seen++;
			} else {
				if (strrchr(key, '.') - key == store.baselen &&
				    !strncmp(key, store.key, store.baselen)) {
					store.state = SECTION_SEEN;
					store.offset[store.seen] = ftell(f);
					store.offset[store.seen] = ftell(config_file);
				}
			}
		}
@@ -1437,7 +1415,6 @@ int git_config_rename_section(const char *old_name, const char *new_name)
	struct lock_file *lock = xcalloc(sizeof(struct lock_file), 1);
	int out_fd;
	char buf[1024];
	FILE *config_file;

	if (config_exclusive_filename)
		config_filename = xstrdup(config_exclusive_filename);
23 connect.c

@@ -254,8 +254,7 @@ static int git_tcp_connect_sock(char *host, int flags)
*/
static int git_tcp_connect_sock(char *host, int flags)
{
struct strbuf error_message = STRBUF_INIT;
int sockfd = -1;
int sockfd = -1, saved_errno = 0;
const char *port = STR(DEFAULT_GIT_PORT);
char *ep;
struct hostent *he;
@@ -285,21 +284,25 @@ static int git_tcp_connect_sock(char *host, int flags)
fprintf(stderr, "done.\nConnecting to %s (port %s) ... ", host, port);

for (cnt = 0, ap = he->h_addr_list; *ap; ap++, cnt++) {
sockfd = socket(he->h_addrtype, SOCK_STREAM, 0);
if (sockfd < 0) {
saved_errno = errno;
continue;
}

memset(&sa, 0, sizeof sa);
sa.sin_family = he->h_addrtype;
sa.sin_port = htons(nport);
memcpy(&sa.sin_addr, *ap, he->h_length);

sockfd = socket(he->h_addrtype, SOCK_STREAM, 0);
if ((sockfd < 0) ||
connect(sockfd, (struct sockaddr *)&sa, sizeof sa) < 0) {
strbuf_addf(&error_message, "%s[%d: %s]: errno=%s\n",
if (connect(sockfd, (struct sockaddr *)&sa, sizeof sa) < 0) {
saved_errno = errno;
fprintf(stderr, "%s[%d: %s]: errno=%s\n",
host,
cnt,
inet_ntoa(*(struct in_addr *)&sa.sin_addr),
strerror(errno));
if (0 <= sockfd)
close(sockfd);
strerror(saved_errno));
close(sockfd);
sockfd = -1;
continue;
}
@@ -310,7 +313,7 @@ static int git_tcp_connect_sock(char *host, int flags)
}

if (sockfd < 0)
die("unable to connect to %s:\n%s", host, error_message.buf);
die("unable to connect a socket (%s)", strerror(saved_errno));

if (flags & CONNECT_VERBOSE)
fprintf(stderr, "done.\n");
@@ -1469,7 +1469,7 @@ _git_help ()
__gitcomp "$__git_all_commands $(__git_aliases)
attributes cli core-tutorial cvs-migration
diffcore gitk glossary hooks ignore modules
namespaces repository-layout tutorial tutorial-2
repository-layout tutorial tutorial-2
workflows
"
}
@@ -2640,7 +2640,6 @@ _git ()
--exec-path
--html-path
--work-tree=
--namespace=
--help
"
;;
@@ -1649,8 +1649,7 @@ class P4Sync(Command, P4UserMap):
def importHeadRevision(self, revision):
print "Doing initial import of %s from revision %s into %s" % (' '.join(self.depotPaths), revision, self.branch)

details = {}
details["user"] = "git perforce import user"
details = { "user" : "git perforce import user", "time" : int(time.time()) }
details["desc"] = ("Initial import of %s from the state at revision %s\n"
% (' '.join(self.depotPaths), revision))
details["change"] = revision
@@ -1690,18 +1689,6 @@ class P4Sync(Command, P4UserMap):
fileCnt = fileCnt + 1

details["change"] = newestRevision

# Use time from top-most change so that all git-p4 clones of
# the same p4 repo have the same commit SHA1s.
res = p4CmdList("describe -s %d" % newestRevision)
newestTime = None
for r in res:
if r.has_key('time'):
newestTime = int(r['time'])
if newestTime is None:
die("\"describe -s\" on newest change %d did not give a time")
details["time"] = newestTime

self.updateOptionDict(details)
try:
self.commit(details, self.extractFilesFromCommit(details), self.branch, self.depotPaths)
@@ -60,11 +60,6 @@
# email body. If not specified, there is no limit.
# Lines beyond the limit are suppressed and counted, and a final
# line is added indicating the number of suppressed lines.
# hooks.diffopts
# Alternate options for the git diff-tree invocation that shows changes.
# Default is "--stat --summary --find-copies-harder". Add -p to those
# options to include a unified diff of changes in addition to the usual
# summary output.
#
# Notes
# -----
@@ -451,7 +446,7 @@ generate_update_branch_email()
# non-fast-forward updates.
echo ""
echo "Summary of changes:"
git diff-tree $diffopts $oldrev..$newrev
git diff-tree --stat --summary --find-copies-harder $oldrev..$newrev
}

#
@@ -728,8 +723,6 @@ envelopesender=$(git config hooks.envelopesender)
emailprefix=$(git config hooks.emailprefix || echo '[SCM] ')
custom_showrev=$(git config hooks.showrev)
maxlines=$(git config hooks.emailmaxlines)
diffopts=$(git config hooks.diffopts)
: ${diffopts:="--stat --summary --find-copies-harder"}

# --- Main loop
# Allow dual mode: run from the command line just like the update hook, or
399 convert.c

@@ -727,7 +727,7 @@ static void convert_attrs(struct conv_attrs *ca, const char *path)
git_config(read_convert_config, NULL);
}

if (!git_check_attr(path, NUM_CONV_ATTRS, ccheck)) {
if (!git_checkattr(path, NUM_CONV_ATTRS, ccheck)) {
ca->crlf_action = git_path_check_crlf(path, ccheck + 4);
if (ca->crlf_action == CRLF_GUESS)
ca->crlf_action = git_path_check_crlf(path, ccheck + 0);
@@ -813,400 +813,3 @@ int renormalize_buffer(const char *path, const char *src, size_t len, struct str
}
return ret | convert_to_git(path, src, len, dst, 0);
}

/*****************************************************************
*
* Streaming conversion support
*
*****************************************************************/

typedef int (*filter_fn)(struct stream_filter *,
const char *input, size_t *isize_p,
char *output, size_t *osize_p);
typedef void (*free_fn)(struct stream_filter *);

struct stream_filter_vtbl {
filter_fn filter;
free_fn free;
};

struct stream_filter {
struct stream_filter_vtbl *vtbl;
};

static int null_filter_fn(struct stream_filter *filter,
const char *input, size_t *isize_p,
char *output, size_t *osize_p)
{
size_t count;

if (!input)
return 0; /* we do not keep any states */
count = *isize_p;
if (*osize_p < count)
count = *osize_p;
if (count) {
memmove(output, input, count);
*isize_p -= count;
*osize_p -= count;
}
return 0;
}

static void null_free_fn(struct stream_filter *filter)
{
; /* nothing -- null instances are shared */
}

static struct stream_filter_vtbl null_vtbl = {
null_filter_fn,
null_free_fn,
};

static struct stream_filter null_filter_singleton = {
&null_vtbl,
};

int is_null_stream_filter(struct stream_filter *filter)
{
return filter == &null_filter_singleton;
}


/*
* LF-to-CRLF filter
*/
static int lf_to_crlf_filter_fn(struct stream_filter *filter,
const char *input, size_t *isize_p,
char *output, size_t *osize_p)
{
size_t count;

if (!input)
return 0; /* we do not keep any states */
count = *isize_p;
if (count) {
size_t i, o;
for (i = o = 0; o < *osize_p && i < count; i++) {
char ch = input[i];
if (ch == '\n') {
if (o + 1 < *osize_p)
output[o++] = '\r';
else
break;
}
output[o++] = ch;
}

*osize_p -= o;
*isize_p -= i;
}
return 0;
}

static struct stream_filter_vtbl lf_to_crlf_vtbl = {
lf_to_crlf_filter_fn,
null_free_fn,
};

static struct stream_filter lf_to_crlf_filter_singleton = {
&lf_to_crlf_vtbl,
};


/*
* Cascade filter
*/
#define FILTER_BUFFER 1024
struct cascade_filter {
struct stream_filter filter;
struct stream_filter *one;
struct stream_filter *two;
char buf[FILTER_BUFFER];
int end, ptr;
};

static int cascade_filter_fn(struct stream_filter *filter,
const char *input, size_t *isize_p,
char *output, size_t *osize_p)
{
struct cascade_filter *cas = (struct cascade_filter *) filter;
size_t filled = 0;
size_t sz = *osize_p;
size_t to_feed, remaining;

/*
* input -- (one) --> buf -- (two) --> output
*/
while (filled < sz) {
remaining = sz - filled;

/* do we already have something to feed two with? */
if (cas->ptr < cas->end) {
to_feed = cas->end - cas->ptr;
if (stream_filter(cas->two,
cas->buf + cas->ptr, &to_feed,
output + filled, &remaining))
return -1;
cas->ptr += (cas->end - cas->ptr) - to_feed;
filled = sz - remaining;
continue;
}

/* feed one from upstream and have it emit into our buffer */
to_feed = input ? *isize_p : 0;
if (input && !to_feed)
break;
remaining = sizeof(cas->buf);
if (stream_filter(cas->one,
input, &to_feed,
cas->buf, &remaining))
return -1;
cas->end = sizeof(cas->buf) - remaining;
cas->ptr = 0;
if (input) {
size_t fed = *isize_p - to_feed;
*isize_p -= fed;
input += fed;
}

/* do we know that we drained one completely? */
if (input || cas->end)
continue;

/* tell two to drain; we have nothing more to give it */
to_feed = 0;
remaining = sz - filled;
if (stream_filter(cas->two,
NULL, &to_feed,
output + filled, &remaining))
return -1;
if (remaining == (sz - filled))
break; /* completely drained two */
filled = sz - remaining;
}
*osize_p -= filled;
return 0;
}

static void cascade_free_fn(struct stream_filter *filter)
{
struct cascade_filter *cas = (struct cascade_filter *)filter;
free_stream_filter(cas->one);
free_stream_filter(cas->two);
free(filter);
}

static struct stream_filter_vtbl cascade_vtbl = {
cascade_filter_fn,
cascade_free_fn,
};

static struct stream_filter *cascade_filter(struct stream_filter *one,
struct stream_filter *two)
{
struct cascade_filter *cascade;

if (!one || is_null_stream_filter(one))
return two;
if (!two || is_null_stream_filter(two))
return one;

cascade = xmalloc(sizeof(*cascade));
cascade->one = one;
cascade->two = two;
cascade->end = cascade->ptr = 0;
cascade->filter.vtbl = &cascade_vtbl;
return (struct stream_filter *)cascade;
}

/*
* ident filter
*/
#define IDENT_DRAINING (-1)
#define IDENT_SKIPPING (-2)
struct ident_filter {
struct stream_filter filter;
struct strbuf left;
int state;
char ident[45]; /* ": x40 $" */
};

static int is_foreign_ident(const char *str)
{
int i;

if (prefixcmp(str, "$Id: "))
return 0;
for (i = 5; str[i]; i++) {
if (isspace(str[i]) && str[i+1] != '$')
return 1;
}
return 0;
}

static void ident_drain(struct ident_filter *ident, char **output_p, size_t *osize_p)
{
size_t to_drain = ident->left.len;

if (*osize_p < to_drain)
to_drain = *osize_p;
if (to_drain) {
memcpy(*output_p, ident->left.buf, to_drain);
strbuf_remove(&ident->left, 0, to_drain);
*output_p += to_drain;
*osize_p -= to_drain;
}
if (!ident->left.len)
ident->state = 0;
}

static int ident_filter_fn(struct stream_filter *filter,
const char *input, size_t *isize_p,
char *output, size_t *osize_p)
{
struct ident_filter *ident = (struct ident_filter *)filter;
static const char head[] = "$Id";

if (!input) {
/* drain upon eof */
switch (ident->state) {
default:
strbuf_add(&ident->left, head, ident->state);
case IDENT_SKIPPING:
/* fallthru */
case IDENT_DRAINING:
ident_drain(ident, &output, osize_p);
}
return 0;
}

while (*isize_p || (ident->state == IDENT_DRAINING)) {
int ch;

if (ident->state == IDENT_DRAINING) {
ident_drain(ident, &output, osize_p);
if (!*osize_p)
break;
continue;
}

ch = *(input++);
(*isize_p)--;

if (ident->state == IDENT_SKIPPING) {
/*
* Skipping until '$' or LF, but keeping them
* in case it is a foreign ident.
*/
strbuf_addch(&ident->left, ch);
if (ch != '\n' && ch != '$')
continue;
if (ch == '$' && !is_foreign_ident(ident->left.buf)) {
strbuf_setlen(&ident->left, sizeof(head) - 1);
strbuf_addstr(&ident->left, ident->ident);
}
ident->state = IDENT_DRAINING;
continue;
}

if (ident->state < sizeof(head) &&
head[ident->state] == ch) {
ident->state++;
continue;
}

if (ident->state)
strbuf_add(&ident->left, head, ident->state);
if (ident->state == sizeof(head) - 1) {
if (ch != ':' && ch != '$') {
strbuf_addch(&ident->left, ch);
ident->state = 0;
continue;
}

if (ch == ':') {
strbuf_addch(&ident->left, ch);
ident->state = IDENT_SKIPPING;
} else {
strbuf_addstr(&ident->left, ident->ident);
ident->state = IDENT_DRAINING;
}
continue;
}

strbuf_addch(&ident->left, ch);
ident->state = IDENT_DRAINING;
}
return 0;
}

static void ident_free_fn(struct stream_filter *filter)
{
struct ident_filter *ident = (struct ident_filter *)filter;
strbuf_release(&ident->left);
free(filter);
}

static struct stream_filter_vtbl ident_vtbl = {
ident_filter_fn,
ident_free_fn,
};

static struct stream_filter *ident_filter(const unsigned char *sha1)
{
struct ident_filter *ident = xmalloc(sizeof(*ident));

sprintf(ident->ident, ": %s $", sha1_to_hex(sha1));
strbuf_init(&ident->left, 0);
ident->filter.vtbl = &ident_vtbl;
ident->state = 0;
return (struct stream_filter *)ident;
}

/*
* Return an appropriately constructed filter for the path, or NULL if
* the contents cannot be filtered without reading the whole thing
* in-core.
*
* Note that you would be crazy to set CRLF, smudge/clean or ident to a
* large binary blob you would want us not to slurp into the memory!
*/
struct stream_filter *get_stream_filter(const char *path, const unsigned char *sha1)
{
struct conv_attrs ca;
enum crlf_action crlf_action;
struct stream_filter *filter = NULL;

convert_attrs(&ca, path);

if (ca.drv && (ca.drv->smudge || ca.drv->clean))
return filter;

if (ca.ident)
filter = ident_filter(sha1);

crlf_action = input_crlf_action(ca.crlf_action, ca.eol_attr);

if ((crlf_action == CRLF_BINARY) || (crlf_action == CRLF_INPUT) ||
(crlf_action == CRLF_GUESS && auto_crlf == AUTO_CRLF_FALSE))
filter = cascade_filter(filter, &null_filter_singleton);

else if (output_eol(crlf_action) == EOL_CRLF &&
!(crlf_action == CRLF_AUTO || crlf_action == CRLF_GUESS))
filter = cascade_filter(filter, &lf_to_crlf_filter_singleton);

return filter;
}

void free_stream_filter(struct stream_filter *filter)
{
filter->vtbl->free(filter);
}

int stream_filter(struct stream_filter *filter,
const char *input, size_t *isize_p,
char *output, size_t *osize_p)
{
return filter->vtbl->filter(filter, input, isize_p, output, osize_p);
}
72 convert.h

@@ -1,72 +0,0 @@
/*
* Copyright (c) 2011, Google Inc.
*/
#ifndef CONVERT_H
#define CONVERT_H

enum safe_crlf {
SAFE_CRLF_FALSE = 0,
SAFE_CRLF_FAIL = 1,
SAFE_CRLF_WARN = 2
};

extern enum safe_crlf safe_crlf;

enum auto_crlf {
AUTO_CRLF_FALSE = 0,
AUTO_CRLF_TRUE = 1,
AUTO_CRLF_INPUT = -1
};

extern enum auto_crlf auto_crlf;

enum eol {
EOL_UNSET,
EOL_CRLF,
EOL_LF,
#ifdef NATIVE_CRLF
EOL_NATIVE = EOL_CRLF
#else
EOL_NATIVE = EOL_LF
#endif
};

extern enum eol core_eol;

/* returns 1 if *dst was used */
extern int convert_to_git(const char *path, const char *src, size_t len,
struct strbuf *dst, enum safe_crlf checksafe);
extern int convert_to_working_tree(const char *path, const char *src,
size_t len, struct strbuf *dst);
extern int renormalize_buffer(const char *path, const char *src, size_t len,
struct strbuf *dst);

/*****************************************************************
*
* Streaming conversion support
*
*****************************************************************/

struct stream_filter; /* opaque */

extern struct stream_filter *get_stream_filter(const char *path, const unsigned char *);
extern void free_stream_filter(struct stream_filter *);
extern int is_null_stream_filter(struct stream_filter *);

/*
* Use as much input up to *isize_p and fill output up to *osize_p;
* update isize_p and osize_p to indicate how much buffer space was
* consumed and filled. Return 0 on success, non-zero on error.
*
* Some filters may need to buffer the input and look-ahead inside it
* to decide what to output, and they may consume more than zero bytes
* of input and still not produce any output. After feeding all the
* input, pass NULL as input and keep calling this function, to let
* such filters know there is no more input coming and it is time for
* them to produce the remaining output based on the buffered input.
*/
extern int stream_filter(struct stream_filter *,
const char *input, size_t *isize_p,
char *output, size_t *osize_p);

#endif /* CONVERT_H */
46 csum-file.c

@@ -11,20 +11,8 @@
#include "progress.h"
#include "csum-file.h"

static void flush(struct sha1file *f, void *buf, unsigned int count)
static void flush(struct sha1file *f, void * buf, unsigned int count)
{
if (0 <= f->check_fd && count) {
unsigned char check_buffer[8192];
ssize_t ret = read_in_full(f->check_fd, check_buffer, count);

if (ret < 0)
die_errno("%s: sha1 file read error", f->name);
if (ret < count)
die("%s: sha1 file truncated", f->name);
if (memcmp(buf, check_buffer, count))
die("sha1 file '%s' validation error", f->name);
}

for (;;) {
int ret = xwrite(f->fd, buf, count);
if (ret > 0) {
@@ -71,17 +59,6 @@ int sha1close(struct sha1file *f, unsigned char *result, unsigned int flags)
fd = 0;
} else
fd = f->fd;
if (0 <= f->check_fd) {
char discard;
int cnt = read_in_full(f->check_fd, &discard, 1);
if (cnt < 0)
die_errno("%s: error when reading the tail of sha1 file",
f->name);
if (cnt)
die("%s: sha1 file has trailing garbage", f->name);
if (close(f->check_fd))
die_errno("%s: sha1 file error on close", f->name);
}
free(f);
return fd;
}
@@ -124,31 +101,10 @@ struct sha1file *sha1fd(int fd, const char *name)
return sha1fd_throughput(fd, name, NULL);
}

struct sha1file *sha1fd_check(const char *name)
{
int sink, check;
struct sha1file *f;

sink = open("/dev/null", O_WRONLY);
if (sink < 0)
return NULL;
check = open(name, O_RDONLY);
if (check < 0) {
int saved_errno = errno;
close(sink);
errno = saved_errno;
return NULL;
}
f = sha1fd(sink, name);
f->check_fd = check;
return f;
}

struct sha1file *sha1fd_throughput(int fd, const char *name, struct progress *tp)
{
struct sha1file *f = xmalloc(sizeof(*f));
f->fd = fd;
f->check_fd = -1;
f->offset = 0;
f->total = 0;
f->tp = tp;
@@ -6,7 +6,6 @@ struct progress;
/* A SHA1-protected file */
struct sha1file {
int fd;
int check_fd;
unsigned int offset;
git_SHA_CTX ctx;
off_t total;
@@ -22,7 +21,6 @@ struct sha1file {
#define CSUM_FSYNC 2

extern struct sha1file *sha1fd(int fd, const char *name);
extern struct sha1file *sha1fd_check(const char *name);
extern struct sha1file *sha1fd_throughput(int fd, const char *name, struct progress *tp);
extern int sha1close(struct sha1file *, unsigned char *, unsigned int);
extern int sha1write(struct sha1file *, void *, unsigned int);
71 diff-lib.c

@@ -445,19 +445,20 @@ static int oneway_diff(struct cache_entry **src, struct unpack_trees_options *o)
return 0;
}

static int diff_cache(struct rev_info *revs,
const unsigned char *tree_sha1,
const char *tree_name,
int cached)
int run_diff_index(struct rev_info *revs, int cached)
{
struct object *ent;
struct tree *tree;
struct tree_desc t;
const char *tree_name;
struct unpack_trees_options opts;
struct tree_desc t;

tree = parse_tree_indirect(tree_sha1);
ent = revs->pending.objects[0].item;
tree_name = revs->pending.objects[0].name;
tree = parse_tree_indirect(ent->sha1);
if (!tree)
return error("bad tree object %s",
tree_name ? tree_name : sha1_to_hex(tree_sha1));
return error("bad tree object %s", tree_name);

memset(&opts, 0, sizeof(opts));
opts.head_idx = 1;
opts.index_only = cached;
@@ -470,15 +471,7 @@ static int diff_cache(struct rev_info *revs,
opts.dst_index = NULL;

init_tree_desc(&t, tree->buffer, tree->size);
return unpack_trees(1, &t, &opts);
}

int run_diff_index(struct rev_info *revs, int cached)
{
struct object_array_entry *ent;

ent = revs->pending.objects;
if (diff_cache(revs, ent->item->sha1, ent->name, cached))
if (unpack_trees(1, &t, &opts))
exit(128);

diff_set_mnemonic_prefix(&revs->diffopt, "c/", cached ? "i/" : "w/");
@@ -490,13 +483,53 @@ int run_diff_index(struct rev_info *revs, int cached)

int do_diff_cache(const unsigned char *tree_sha1, struct diff_options *opt)
{
struct tree *tree;
struct rev_info revs;
int i;
struct cache_entry **dst;
struct cache_entry *last = NULL;
struct unpack_trees_options opts;
struct tree_desc t;

/*
* This is used by git-blame to run diff-cache internally;
* it potentially needs to repeatedly run this, so we will
* start by removing the higher order entries the last round
* left behind.
*/
dst = active_cache;
for (i = 0; i < active_nr; i++) {
struct cache_entry *ce = active_cache[i];
if (ce_stage(ce)) {
if (last && !strcmp(ce->name, last->name))
continue;
cache_tree_invalidate_path(active_cache_tree,
ce->name);
last = ce;
ce->ce_flags |= CE_REMOVE;
}
*dst++ = ce;
}
active_nr = dst - active_cache;

init_revisions(&revs, NULL);
init_pathspec(&revs.prune_data, opt->pathspec.raw);
revs.diffopt = *opt;
tree = parse_tree_indirect(tree_sha1);
if (!tree)
die("bad tree object %s", sha1_to_hex(tree_sha1));

if (diff_cache(&revs, tree_sha1, NULL, 1))
memset(&opts, 0, sizeof(opts));
opts.head_idx = 1;
opts.index_only = 1;
opts.diff_index_cached = !DIFF_OPT_TST(opt, FIND_COPIES_HARDER);
opts.merge = 1;
opts.fn = oneway_diff;
opts.unpack_data = &revs;
opts.src_index = &the_index;
opts.dst_index = &the_index;

init_tree_desc(&t, tree->buffer, tree->size);
if (unpack_trees(1, &t, &opts))
exit(128);
return 0;
}
56
diff.c
56
diff.c
@ -1316,10 +1316,9 @@ static void show_stats(struct diffstat_t *data, struct diff_options *options)
|
||||
int i, len, add, del, adds = 0, dels = 0;
|
||||
uintmax_t max_change = 0, max_len = 0;
|
||||
int total_files = data->nr;
|
||||
int width, name_width, count;
|
||||
int width, name_width;
|
||||
const char *reset, *add_c, *del_c;
|
||||
const char *line_prefix = "";
|
||||
int extra_shown = 0;
|
||||
struct strbuf *msg = NULL;
|
||||
|
||||
if (data->nr == 0)
|
||||
@ -1332,7 +1331,6 @@ static void show_stats(struct diffstat_t *data, struct diff_options *options)
|
||||
|
||||
width = options->stat_width ? options->stat_width : 80;
|
||||
name_width = options->stat_name_width ? options->stat_name_width : 50;
|
||||
count = options->stat_count ? options->stat_count : data->nr;
|
||||
|
||||
/* Sanity: give at least 5 columns to the graph,
|
||||
* but leave at least 10 columns for the name.
|
||||
@ -1349,14 +1347,9 @@ static void show_stats(struct diffstat_t *data, struct diff_options *options)
|
||||
add_c = diff_get_color_opt(options, DIFF_FILE_NEW);
|
||||
del_c = diff_get_color_opt(options, DIFF_FILE_OLD);
|
||||
|
||||
for (i = 0; (i < count) && (i < data->nr); i++) {
|
||||
for (i = 0; i < data->nr; i++) {
|
||||
struct diffstat_file *file = data->files[i];
|
||||
uintmax_t change = file->added + file->deleted;
|
||||
if (!data->files[i]->is_renamed &&
|
||||
(change == 0)) {
|
||||
count++; /* not shown == room for one more */
|
||||
continue;
|
||||
}
|
||||
fill_print_name(file);
|
||||
len = strlen(file->print_name);
|
||||
if (max_len < len)
|
||||
@ -1367,7 +1360,6 @@ static void show_stats(struct diffstat_t *data, struct diff_options *options)
|
||||
if (max_change < change)
|
||||
max_change = change;
|
||||
}
|
||||
count = i; /* min(count, data->nr) */
|
||||
|
||||
/* Compute the width of the graph part;
|
||||
* 10 is for one blank at the beginning of the line plus
|
||||
@ -1382,18 +1374,13 @@ static void show_stats(struct diffstat_t *data, struct diff_options *options)
|
||||
else
|
||||
width = max_change;
|
||||
|
||||
for (i = 0; i < count; i++) {
|
||||
for (i = 0; i < data->nr; i++) {
|
||||
const char *prefix = "";
|
||||
char *name = data->files[i]->print_name;
|
||||
uintmax_t added = data->files[i]->added;
|
||||
uintmax_t deleted = data->files[i]->deleted;
|
||||
int name_len;
|
||||
|
||||
if (!data->files[i]->is_renamed &&
|
||||
(added + deleted == 0)) {
|
||||
total_files--;
|
||||
continue;
|
||||
}
|
||||
/*
|
||||
* "scale" the filename
|
||||
*/
|
||||
@ -1428,6 +1415,11 @@ static void show_stats(struct diffstat_t *data, struct diff_options *options)
|
||||
fprintf(options->file, " Unmerged\n");
|
||||
continue;
|
||||
}
|
||||
else if (!data->files[i]->is_renamed &&
|
||||
(added + deleted == 0)) {
|
||||
total_files--;
|
||||
continue;
|
||||
}
|
||||
|
||||
/*
|
||||
* scale the add/delete
|
||||
@ -1449,20 +1441,6 @@ static void show_stats(struct diffstat_t *data, struct diff_options *options)
|
||||
show_graph(options->file, '-', del, del_c, reset);
|
||||
fprintf(options->file, "\n");
|
||||
}
|
||||
for (i = count; i < data->nr; i++) {
|
||||
		uintmax_t added = data->files[i]->added;
		uintmax_t deleted = data->files[i]->deleted;
		if (!data->files[i]->is_renamed &&
		    (added + deleted == 0)) {
			total_files--;
			continue;
		}
		adds += added;
		dels += deleted;
		if (!extra_shown)
			fprintf(options->file, "%s ...\n", line_prefix);
		extra_shown = 1;
	}
	fprintf(options->file, "%s", line_prefix);
	fprintf(options->file,
		" %d files changed, %d insertions(+), %d deletions(-)\n",
@@ -3230,7 +3208,6 @@ static int stat_opt(struct diff_options *options, const char **av)
	char *end;
	int width = options->stat_width;
	int name_width = options->stat_name_width;
	int count = options->stat_count;
	int argcount = 1;

	arg += strlen("--stat");
@@ -3258,24 +3235,12 @@ static int stat_opt(struct diff_options *options, const char **av)
				name_width = strtoul(av[1], &end, 10);
				argcount = 2;
			}
		} else if (!prefixcmp(arg, "-count")) {
			arg += strlen("-count");
			if (*arg == '=')
				count = strtoul(arg + 1, &end, 10);
			else if (!*arg && !av[1])
				die("Option '--stat-count' requires a value");
			else if (!*arg) {
				count = strtoul(av[1], &end, 10);
				argcount = 2;
			}
		}
		break;
	case '=':
		width = strtoul(arg+1, &end, 10);
		if (*end == ',')
			name_width = strtoul(end+1, &end, 10);
		if (*end == ',')
			count = strtoul(end+1, &end, 10);
	}

	/* Important! This checks all the error cases! */
@@ -3284,7 +3249,6 @@ static int stat_opt(struct diff_options *options, const char **av)
	options->output_format |= DIFF_FORMAT_DIFFSTAT;
	options->stat_name_width = name_width;
	options->stat_width = width;
	options->stat_count = count;
	return argcount;
}

@@ -3349,7 +3313,7 @@ int diff_opt_parse(struct diff_options *options, const char **av, int ac)
	else if (!strcmp(arg, "-s"))
		options->output_format |= DIFF_FORMAT_NO_OUTPUT;
	else if (!prefixcmp(arg, "--stat"))
		/* --stat, --stat-width, --stat-name-width, or --stat-count */
		/* --stat, --stat-width, or --stat-name-width */
		return stat_opt(options, av);

	/* renames options */
@@ -3393,8 +3357,6 @@ int diff_opt_parse(struct diff_options *options, const char **av, int ac)
		DIFF_XDL_SET(options, IGNORE_WHITESPACE_AT_EOL);
	else if (!strcmp(arg, "--patience"))
		DIFF_XDL_SET(options, PATIENCE_DIFF);
	else if (!strcmp(arg, "--histogram"))
		DIFF_XDL_SET(options, HISTOGRAM_DIFF);

	/* flags options */
	else if (!strcmp(arg, "--binary")) {
diff.h
@@ -125,7 +125,6 @@ struct diff_options {

	int stat_width;
	int stat_name_width;
	int stat_count;
	const char *word_regex;
	enum diff_words_type word_diff;

entry.c
@@ -1,7 +1,6 @@
#include "cache.h"
#include "blob.h"
#include "dir.h"
#include "streaming.h"

static void create_directories(const char *path, int path_len,
			       const struct checkout *state)
@@ -92,91 +91,6 @@ static void *read_blob_entry(struct cache_entry *ce, unsigned long *size)
	return NULL;
}

static int open_output_fd(char *path, struct cache_entry *ce, int to_tempfile)
{
	int symlink = (ce->ce_mode & S_IFMT) != S_IFREG;
	if (to_tempfile) {
		strcpy(path, symlink
		       ? ".merge_link_XXXXXX" : ".merge_file_XXXXXX");
		return mkstemp(path);
	} else {
		return create_file(path, !symlink ? ce->ce_mode : 0666);
	}
}

static int fstat_output(int fd, const struct checkout *state, struct stat *st)
{
	/* use fstat() only when path == ce->name */
	if (fstat_is_reliable() &&
	    state->refresh_cache && !state->base_dir_len) {
		fstat(fd, st);
		return 1;
	}
	return 0;
}

static int streaming_write_entry(struct cache_entry *ce, char *path,
				 struct stream_filter *filter,
				 const struct checkout *state, int to_tempfile,
				 int *fstat_done, struct stat *statbuf)
{
	struct git_istream *st;
	enum object_type type;
	unsigned long sz;
	int result = -1;
	ssize_t kept = 0;
	int fd = -1;

	st = open_istream(ce->sha1, &type, &sz, filter);
	if (!st)
		return -1;
	if (type != OBJ_BLOB)
		goto close_and_exit;

	fd = open_output_fd(path, ce, to_tempfile);
	if (fd < 0)
		goto close_and_exit;

	for (;;) {
		char buf[1024 * 16];
		ssize_t wrote, holeto;
		ssize_t readlen = read_istream(st, buf, sizeof(buf));

		if (!readlen)
			break;
		if (sizeof(buf) == readlen) {
			for (holeto = 0; holeto < readlen; holeto++)
				if (buf[holeto])
					break;
			if (readlen == holeto) {
				kept += holeto;
				continue;
			}
		}

		if (kept && lseek(fd, kept, SEEK_CUR) == (off_t) -1)
			goto close_and_exit;
		else
			kept = 0;
		wrote = write_in_full(fd, buf, readlen);

		if (wrote != readlen)
			goto close_and_exit;
	}
	if (kept && (lseek(fd, kept - 1, SEEK_CUR) == (off_t) -1 ||
		     write(fd, "", 1) != 1))
		goto close_and_exit;
	*fstat_done = fstat_output(fd, state, statbuf);

close_and_exit:
	close_istream(st);
	if (0 <= fd)
		result = close(fd);
	if (result && 0 <= fd)
		unlink(path);
	return result;
}

static int write_entry(struct cache_entry *ce, char *path, const struct checkout *state, int to_tempfile)
{
	unsigned int ce_mode_s_ifmt = ce->ce_mode & S_IFMT;
@@ -187,15 +101,6 @@ static int write_entry(struct cache_entry *ce, char *path, const struct checkout
	size_t wrote, newsize = 0;
	struct stat st;

	if (ce_mode_s_ifmt == S_IFREG) {
		struct stream_filter *filter = get_stream_filter(path, ce->sha1);
		if (filter &&
		    !streaming_write_entry(ce, path, filter,
					   state, to_tempfile,
					   &fstat_done, &st))
			goto finish;
	}

	switch (ce_mode_s_ifmt) {
	case S_IFREG:
	case S_IFLNK:
@@ -223,7 +128,17 @@ static int write_entry(struct cache_entry *ce, char *path, const struct checkout
		size = newsize;
	}

	fd = open_output_fd(path, ce, to_tempfile);
	if (to_tempfile) {
		if (ce_mode_s_ifmt == S_IFREG)
			strcpy(path, ".merge_file_XXXXXX");
		else
			strcpy(path, ".merge_link_XXXXXX");
		fd = mkstemp(path);
	} else if (ce_mode_s_ifmt == S_IFREG) {
		fd = create_file(path, ce->ce_mode);
	} else {
		fd = create_file(path, 0666);
	}
	if (fd < 0) {
		free(new);
		return error("unable to create file %s (%s)",
@@ -231,8 +146,12 @@ static int write_entry(struct cache_entry *ce, char *path, const struct checkout
	}

	wrote = write_in_full(fd, new, size);
	if (!to_tempfile)
		fstat_done = fstat_output(fd, state, &st);
	/* use fstat() only when path == ce->name */
	if (fstat_is_reliable() &&
	    state->refresh_cache && !to_tempfile && !state->base_dir_len) {
		fstat(fd, &st);
		fstat_done = 1;
	}
	close(fd);
	free(new);
	if (wrote != size)
@@ -248,7 +167,6 @@ static int write_entry(struct cache_entry *ce, char *path, const struct checkout
	return error("unknown file mode for %s in index", path);
}

finish:
	if (state->refresh_cache) {
		if (!fstat_done)
			lstat(ce->name, &st);
environment.c

@@ -8,7 +8,6 @@
 * are.
 */
#include "cache.h"
#include "refs.h"

char git_default_email[MAX_GITNAME];
char git_default_name[MAX_GITNAME];
@@ -37,7 +36,6 @@ size_t packed_git_window_size = DEFAULT_PACKED_GIT_WINDOW_SIZE;
size_t packed_git_limit = DEFAULT_PACKED_GIT_LIMIT;
size_t delta_base_cache_limit = 16 * 1024 * 1024;
unsigned long big_file_threshold = 512 * 1024 * 1024;
const char *log_pack_access;
const char *pager_program;
int pager_use_color = 1;
const char *editor_program;
@@ -67,9 +65,6 @@ int core_preload_index = 0;
char *git_work_tree_cfg;
static char *work_tree;

static const char *namespace;
static size_t namespace_len;

static const char *git_dir;
static char *git_object_dir, *git_index_file, *git_graft_file;

@@ -91,33 +86,12 @@ const char * const local_repo_env[LOCAL_REPO_ENV_SIZE + 1] = {
	NULL
};

static char *expand_namespace(const char *raw_namespace)
{
	struct strbuf buf = STRBUF_INIT;
	struct strbuf **components, **c;

	if (!raw_namespace || !*raw_namespace)
		return xstrdup("");

	strbuf_addstr(&buf, raw_namespace);
	components = strbuf_split(&buf, '/');
	strbuf_reset(&buf);
	for (c = components; *c; c++)
		if (strcmp((*c)->buf, "/") != 0)
			strbuf_addf(&buf, "refs/namespaces/%s", (*c)->buf);
	strbuf_list_free(components);
	if (check_ref_format(buf.buf) != CHECK_REF_FORMAT_OK)
		die("bad git namespace path \"%s\"", raw_namespace);
	strbuf_addch(&buf, '/');
	return strbuf_detach(&buf, NULL);
}

static void setup_git_env(void)
{
	git_dir = getenv(GIT_DIR_ENVIRONMENT);
	git_dir = git_dir ? xstrdup(git_dir) : NULL;
	if (!git_dir) {
		git_dir = read_gitfile_gently(DEFAULT_GIT_DIR_ENVIRONMENT);
		git_dir = read_gitfile(DEFAULT_GIT_DIR_ENVIRONMENT);
		git_dir = git_dir ? xstrdup(git_dir) : NULL;
	}
	if (!git_dir)
@@ -137,8 +111,6 @@ static void setup_git_env(void)
	git_graft_file = git_pathdup("info/grafts");
	if (getenv(NO_REPLACE_OBJECTS_ENVIRONMENT))
		read_replace_refs = 0;
	namespace = expand_namespace(getenv(GIT_NAMESPACE_ENVIRONMENT));
	namespace_len = strlen(namespace);
}

int is_bare_repository(void)
@@ -159,20 +131,6 @@ const char *get_git_dir(void)
	return git_dir;
}

const char *get_git_namespace(void)
{
	if (!namespace)
		setup_git_env();
	return namespace;
}

const char *strip_namespace(const char *namespaced_ref)
{
	if (prefixcmp(namespaced_ref, get_git_namespace()) != 0)
		return NULL;
	return namespaced_ref + namespace_len;
}

static int git_work_tree_initialized;

/*
fast-import.c

@@ -304,7 +304,6 @@ static unsigned int atom_cnt;
static struct atom_str **atom_table;

/* The .pack file being generated */
static struct pack_idx_option pack_idx_opts;
static unsigned int pack_id;
static struct sha1file *pack_file;
static struct packed_git *pack_data;
@@ -355,7 +354,6 @@ static unsigned int cmd_save = 100;
static uintmax_t next_mark;
static struct strbuf new_data = STRBUF_INIT;
static int seen_data_command;
static int require_explicit_termination;

/* Signal handling */
static volatile sig_atomic_t checkpoint_requested;
@@ -898,7 +896,7 @@ static const char *create_index(void)
	if (c != last)
		die("internal consistency error creating the index");

	tmpfile = write_idx_file(NULL, idx, object_count, &pack_idx_opts, pack_data->sha1);
	tmpfile = write_idx_file(NULL, idx, object_count, pack_data->sha1);
	free(idx);
	return tmpfile;
}
@@ -3141,8 +3139,6 @@ static int parse_one_feature(const char *feature, int from_stream)
		relative_marks_paths = 1;
	} else if (!strcmp(feature, "no-relative-marks")) {
		relative_marks_paths = 0;
	} else if (!strcmp(feature, "done")) {
		require_explicit_termination = 1;
	} else if (!strcmp(feature, "force")) {
		force_update = 1;
	} else if (!strcmp(feature, "notes") || !strcmp(feature, "ls")) {
@@ -3199,10 +3195,10 @@ static int git_pack_config(const char *k, const char *v, void *cb)
		return 0;
	}
	if (!strcmp(k, "pack.indexversion")) {
		pack_idx_opts.version = git_config_int(k, v);
		if (pack_idx_opts.version > 2)
		pack_idx_default_version = git_config_int(k, v);
		if (pack_idx_default_version > 2)
			die("bad pack.indexversion=%"PRIu32,
			    pack_idx_opts.version);
			    pack_idx_default_version);
		return 0;
	}
	if (!strcmp(k, "pack.packsizelimit")) {
@@ -3256,7 +3252,6 @@ int main(int argc, const char **argv)
		usage(fast_import_usage);

	setup_git_directory();
	reset_pack_idx_option(&pack_idx_opts);
	git_config(git_pack_config, NULL);
	if (!pack_compression_seen && core_compression_seen)
		pack_compression_level = core_compression_level;
@@ -3293,8 +3288,6 @@ int main(int argc, const char **argv)
		parse_reset_branch();
	else if (!strcmp("checkpoint", command_buf.buf))
		parse_checkpoint();
	else if (!strcmp("done", command_buf.buf))
		break;
	else if (!prefixcmp(command_buf.buf, "progress "))
		parse_progress();
	else if (!prefixcmp(command_buf.buf, "feature "))
@@ -3314,9 +3307,6 @@ int main(int argc, const char **argv)
	if (!seen_data_command)
		parse_argv();

	if (require_explicit_termination && feof(stdin))
		die("stream ends early");

	end_packfile();

	dump_branches();
generate-cmdlist.sh

@@ -15,8 +15,8 @@ do
	sed -n '
	/^NAME/,/git-'"$cmd"'/H
	${
		x
		s/.*git-'"$cmd"' - \(.*\)/ {"'"$cmd"'", "\1"},/
		x
		s/.*git-'"$cmd"' - \(.*\)/ {"'"$cmd"'", "\1"},/
		p
	}' "Documentation/git-$cmd.txt"
done
git-am.sh
@@ -22,7 +22,6 @@ whitespace= pass it through git-apply
ignore-space-change pass it through git-apply
ignore-whitespace pass it through git-apply
directory= pass it through git-apply
exclude= pass it through git-apply
C= pass it through git-apply
p= pass it through git-apply
patch-format= format the patch(es) are in
@@ -38,14 +37,13 @@ rerere-autoupdate update the index with reused conflict resolution if possible
rebasing* (internal use for git-rebase)"

. git-sh-setup
. git-sh-i18n
prefix=$(git rev-parse --show-prefix)
set_reflog_action am
require_work_tree
cd_to_toplevel

git var GIT_COMMITTER_IDENT >/dev/null ||
	die "$(gettext "You need to set your committer info first")"
	die "You need to set your committer info first"

if git rev-parse --verify -q HEAD >/dev/null
then
@@ -90,8 +88,8 @@ safe_to_abort () {
	then
		return 0
	fi
	gettextln "You seem to have moved HEAD since the last 'am' failure.
Not rewinding to ORIG_HEAD" >&2
	echo >&2 "You seem to have moved HEAD since the last 'am' failure."
	echo >&2 "Not rewinding to ORIG_HEAD"
	return 1
}

@@ -100,9 +98,9 @@ stop_here_user_resolve () {
		printf '%s\n' "$resolvemsg"
		stop_here $1
	fi
	eval_gettextln "When you have resolved this problem run \"\$cmdline --resolved\".
If you would prefer to skip this patch, instead run \"\$cmdline --skip\".
To restore the original branch and stop patching run \"\$cmdline --abort\"."
	echo "When you have resolved this problem run \"$cmdline --resolved\"."
	echo "If you would prefer to skip this patch, instead run \"$cmdline --skip\"."
	echo "To restore the original branch and stop patching run \"$cmdline --abort\"."

	stop_here $1
}
@@ -116,7 +114,7 @@ go_next () {

cannot_fallback () {
	echo "$1"
	gettextln "Cannot fall back to three-way merge."
	echo "Cannot fall back to three-way merge."
	exit 1
}

@@ -131,7 +129,7 @@ fall_back_3way () {
	    "$dotest/patch" &&
	GIT_INDEX_FILE="$dotest/patch-merge-tmp-index" \
	git write-tree >"$dotest/patch-merge-base+" ||
	cannot_fallback "$(gettext "Repository lacks necessary blobs to fall back on 3-way merge.")"
	cannot_fallback "Repository lacks necessary blobs to fall back on 3-way merge."

	say Using index info to reconstruct a base tree...
	if GIT_INDEX_FILE="$dotest/patch-merge-tmp-index" \
@@ -140,8 +138,8 @@ fall_back_3way () {
		mv "$dotest/patch-merge-base+" "$dotest/patch-merge-base"
		mv "$dotest/patch-merge-tmp-index" "$dotest/patch-merge-index"
	else
		cannot_fallback "$(gettext "Did you hand edit your patch?
It does not apply to blobs recorded in its index.")"
		cannot_fallback "Did you hand edit your patch?
It does not apply to blobs recorded in its index."
	fi

	test -f "$dotest/patch-merge-index" &&
@@ -149,7 +147,7 @@ It does not apply to blobs recorded in its index.")"
	orig_tree=$(cat "$dotest/patch-merge-base") &&
	rm -fr "$dotest"/patch-merge-* || exit 1

	say "$(gettext "Falling back to patching base and 3-way merge...")"
	say Falling back to patching base and 3-way merge...

	# This is not so wrong.  Depending on which base we picked,
	# orig_tree may be wildly different from ours, but his_tree
@@ -194,15 +192,10 @@ check_patch_format () {
		return 0
	fi

	# otherwise, check the first few non-blank lines of the first
	# patch to try to detect its format
	# otherwise, check the first few lines of the first patch to try
	# to detect its format
	{
		# Start from first line containing non-whitespace
		l1=
		while test -z "$l1"
		do
			read l1
		done
		read l1
		read l2
		read l3
		case "$l1" in
@@ -261,7 +254,7 @@ split_patches () {
	stgit-series)
		if test $# -ne 1
		then
			clean_abort "$(gettext "Only one StGIT patch series can be applied at once")"
			clean_abort "Only one StGIT patch series can be applied at once"
		fi
		series_dir=`dirname "$1"`
		series_file="$1"
@@ -312,10 +305,11 @@ split_patches () {
		msgnum=
		;;
	*)
		if test -n "$parse_patch" ; then
			clean_abort "$(eval_gettext "Patch format \$patch_format is not supported.")"
		if test -n "$patch_format"
		then
			clean_abort "Patch format $patch_format is not supported."
		else
			clean_abort "$(gettext "Patch format detection failed.")"
			clean_abort "Patch format detection failed."
		fi
		;;
	esac
@@ -365,11 +359,11 @@ do
	--rebasing)
		rebasing=t threeway=t keep=t scissors=f no_inbody_headers=t ;;
	-d|--dotest)
		die "$(gettext "-d option is no longer supported.  Do not use.")"
		die "-d option is no longer supported.  Do not use."
		;;
	--resolvemsg)
		shift; resolvemsg=$1 ;;
	--whitespace|--directory|--exclude)
	--whitespace|--directory)
		git_apply_opt="$git_apply_opt $(sq "$1=$2")"; shift ;;
	-C|-p)
		git_apply_opt="$git_apply_opt $(sq "$1$2")"; shift ;;
@@ -428,12 +422,12 @@ then
		false
		;;
	esac ||
	die "$(eval_gettext "previous rebase directory \$dotest still exists but mbox given.")"
	die "previous rebase directory $dotest still exists but mbox given."
	resume=yes

	case "$skip,$abort" in
	t,t)
		die "$(gettext "Please make up your mind. --skip or --abort?")"
		die "Please make up your mind. --skip or --abort?"
		;;
	t,)
		git rerere clear
@@ -460,7 +454,7 @@ then
else
	# Make sure we are not given --skip, --resolved, nor --abort
	test "$skip$resolved$abort" = "" ||
		die "$(gettext "Resolve operation not in progress, we are not resuming.")"
		die "Resolve operation not in progress, we are not resuming."

	# Start afresh.
	mkdir -p "$dotest" || exit
@@ -527,7 +521,7 @@ case "$resolved" in
	if test "$files"
	then
		test -n "$HAS_HEAD" && : >"$dotest/dirtyindex"
		die "$(eval_gettext "Dirty index: cannot apply patches (dirty: \$files)")"
		die "Dirty index: cannot apply patches (dirty: $files)"
	fi
esac

@@ -616,9 +610,9 @@ do
		go_next && continue

	test -s "$dotest/patch" || {
		eval_gettextln "Patch is empty.  Was it split wrong?
If you would prefer to skip this patch, instead run \"\$cmdline --skip\".
To restore the original branch and stop patching run \"\$cmdline --abort\"."
		echo "Patch is empty.  Was it split wrong?"
		echo "If you would prefer to skip this patch, instead run \"$cmdline --skip\"."
		echo "To restore the original branch and stop patching run \"$cmdline --abort\"."
		stop_here $this
	}
	rm -f "$dotest/original-commit" "$dotest/author-script"
@@ -653,7 +647,7 @@ To restore the original branch and stop patching run \"\$cmdline --abort\"."

		if test -z "$GIT_AUTHOR_EMAIL"
		then
			gettextln "Patch does not have a valid e-mail address."
			echo "Patch does not have a valid e-mail address."
			stop_here $this
		fi

@@ -700,18 +694,15 @@ To restore the original branch and stop patching run \"\$cmdline --abort\"."
	if test "$interactive" = t
	then
	    test -t 0 ||
	    die "$(gettext "cannot be interactive without stdin connected to a terminal.")"
	    die "cannot be interactive without stdin connected to a terminal."
	    action=again
	    while test "$action" = again
	    do
		gettextln "Commit Body is:"
		echo "Commit Body is:"
		echo "--------------------------"
		cat "$dotest/final-commit"
		echo "--------------------------"
		# TRANSLATORS: Make sure to include [y], [n], [e], [v] and [a]
		# in your translation. The program will only accept English
		# input at this point.
		gettext "Apply? [y]es/[n]o/[e]dit/[v]iew patch/[a]ccept all "
		printf "Apply? [y]es/[n]o/[e]dit/[v]iew patch/[a]ccept all "
		read reply
		case "$reply" in
		[yY]*) action=yes ;;
@@ -747,7 +738,7 @@ To restore the original branch and stop patching run \"\$cmdline --abort\"."
		stop_here $this
	fi

	say "$(eval_gettext "Applying: \$FIRSTLINE")"
	say "Applying: $FIRSTLINE"

	case "$resolved" in
	'')
@@ -768,16 +759,16 @@ To restore the original branch and stop patching run \"\$cmdline --abort\"."
		# working tree.
		resolved=
		git diff-index --quiet --cached HEAD -- && {
			gettextln "No changes - did you forget to use 'git add'?
If there is nothing left to stage, chances are that something else
already introduced the same changes; you might want to skip this patch."
			echo "No changes - did you forget to use 'git add'?"
			echo "If there is nothing left to stage, chances are that something else"
			echo "already introduced the same changes; you might want to skip this patch."
			stop_here_user_resolve $this
		}
		unmerged=$(git ls-files -u)
		if test -n "$unmerged"
		then
			gettextln "You still have unmerged paths in your index
did you forget to use 'git add'?"
			echo "You still have unmerged paths in your index"
			echo "did you forget to use 'git add'?"
			stop_here_user_resolve $this
		fi
		apply_status=0
@@ -792,7 +783,7 @@ did you forget to use 'git add'?"
		# Applying the patch to an earlier tree and merging the
		# result may have produced the same tree as ours.
		git diff-index --quiet --cached HEAD -- && {
			say "$(gettext "No changes -- Patch already applied.")"
			say No changes -- Patch already applied.
			go_next
			continue
		}
@@ -802,7 +793,7 @@ did you forget to use 'git add'?"
	fi
	if test $apply_status != 0
	then
		eval_gettextln 'Patch failed at $msgnum $FIRSTLINE'
		printf 'Patch failed at %s %s\n' "$msgnum" "$FIRSTLINE"
		stop_here_user_resolve $this
	fi

@@ -818,7 +809,7 @@ did you forget to use 'git add'?"
		GIT_AUTHOR_DATE=
	fi
	parent=$(git rev-parse --verify -q HEAD) ||
	say >&2 "$(gettext "applying to an empty history")"
	say >&2 "applying to an empty history"

	if test -n "$committer_date_is_author_date"
	then
git-bisect.sh
@@ -2,59 +2,43 @@

USAGE='[help|start|bad|good|skip|next|reset|visualize|replay|log|run]'
LONG_USAGE='git bisect help
	print this long help message.
git bisect start [--no-checkout] [<bad> [<good>...]] [--] [<pathspec>...]
	reset bisect state and start bisection.
	print this long help message.
git bisect start [<bad> [<good>...]] [--] [<pathspec>...]
	reset bisect state and start bisection.
git bisect bad [<rev>]
	mark <rev> a known-bad revision.
	mark <rev> a known-bad revision.
git bisect good [<rev>...]
	mark <rev>... known-good revisions.
	mark <rev>... known-good revisions.
git bisect skip [(<rev>|<range>)...]
	mark <rev>... untestable revisions.
	mark <rev>... untestable revisions.
git bisect next
	find next bisection to test and check it out.
	find next bisection to test and check it out.
git bisect reset [<commit>]
	finish bisection search and go back to commit.
	finish bisection search and go back to commit.
git bisect visualize
	show bisect status in gitk.
	show bisect status in gitk.
git bisect replay <logfile>
	replay bisection log.
	replay bisection log.
git bisect log
	show bisect log.
	show bisect log.
git bisect run <cmd>...
	use <cmd>... to automatically bisect.
	use <cmd>... to automatically bisect.

Please use "git help bisect" to get the full man page.'

OPTIONS_SPEC=
. git-sh-setup
. git-sh-i18n
require_work_tree

_x40='[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f]'
_x40="$_x40$_x40$_x40$_x40$_x40$_x40$_x40$_x40"

bisect_head()
{
	if test -f "$GIT_DIR/BISECT_HEAD"
	then
		echo BISECT_HEAD
	else
		echo HEAD
	fi
}

bisect_autostart() {
	test -s "$GIT_DIR/BISECT_START" || {
		(
			gettext "You need to start by \"git bisect start\"" &&
			echo
		) >&2
		echo >&2 'You need to start by "git bisect start"'
		if test -t 0
		then
			# TRANSLATORS: Make sure to include [Y] and [n] in your
			# translation. The program will only accept English input
			# at this point.
			gettext "Do you want me to do it for you [Y/n]? " >&2
			echo >&2 -n 'Do you want me to do it for you [Y/n]? '
			read yesno
			case "$yesno" in
			[Nn]*)
@@ -68,56 +52,12 @@ bisect_autostart() {
}

bisect_start() {
	#
	# Check for one bad and then some good revisions.
	#
	has_double_dash=0
	for arg; do
		case "$arg" in --) has_double_dash=1; break ;; esac
	done
	orig_args=$(git rev-parse --sq-quote "$@")
	bad_seen=0
	eval=''
	if test "z$(git rev-parse --is-bare-repository)" != zfalse
	then
		mode=--no-checkout
	else
		mode=''
	fi
	while [ $# -gt 0 ]; do
		arg="$1"
		case "$arg" in
		--)
			shift
			break
			;;
		--no-checkout)
			mode=--no-checkout
			shift ;;
		--*)
			die "$(eval_gettext "unrecognised option: '\$arg'")" ;;
		*)
			rev=$(git rev-parse -q --verify "$arg^{commit}") || {
				test $has_double_dash -eq 1 &&
				die "$(eval_gettext "'\$arg' does not appear to be a valid revision")"
				break
			}
			case $bad_seen in
			0) state='bad' ; bad_seen=1 ;;
			*) state='good' ;;
			esac
			eval="$eval bisect_write '$state' '$rev' 'nolog' &&"
			shift
			;;
		esac
	done

	#
	# Verify HEAD.
	#
	head=$(GIT_DIR="$GIT_DIR" git symbolic-ref -q HEAD) ||
	head=$(GIT_DIR="$GIT_DIR" git rev-parse --verify HEAD) ||
	die "$(gettext "Bad HEAD - I need a HEAD")"
	die "Bad HEAD - I need a HEAD"

	#
	# Check if we are bisecting.
@@ -127,10 +67,7 @@ bisect_start() {
	then
		# Reset to the rev from where we started.
		start_head=$(cat "$GIT_DIR/BISECT_START")
		if test "z$mode" != "z--no-checkout"
		then
			git checkout "$start_head" --
		fi
		git checkout "$start_head" -- || exit
	else
		# Get rev from where we start.
		case "$head" in
@@ -139,11 +76,11 @@ bisect_start() {
			# cogito usage, and cogito users should understand
			# it relates to cg-seek.
			[ -s "$GIT_DIR/head-name" ] &&
				die "$(gettext "won't bisect on seeked tree")"
				die "won't bisect on seeked tree"
			start_head="${head#refs/heads/}"
			;;
		*)
			die "$(gettext "Bad HEAD - strange symbolic ref")"
			die "Bad HEAD - strange symbolic ref"
			;;
		esac
	fi
@@ -153,6 +90,39 @@ bisect_start() {
	#
	bisect_clean_state || exit

	#
	# Check for one bad and then some good revisions.
	#
	has_double_dash=0
	for arg; do
		case "$arg" in --) has_double_dash=1; break ;; esac
	done
	orig_args=$(git rev-parse --sq-quote "$@")
	bad_seen=0
	eval=''
	while [ $# -gt 0 ]; do
		arg="$1"
		case "$arg" in
		--)
			shift
			break
			;;
		*)
			rev=$(git rev-parse -q --verify "$arg^{commit}") || {
				test $has_double_dash -eq 1 &&
				die "'$arg' does not appear to be a valid revision"
				break
			}
			case $bad_seen in
			0) state='bad' ; bad_seen=1 ;;
			*) state='good' ;;
			esac
			eval="$eval bisect_write '$state' '$rev' 'nolog'; "
			shift
			;;
		esac
	done

	#
	# Change state.
	# In case of mistaken revs or checkout error, or signals received,
@@ -166,12 +136,9 @@ bisect_start() {
	#
	# Write new start state.
	#
	echo "$start_head" >"$GIT_DIR/BISECT_START" && {
		test "z$mode" != "z--no-checkout" ||
		git update-ref --no-deref BISECT_HEAD "$start_head"
	} &&
	echo "$start_head" >"$GIT_DIR/BISECT_START" &&
	git rev-parse --sq-quote "$@" >"$GIT_DIR/BISECT_NAMES" &&
	eval "$eval true" &&
	eval "$eval" &&
	echo "git bisect start$orig_args" >>"$GIT_DIR/BISECT_LOG" || exit
	#
	# Check if we can proceed to the next bisect state.
@@ -188,7 +155,7 @@ bisect_write() {
	case "$state" in
		bad)		tag="$state" ;;
		good|skip)	tag="$state"-"$rev" ;;
		*)		die "$(eval_gettext "Bad bisect_write argument: \$state")" ;;
		*)		die "Bad bisect_write argument: $state" ;;
	esac
	git update-ref "refs/bisect/$tag" "$rev" || exit
	echo "# $state: $(git show-branch $rev)" >>"$GIT_DIR/BISECT_LOG"
@@ -202,8 +169,7 @@ is_expected_rev() {

check_expected_revs() {
	for _rev in "$@"; do
		if ! is_expected_rev "$_rev"
		then
		if ! is_expected_rev "$_rev"; then
			rm -f "$GIT_DIR/BISECT_ANCESTORS_OK"
			rm -f "$GIT_DIR/BISECT_EXPECTED_REV"
			return
@@ -212,18 +178,18 @@ check_expected_revs() {
}

bisect_skip() {
	all=''
	all=''
	for arg in "$@"
	do
		case "$arg" in
		*..*)
			revs=$(git rev-list "$arg") || die "$(eval_gettext "Bad rev input: \$arg")" ;;
		*)
			revs=$(git rev-parse --sq-quote "$arg") ;;
		esac
		all="$all $revs"
	done
	eval bisect_state 'skip' $all
		case "$arg" in
		*..*)
			revs=$(git rev-list "$arg") || die "Bad rev input: $arg" ;;
		*)
			revs=$(git rev-parse --sq-quote "$arg") ;;
		esac
		all="$all $revs"
	done
	eval bisect_state 'skip' $all
}

bisect_state() {
@@ -231,10 +197,10 @@ bisect_state() {
	state=$1
	case "$#,$state" in
	0,*)
		die "$(gettext "Please call 'bisect_state' with at least one argument.")" ;;
		die "Please call 'bisect_state' with at least one argument." ;;
	1,bad|1,good|1,skip)
		rev=$(git rev-parse --verify $(bisect_head)) ||
			die "$(gettext "Bad rev input: $(bisect_head)")"
		rev=$(git rev-parse --verify HEAD) ||
			die "Bad rev input: HEAD"
		bisect_write "$state" "$rev"
		check_expected_revs "$rev" ;;
	2,bad|*,good|*,skip)
@@ -243,13 +209,13 @@ bisect_state() {
		for rev in "$@"
		do
			sha=$(git rev-parse --verify "$rev^{commit}") ||
				die "$(eval_gettext "Bad rev input: \$rev")"
				die "Bad rev input: $rev"
			eval="$eval bisect_write '$state' '$sha'; "
		done
		eval "$eval"
		check_expected_revs "$@" ;;
	*,bad)
		die "$(gettext "'git bisect bad' can take only one argument.")" ;;
		die "'git bisect bad' can take only one argument." ;;
	*)
		usage ;;
	esac
@@ -272,38 +238,25 @@ bisect_next_check() {
	t,,good)
		# have bad but not good.  we could bisect although
		# this is less optimum.
		(
			gettext "Warning: bisecting only with a bad commit." &&
			echo
		) >&2
		echo >&2 'Warning: bisecting only with a bad commit.'
		if test -t 0
		then
			# TRANSLATORS: Make sure to include [Y] and [n] in your
			# translation. The program will only accept English input
			# at this point.
			gettext "Are you sure [Y/n]? " >&2
			printf >&2 'Are you sure [Y/n]? '
			read yesno
			case "$yesno" in [Nn]*) exit 1 ;; esac
		fi
		: bisect without good...
		;;
	*)

		if test -s "$GIT_DIR/BISECT_START"
		then
			(
				gettext "You need to give me at least one good and one bad revisions.
(You can use \"git bisect bad\" and \"git bisect good\" for that.)" &&
				echo
			) >&2
		else
			(
				gettext "You need to start by \"git bisect start\".
You then need to give me at least one good and one bad revisions.
(You can use \"git bisect bad\" and \"git bisect good\" for that.)" &&
				echo
			) >&2
		fi
		THEN=''
		test -s "$GIT_DIR/BISECT_START" || {
			echo >&2 'You need to start by "git bisect start".'
			THEN='then '
		}
		echo >&2 'You '$THEN'need to give me at least one good' \
			'and one bad revisions.'
		echo >&2 '(You can use "git bisect bad" and' \
			'"git bisect good" for that.)'
		exit 1 ;;
	esac
}
@@ -318,10 +271,10 @@ bisect_next() {
	bisect_next_check good

	# Perform all bisection computation, display and checkout
	git bisect--helper --next-all $(test -f "$GIT_DIR/BISECT_HEAD" && echo --no-checkout)
	git bisect--helper --next-all
	res=$?

	# Check if we should exit because bisection is finished
# Check if we should exit because bisection is finished
|
||||
test $res -eq 10 && exit 0
|
||||
|
||||
# Check for an error in the bisection process
|
||||
@ -336,8 +289,7 @@ bisect_visualize() {
|
||||
if test $# = 0
|
||||
then
|
||||
if test -n "${DISPLAY+set}${SESSIONNAME+set}${MSYSTEM+set}${SECURITYSESSIONID+set}" &&
|
||||
type gitk >/dev/null 2>&1
|
||||
then
|
||||
type gitk >/dev/null 2>&1; then
|
||||
set gitk
|
||||
else
|
||||
set git log
|
||||
@ -355,26 +307,23 @@ bisect_visualize() {
|
||||
|
||||
bisect_reset() {
|
||||
test -s "$GIT_DIR/BISECT_START" || {
|
||||
gettext "We are not bisecting."; echo
|
||||
echo "We are not bisecting."
|
||||
return
|
||||
}
|
||||
case "$#" in
|
||||
0) branch=$(cat "$GIT_DIR/BISECT_START") ;;
|
||||
1) git rev-parse --quiet --verify "$1^{commit}" > /dev/null || {
|
||||
invalid="$1"
|
||||
die "$(eval_gettext "'\$invalid' is not a valid commit")"
|
||||
}
|
||||
branch="$1" ;;
|
||||
1) git rev-parse --quiet --verify "$1^{commit}" > /dev/null ||
|
||||
die "'$1' is not a valid commit"
|
||||
branch="$1" ;;
|
||||
*)
|
||||
usage ;;
|
||||
usage ;;
|
||||
esac
|
||||
|
||||
if ! test -f "$GIT_DIR/BISECT_HEAD" && ! git checkout "$branch" --
|
||||
then
|
||||
die "$(eval_gettext "Could not check out original HEAD '\$branch'.
|
||||
Try 'git bisect reset <commit>'.")"
|
||||
if git checkout "$branch" -- ; then
|
||||
bisect_clean_state
|
||||
else
|
||||
die "Could not check out original HEAD '$branch'." \
|
||||
"Try 'git bisect reset <commit>'."
|
||||
fi
|
||||
bisect_clean_state
|
||||
}
|
||||
|
||||
bisect_clean_state() {
|
||||
@ -391,21 +340,18 @@ bisect_clean_state() {
|
||||
rm -f "$GIT_DIR/BISECT_RUN" &&
|
||||
# Cleanup head-name if it got left by an old version of git-bisect
|
||||
rm -f "$GIT_DIR/head-name" &&
|
||||
git update-ref -d --no-deref BISECT_HEAD &&
|
||||
# clean up BISECT_START last
|
||||
|
||||
rm -f "$GIT_DIR/BISECT_START"
|
||||
}
|
||||
|
||||
bisect_replay () {
|
||||
file="$1"
|
||||
test "$#" -eq 1 || die "$(gettext "No logfile given")"
|
||||
test -r "$file" || die "$(eval_gettext "cannot read \$file for replaying")"
|
||||
test "$#" -eq 1 || die "No logfile given"
|
||||
test -r "$1" || die "cannot read $1 for replaying"
|
||||
bisect_reset
|
||||
while read git bisect command rev
|
||||
do
|
||||
test "$git $bisect" = "git bisect" -o "$git" = "git-bisect" || continue
|
||||
if test "$git" = "git-bisect"
|
||||
then
|
||||
if test "$git" = "git-bisect"; then
|
||||
rev="$command"
|
||||
command="$bisect"
|
||||
fi
|
||||
@ -416,114 +362,98 @@ bisect_replay () {
|
||||
good|bad|skip)
|
||||
bisect_write "$command" "$rev" ;;
|
||||
*)
|
||||
die "$(gettext "?? what are you talking about?")" ;;
|
||||
die "?? what are you talking about?" ;;
|
||||
esac
|
||||
done <"$file"
|
||||
done <"$1"
|
||||
bisect_auto_next
|
||||
}
|
||||
|
||||
bisect_run () {
|
||||
bisect_next_check fail
|
||||
bisect_next_check fail
|
||||
|
||||
while true
|
||||
do
|
||||
command="$@"
|
||||
eval_gettext "running \$command"; echo
|
||||
"$@"
|
||||
res=$?
|
||||
while true
|
||||
do
|
||||
echo "running $@"
|
||||
"$@"
|
||||
res=$?
|
||||
|
||||
# Check for really bad run error.
|
||||
if [ $res -lt 0 -o $res -ge 128 ]
|
||||
then
|
||||
(
|
||||
eval_gettext "bisect run failed:
|
||||
exit code \$res from '\$command' is < 0 or >= 128" &&
|
||||
echo
|
||||
) >&2
|
||||
exit $res
|
||||
fi
|
||||
# Check for really bad run error.
|
||||
if [ $res -lt 0 -o $res -ge 128 ]; then
|
||||
echo >&2 "bisect run failed:"
|
||||
echo >&2 "exit code $res from '$@' is < 0 or >= 128"
|
||||
exit $res
|
||||
fi
|
||||
|
||||
# Find current state depending on run success or failure.
|
||||
# A special exit code of 125 means cannot test.
|
||||
if [ $res -eq 125 ]
|
||||
then
|
||||
state='skip'
|
||||
elif [ $res -gt 0 ]
|
||||
then
|
||||
state='bad'
|
||||
else
|
||||
state='good'
|
||||
fi
|
||||
# Find current state depending on run success or failure.
|
||||
# A special exit code of 125 means cannot test.
|
||||
if [ $res -eq 125 ]; then
|
||||
state='skip'
|
||||
elif [ $res -gt 0 ]; then
|
||||
state='bad'
|
||||
else
|
||||
state='good'
|
||||
fi
|
||||
|
||||
# We have to use a subshell because "bisect_state" can exit.
|
||||
( bisect_state $state > "$GIT_DIR/BISECT_RUN" )
|
||||
res=$?
|
||||
# We have to use a subshell because "bisect_state" can exit.
|
||||
( bisect_state $state > "$GIT_DIR/BISECT_RUN" )
|
||||
res=$?
|
||||
|
||||
cat "$GIT_DIR/BISECT_RUN"
|
||||
cat "$GIT_DIR/BISECT_RUN"
|
||||
|
||||
if sane_grep "first bad commit could be any of" "$GIT_DIR/BISECT_RUN" \
|
||||
> /dev/null
|
||||
then
|
||||
(
|
||||
gettext "bisect run cannot continue any more" &&
|
||||
echo
|
||||
) >&2
|
||||
exit $res
|
||||
fi
|
||||
if sane_grep "first bad commit could be any of" "$GIT_DIR/BISECT_RUN" \
|
||||
> /dev/null; then
|
||||
echo >&2 "bisect run cannot continue any more"
|
||||
exit $res
|
||||
fi
|
||||
|
||||
if [ $res -ne 0 ]
|
||||
then
|
||||
(
|
||||
eval_gettext "bisect run failed:
|
||||
'bisect_state \$state' exited with error code \$res" &&
|
||||
echo
|
||||
) >&2
|
||||
exit $res
|
||||
fi
|
||||
if [ $res -ne 0 ]; then
|
||||
echo >&2 "bisect run failed:"
|
||||
echo >&2 "'bisect_state $state' exited with error code $res"
|
||||
exit $res
|
||||
fi
|
||||
|
||||
if sane_grep "is the first bad commit" "$GIT_DIR/BISECT_RUN" > /dev/null
|
||||
then
|
||||
gettext "bisect run success"; echo
|
||||
exit 0;
|
||||
fi
|
||||
if sane_grep "is the first bad commit" "$GIT_DIR/BISECT_RUN" > /dev/null; then
|
||||
echo "bisect run success"
|
||||
exit 0;
|
||||
fi
|
||||
|
||||
done
|
||||
done
|
||||
}
|
||||
|
||||
bisect_log () {
|
||||
test -s "$GIT_DIR/BISECT_LOG" || die "$(gettext "We are not bisecting.")"
|
||||
test -s "$GIT_DIR/BISECT_LOG" || die "We are not bisecting."
|
||||
cat "$GIT_DIR/BISECT_LOG"
|
||||
}
|
||||
|
||||
case "$#" in
|
||||
0)
|
||||
usage ;;
|
||||
usage ;;
|
||||
*)
|
||||
cmd="$1"
|
||||
shift
|
||||
case "$cmd" in
|
||||
help)
|
||||
git bisect -h ;;
|
||||
start)
|
||||
bisect_start "$@" ;;
|
||||
bad|good)
|
||||
bisect_state "$cmd" "$@" ;;
|
||||
skip)
|
||||
bisect_skip "$@" ;;
|
||||
next)
|
||||
# Not sure we want "next" at the UI level anymore.
|
||||
bisect_next "$@" ;;
|
||||
visualize|view)
|
||||
bisect_visualize "$@" ;;
|
||||
reset)
|
||||
bisect_reset "$@" ;;
|
||||
replay)
|
||||
bisect_replay "$@" ;;
|
||||
log)
|
||||
bisect_log ;;
|
||||
run)
|
||||
bisect_run "$@" ;;
|
||||
*)
|
||||
usage ;;
|
||||
esac
|
||||
cmd="$1"
|
||||
shift
|
||||
case "$cmd" in
|
||||
help)
|
||||
git bisect -h ;;
|
||||
start)
|
||||
bisect_start "$@" ;;
|
||||
bad|good)
|
||||
bisect_state "$cmd" "$@" ;;
|
||||
skip)
|
||||
bisect_skip "$@" ;;
|
||||
next)
|
||||
# Not sure we want "next" at the UI level anymore.
|
||||
bisect_next "$@" ;;
|
||||
visualize|view)
|
||||
bisect_visualize "$@" ;;
|
||||
reset)
|
||||
bisect_reset "$@" ;;
|
||||
replay)
|
||||
bisect_replay "$@" ;;
|
||||
log)
|
||||
bisect_log ;;
|
||||
run)
|
||||
bisect_run "$@" ;;
|
||||
*)
|
||||
usage ;;
|
||||
esac
|
||||
esac
|
||||
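The exit-code convention that bisect_run applies above (0 means "good", 125 means "cannot test, skip this revision", any other code from 1 to 127 means "bad", and anything below 0 or at or above 128 aborts the whole run) can be sketched as a small standalone helper. This is an illustration only, assuming the same classification logic; the function name is made up and is not part of git-bisect.sh:

```shell
# Map a test command's exit code to a bisect state, mirroring the
# classification in bisect_run above. Hypothetical helper, for illustration.
classify_bisect_run () {
	res=$1
	# Codes < 0 or >= 128 (e.g. death by signal) abort the run.
	if [ "$res" -lt 0 ] || [ "$res" -ge 128 ]; then
		echo abort
	elif [ "$res" -eq 125 ]; then
		# Special code: this revision cannot be tested.
		echo skip
	elif [ "$res" -gt 0 ]; then
		echo bad
	else
		echo good
	fi
}

classify_bisect_run 0     # prints: good
classify_bisect_run 1     # prints: bad
classify_bisect_run 125   # prints: skip
classify_bisect_run 139   # prints: abort (128 + SIGSEGV)
```

Reserving 125 this way lets a `git bisect run` test script say "build broken here, try another commit" without mislabeling the revision as bad.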

@@ -215,14 +215,10 @@ extern char *gitbasename(char *);
#define is_dir_sep(c) ((c) == '/')
#endif

#ifndef find_last_dir_sep
#define find_last_dir_sep(path) strrchr(path, '/')
#endif

#if __HP_cc >= 61000
#define NORETURN __attribute__((noreturn))
#define NORETURN_PTR
#elif defined(__GNUC__) && !defined(NO_NORETURN)
#elif defined(__GNUC__)
#define NORETURN __attribute__((__noreturn__))
#define NORETURN_PTR __attribute__((__noreturn__))
#elif defined(_MSC_VER)

@@ -13,8 +13,7 @@ TOOL_MODE=diff
should_prompt () {
prompt_merge=$(git config --bool mergetool.prompt || echo true)
prompt=$(git config --bool difftool.prompt || echo $prompt_merge)
if test "$prompt" = true
then
if test "$prompt" = true; then
test -z "$GIT_DIFFTOOL_NO_PROMPT"
else
test -n "$GIT_DIFFTOOL_PROMPT"
@@ -38,11 +37,9 @@ launch_merge_tool () {

# $LOCAL and $REMOTE are temporary files so prompt
# the user with the real $MERGED name before launching $merge_tool.
if should_prompt
then
if should_prompt; then
printf "\nViewing: '$MERGED'\n"
if use_ext_cmd
then
if use_ext_cmd; then
printf "Hit return to launch '%s': " \
"$GIT_DIFFTOOL_EXTCMD"
else
@@ -51,8 +48,7 @@ launch_merge_tool () {
read ans
fi

if use_ext_cmd
then
if use_ext_cmd; then
export BASE
eval $GIT_DIFFTOOL_EXTCMD '"$LOCAL"' '"$REMOTE"'
else
@@ -60,10 +56,8 @@ launch_merge_tool () {
fi
}

if ! use_ext_cmd
then
if test -n "$GIT_DIFF_TOOL"
then
if ! use_ext_cmd; then
if test -n "$GIT_DIFF_TOOL"; then
merge_tool="$GIT_DIFF_TOOL"
else
merge_tool="$(get_merge_tool)" || exit

@@ -12,7 +12,7 @@

functions=$(cat << \EOF
warn () {
echo "$*" >&2
echo "$*" >&2
}

map()
@@ -98,11 +98,11 @@ set_ident () {
}

USAGE="[--env-filter <command>] [--tree-filter <command>]
[--index-filter <command>] [--parent-filter <command>]
[--msg-filter <command>] [--commit-filter <command>]
[--tag-name-filter <command>] [--subdirectory-filter <directory>]
[--original <namespace>] [-d <directory>] [-f | --force]
[<rev-list options>...]"
[--index-filter <command>] [--parent-filter <command>]
[--msg-filter <command>] [--commit-filter <command>]
[--tag-name-filter <command>] [--subdirectory-filter <directory>]
[--original <namespace>] [-d <directory>] [-f | --force]
[<rev-list options>...]"

OPTIONS_SPEC=
. git-sh-setup

@@ -27,7 +27,6 @@ httpd="$(git config --get instaweb.httpd)"
root="$(git config --get instaweb.gitwebdir)"
port=$(git config --get instaweb.port)
module_path="$(git config --get instaweb.modulepath)"
action="browse"

conf="$GIT_DIR/gitweb/httpd.conf"

@@ -99,18 +98,12 @@ start_httpd () {

# here $httpd should have a meaningful value
resolve_full_httpd
mkdir -p "$fqgitdir/gitweb/$httpd_only"
conf="$fqgitdir/gitweb/$httpd_only.conf"

# generate correct config file if it doesn't exist
test -f "$conf" || configure_httpd
test -f "$fqgitdir/gitweb/gitweb_config.perl" || gitweb_conf

# don't quote $full_httpd, there can be arguments to it (-f)
case "$httpd" in
*mongoose*|*plackup*)
#These servers don't have a daemon mode so we'll have to fork it
$full_httpd "$conf" &
$full_httpd "$fqgitdir/gitweb/httpd.conf" &
#Save the pid before doing anything else (we'll print it later)
pid=$!

@@ -124,7 +117,7 @@ $pid
EOF
;;
*)
$full_httpd "$conf"
$full_httpd "$fqgitdir/gitweb/httpd.conf"
if test $? != 0; then
echo "Could not execute http daemon $httpd."
exit 1
@@ -155,13 +148,17 @@ while test $# != 0
do
case "$1" in
--stop|stop)
action="stop"
stop_httpd
exit 0
;;
--start|start)
action="start"
start_httpd
exit 0
;;
--restart|restart)
action="restart"
stop_httpd
start_httpd
exit 0
;;
-l|--local)
local=true
@@ -590,53 +587,32 @@ our \$projects_list = \$projectroot;
EOF
}

configure_httpd() {
case "$httpd" in
*lighttpd*)
lighttpd_conf
;;
*apache2*|*httpd*)
apache2_conf
;;
webrick)
webrick_conf
;;
*mongoose*)
mongoose_conf
;;
*plackup*)
plackup_conf
;;
*)
echo "Unknown httpd specified: $httpd"
exit 1
;;
esac
}

case "$action" in
stop)
stop_httpd
exit 0
;;
start)
start_httpd
exit 0
;;
restart)
stop_httpd
start_httpd
exit 0
;;
esac

gitweb_conf

resolve_full_httpd
mkdir -p "$fqgitdir/gitweb/$httpd_only"
conf="$fqgitdir/gitweb/$httpd_only.conf"

configure_httpd
case "$httpd" in
*lighttpd*)
lighttpd_conf
;;
*apache2*|*httpd*)
apache2_conf
;;
webrick)
webrick_conf
;;
*mongoose*)
mongoose_conf
;;
*plackup*)
plackup_conf
;;
*)
echo "Unknown httpd specified: $httpd"
exit 1
;;
esac

start_httpd
url=http://127.0.0.1:$port
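The configure_httpd dispatch removed in the hunk above selects a per-server config generator by pattern-matching the instaweb.httpd value. That selection can be sketched as a self-contained function; the name config_for_httpd and its echoed strings are illustrative stand-ins, not part of git-instaweb:

```shell
# Pick a config-generator name from an instaweb.httpd value, mirroring
# the case dispatch above. Hypothetical helper, for illustration.
config_for_httpd () {
	case "$1" in
	*lighttpd*) echo lighttpd_conf ;;
	*apache2*|*httpd*) echo apache2_conf ;;
	webrick) echo webrick_conf ;;
	*mongoose*) echo mongoose_conf ;;
	*plackup*) echo plackup_conf ;;
	*) echo unknown; return 1 ;;
	esac
}

config_for_httpd lighttpd      # prints: lighttpd_conf
config_for_httpd "apache2 -f"  # prints: apache2_conf
config_for_httpd webrick       # prints: webrick_conf
```

Note the ordering matters: "lighttpd" also contains the substring "httpd", so the `*lighttpd*` arm must be tested before `*httpd*`.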

@@ -9,19 +9,36 @@ merge_mode() {
}

translate_merge_tool_path () {
echo "$1"
case "$1" in
araxis)
echo compare
;;
bc3)
echo bcompare
;;
emerge)
echo emacs
;;
gvimdiff|gvimdiff2)
echo gvim
;;
vimdiff|vimdiff2)
echo vim
;;
*)
echo "$1"
;;
esac
}
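The new translate_merge_tool_path above maps a user-facing tool name to the executable that must exist on $PATH (Beyond Compare 3 is configured as bc3 but ships a bcompare binary, Emacs's emerge lives inside emacs, and so on), with unknown names passing through unchanged. A quick standalone illustration of that mapping; translate_tool is a made-up name carrying a copy of the same case logic:

```shell
# Stand-alone copy of the mapping in translate_merge_tool_path above,
# for illustration only.
translate_tool () {
	case "$1" in
	araxis) echo compare ;;
	bc3) echo bcompare ;;
	emerge) echo emacs ;;
	gvimdiff|gvimdiff2) echo gvim ;;
	vimdiff|vimdiff2) echo vim ;;
	*) echo "$1" ;;
	esac
}

translate_tool bc3       # prints: bcompare
translate_tool vimdiff2  # prints: vim
translate_tool meld      # prints: meld (unknown names pass through)
```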

check_unchanged () {
if test "$MERGED" -nt "$BACKUP"
then
if test "$MERGED" -nt "$BACKUP"; then
status=0
else
while true
do
while true; do
echo "$MERGED seems unchanged."
printf "Was the merge successful? [y/n] "
read answer
read answer || return 1
case "$answer" in
y*|Y*) status=0; break ;;
n*|N*) status=1; break ;;
@@ -30,98 +47,312 @@ check_unchanged () {
fi
}

valid_tool_config () {
if test -n "$(get_merge_tool_cmd "$1")"
then
return 0
else
return 1
fi
}

valid_tool () {
setup_tool "$1" || valid_tool_config "$1"
}

setup_tool () {
case "$1" in
vim*|gvim*)
tool=vim
araxis | bc3 | diffuse | ecmerge | emerge | gvimdiff | gvimdiff2 | \
kdiff3 | meld | opendiff | p4merge | tkdiff | vimdiff | vimdiff2 | xxdiff)
;; # happy
kompare)
if ! diff_mode; then
return 1
fi
;;
tortoisemerge)
if ! merge_mode; then
return 1
fi
;;
*)
tool="$1"
if test -z "$(get_merge_tool_cmd "$1")"; then
return 1
fi
;;
esac
mergetools="$(git --exec-path)/mergetools"

# Load the default definitions
. "$mergetools/defaults"
if ! test -f "$mergetools/$tool"
then
return 1
fi

# Load the redefined functions
. "$mergetools/$tool"

if merge_mode && ! can_merge
then
echo "error: '$tool' can not be used to resolve merges" >&2
exit 1
elif diff_mode && ! can_diff
then
echo "error: '$tool' can only be used to resolve merges" >&2
exit 1
fi
return 0
}

get_merge_tool_cmd () {
# Prints the custom command for a merge tool
merge_tool="$1"
if diff_mode
then
if test -n "$1"; then
merge_tool="$1"
else
merge_tool="$(get_merge_tool)"
fi
if diff_mode; then
echo "$(git config difftool.$merge_tool.cmd ||
git config mergetool.$merge_tool.cmd)"
git config mergetool.$merge_tool.cmd)"
else
echo "$(git config mergetool.$merge_tool.cmd)"
fi
}

# Entry point for running tools
run_merge_tool () {
# If GIT_PREFIX is empty then we cannot use it in tools
# that expect to be able to chdir() to its value.
GIT_PREFIX=${GIT_PREFIX:-.}
export GIT_PREFIX

merge_tool_path="$(get_merge_tool_path "$1")" || exit
base_present="$2"
status=0

# Bring tool-specific functions into scope
setup_tool "$1"

if merge_mode
then
merge_cmd "$1"
else
diff_cmd "$1"
fi
case "$1" in
araxis)
if merge_mode; then
touch "$BACKUP"
if $base_present; then
"$merge_tool_path" -wait -merge -3 -a1 \
"$BASE" "$LOCAL" "$REMOTE" "$MERGED" \
>/dev/null 2>&1
else
"$merge_tool_path" -wait -2 \
"$LOCAL" "$REMOTE" "$MERGED" \
>/dev/null 2>&1
fi
check_unchanged
else
"$merge_tool_path" -wait -2 "$LOCAL" "$REMOTE" \
>/dev/null 2>&1
fi
;;
bc3)
if merge_mode; then
touch "$BACKUP"
if $base_present; then
"$merge_tool_path" "$LOCAL" "$REMOTE" "$BASE" \
-mergeoutput="$MERGED"
else
"$merge_tool_path" "$LOCAL" "$REMOTE" \
-mergeoutput="$MERGED"
fi
check_unchanged
else
"$merge_tool_path" "$LOCAL" "$REMOTE"
fi
;;
diffuse)
if merge_mode; then
touch "$BACKUP"
if $base_present; then
"$merge_tool_path" \
"$LOCAL" "$MERGED" "$REMOTE" \
"$BASE" | cat
else
"$merge_tool_path" \
"$LOCAL" "$MERGED" "$REMOTE" | cat
fi
check_unchanged
else
"$merge_tool_path" "$LOCAL" "$REMOTE" | cat
fi
;;
ecmerge)
if merge_mode; then
touch "$BACKUP"
if $base_present; then
"$merge_tool_path" "$BASE" "$LOCAL" "$REMOTE" \
--default --mode=merge3 --to="$MERGED"
else
"$merge_tool_path" "$LOCAL" "$REMOTE" \
--default --mode=merge2 --to="$MERGED"
fi
check_unchanged
else
"$merge_tool_path" --default --mode=diff2 \
"$LOCAL" "$REMOTE"
fi
;;
emerge)
if merge_mode; then
if $base_present; then
"$merge_tool_path" \
-f emerge-files-with-ancestor-command \
"$LOCAL" "$REMOTE" "$BASE" \
"$(basename "$MERGED")"
else
"$merge_tool_path" \
-f emerge-files-command \
"$LOCAL" "$REMOTE" \
"$(basename "$MERGED")"
fi
status=$?
else
"$merge_tool_path" -f emerge-files-command \
"$LOCAL" "$REMOTE"
fi
;;
gvimdiff|vimdiff)
if merge_mode; then
touch "$BACKUP"
if $base_present; then
"$merge_tool_path" -f -d -c "wincmd J" \
"$MERGED" "$LOCAL" "$BASE" "$REMOTE"
else
"$merge_tool_path" -f -d -c "wincmd l" \
"$LOCAL" "$MERGED" "$REMOTE"
fi
check_unchanged
else
"$merge_tool_path" -R -f -d -c "wincmd l" \
"$LOCAL" "$REMOTE"
fi
;;
gvimdiff2|vimdiff2)
if merge_mode; then
touch "$BACKUP"
"$merge_tool_path" -f -d -c "wincmd l" \
"$LOCAL" "$MERGED" "$REMOTE"
check_unchanged
else
"$merge_tool_path" -R -f -d -c "wincmd l" \
"$LOCAL" "$REMOTE"
fi
;;
kdiff3)
if merge_mode; then
if $base_present; then
("$merge_tool_path" --auto \
--L1 "$MERGED (Base)" \
--L2 "$MERGED (Local)" \
--L3 "$MERGED (Remote)" \
-o "$MERGED" \
"$BASE" "$LOCAL" "$REMOTE" \
> /dev/null 2>&1)
else
("$merge_tool_path" --auto \
--L1 "$MERGED (Local)" \
--L2 "$MERGED (Remote)" \
-o "$MERGED" \
"$LOCAL" "$REMOTE" \
> /dev/null 2>&1)
fi
status=$?
else
("$merge_tool_path" --auto \
--L1 "$MERGED (A)" \
--L2 "$MERGED (B)" "$LOCAL" "$REMOTE" \
> /dev/null 2>&1)
fi
;;
kompare)
"$merge_tool_path" "$LOCAL" "$REMOTE"
;;
meld)
if merge_mode; then
touch "$BACKUP"
"$merge_tool_path" "$LOCAL" "$MERGED" "$REMOTE"
check_unchanged
else
"$merge_tool_path" "$LOCAL" "$REMOTE"
fi
;;
opendiff)
if merge_mode; then
touch "$BACKUP"
if $base_present; then
"$merge_tool_path" "$LOCAL" "$REMOTE" \
-ancestor "$BASE" \
-merge "$MERGED" | cat
else
"$merge_tool_path" "$LOCAL" "$REMOTE" \
-merge "$MERGED" | cat
fi
check_unchanged
else
"$merge_tool_path" "$LOCAL" "$REMOTE" | cat
fi
;;
p4merge)
if merge_mode; then
touch "$BACKUP"
$base_present || >"$BASE"
"$merge_tool_path" "$BASE" "$LOCAL" "$REMOTE" "$MERGED"
check_unchanged
else
"$merge_tool_path" "$LOCAL" "$REMOTE"
fi
;;
tkdiff)
if merge_mode; then
if $base_present; then
"$merge_tool_path" -a "$BASE" \
-o "$MERGED" "$LOCAL" "$REMOTE"
else
"$merge_tool_path" \
-o "$MERGED" "$LOCAL" "$REMOTE"
fi
status=$?
else
"$merge_tool_path" "$LOCAL" "$REMOTE"
fi
;;
tortoisemerge)
if $base_present; then
touch "$BACKUP"
"$merge_tool_path" \
-base:"$BASE" -mine:"$LOCAL" \
-theirs:"$REMOTE" -merged:"$MERGED"
check_unchanged
else
echo "TortoiseMerge cannot be used without a base" 1>&2
status=1
fi
;;
xxdiff)
if merge_mode; then
touch "$BACKUP"
if $base_present; then
"$merge_tool_path" -X --show-merged-pane \
-R 'Accel.SaveAsMerged: "Ctrl-S"' \
-R 'Accel.Search: "Ctrl+F"' \
-R 'Accel.SearchForward: "Ctrl-G"' \
--merged-file "$MERGED" \
"$LOCAL" "$BASE" "$REMOTE"
else
"$merge_tool_path" -X $extra \
-R 'Accel.SaveAsMerged: "Ctrl-S"' \
-R 'Accel.Search: "Ctrl+F"' \
-R 'Accel.SearchForward: "Ctrl-G"' \
--merged-file "$MERGED" \
"$LOCAL" "$REMOTE"
fi
check_unchanged
else
"$merge_tool_path" \
-R 'Accel.Search: "Ctrl+F"' \
-R 'Accel.SearchForward: "Ctrl-G"' \
"$LOCAL" "$REMOTE"
fi
;;
*)
merge_tool_cmd="$(get_merge_tool_cmd "$1")"
if test -z "$merge_tool_cmd"; then
if merge_mode; then
status=1
fi
break
fi
if merge_mode; then
trust_exit_code="$(git config --bool \
mergetool."$1".trustExitCode || echo false)"
if test "$trust_exit_code" = "false"; then
touch "$BACKUP"
( eval $merge_tool_cmd )
check_unchanged
else
( eval $merge_tool_cmd )
status=$?
fi
else
( eval $merge_tool_cmd )
fi
;;
esac
return $status
}

guess_merge_tool () {
if merge_mode
then
if merge_mode; then
tools="tortoisemerge"
else
tools="kompare"
fi
if test -n "$DISPLAY"
then
if test -n "$GNOME_DESKTOP_SESSION_ID"
then
if test -n "$DISPLAY"; then
if test -n "$GNOME_DESKTOP_SESSION_ID" ; then
tools="meld opendiff kdiff3 tkdiff xxdiff $tools"
else
tools="opendiff kdiff3 tkdiff xxdiff meld $tools"
@@ -142,8 +373,7 @@ guess_merge_tool () {
for i in $tools
do
merge_tool_path="$(translate_merge_tool_path "$i")"
if type "$merge_tool_path" >/dev/null 2>&1
then
if type "$merge_tool_path" > /dev/null 2>&1; then
echo "$i"
return 0
fi
@@ -156,14 +386,12 @@ guess_merge_tool () {
get_configured_merge_tool () {
# Diff mode first tries diff.tool and falls back to merge.tool.
# Merge mode only checks merge.tool
if diff_mode
then
if diff_mode; then
merge_tool=$(git config diff.tool || git config merge.tool)
else
merge_tool=$(git config merge.tool)
fi
if test -n "$merge_tool" && ! valid_tool "$merge_tool"
then
if test -n "$merge_tool" && ! valid_tool "$merge_tool"; then
echo >&2 "git config option $TOOL_MODE.tool set to unknown tool: $merge_tool"
echo >&2 "Resetting to default..."
return 1
@@ -173,28 +401,28 @@ get_configured_merge_tool () {

get_merge_tool_path () {
# A merge tool has been set, so verify that it's valid.
merge_tool="$1"
if ! valid_tool "$merge_tool"
then
if test -n "$1"; then
merge_tool="$1"
else
merge_tool="$(get_merge_tool)"
fi
if ! valid_tool "$merge_tool"; then
echo >&2 "Unknown merge tool $merge_tool"
exit 1
fi
if diff_mode
then
if diff_mode; then
merge_tool_path=$(git config difftool."$merge_tool".path ||
git config mergetool."$merge_tool".path)
git config mergetool."$merge_tool".path)
else
merge_tool_path=$(git config mergetool."$merge_tool".path)
fi
if test -z "$merge_tool_path"
then
if test -z "$merge_tool_path"; then
merge_tool_path="$(translate_merge_tool_path "$merge_tool")"
fi
if test -z "$(get_merge_tool_cmd "$merge_tool")" &&
! type "$merge_tool_path" >/dev/null 2>&1
then
! type "$merge_tool_path" > /dev/null 2>&1; then
echo >&2 "The $TOOL_MODE tool $merge_tool is not available as"\
"'$merge_tool_path'"
"'$merge_tool_path'"
exit 1
fi
echo "$merge_tool_path"
@@ -202,10 +430,9 @@ get_merge_tool_path () {

get_merge_tool () {
# Check if a merge tool has been configured
merge_tool="$(get_configured_merge_tool)"
merge_tool=$(get_configured_merge_tool)
# Try to guess an appropriate merge tool if no tool has been set.
if test -z "$merge_tool"
then
if test -z "$merge_tool"; then
merge_tool="$(guess_merge_tool)" || exit
fi
echo "$merge_tool"
Some files were not shown because too many files have changed in this diff.