Compare commits

...

51 Commits

Author SHA1 Message Date
72fdfb50f7 Use sigaction and SA_RESTART in read-tree.c; add option in Makefile.
Might as well ape the sigaction change in read-tree.c to avoid
the same potential problems.  The fprintf status output will
be overwritten in a second, so don't bother guarding it.  Do
move the fputc after disabling SIGALRM to ensure we go to the
next line, though.

Also add a NO_SA_RESTART option in the Makefile in case someone
doesn't have SA_RESTART but does restart (maybe older HP/UX?).
We want the builder to choose this specifically in case the
system both lacks SA_RESTART and does not restart stdio calls;
a compat #define in git-compat-util.h would silently allow
broken systems.

Signed-off-by: Jason Riedy <ejr@cs.berkeley.edu>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-03 23:42:25 -07:00
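
The pattern the commit above (and fb7a6531e6 below) relies on, sketched here for readers who have only used signal(): install the SIGALRM progress timer through sigaction() with SA_RESTART so that an interrupted fgets()/fread() is transparently restarted instead of failing with EINTR.  This is a minimal sketch assuming a POSIX system; the names are illustrative rather than copied from read-tree.c.

    #include <signal.h>
    #include <string.h>
    #include <sys/time.h>

    static volatile sig_atomic_t progress_update;

    static void progress_interval(int signum)
    {
            progress_update = 1;        /* polled by the progress display code */
    }

    static void setup_progress_signal(void)
    {
            struct sigaction sa;
            struct itimerval v;

            memset(&sa, 0, sizeof(sa));
            sa.sa_handler = progress_interval;
            sigemptyset(&sa.sa_mask);
            sa.sa_flags = SA_RESTART;   /* do not let SIGALRM break stdio reads */
            sigaction(SIGALRM, &sa, NULL);

            v.it_interval.tv_sec = 1;
            v.it_interval.tv_usec = 0;
            v.it_value = v.it_interval;
            setitimer(ITIMER_REAL, &v, NULL);
    }
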
687dd75c95 safe_fgets() - even more anal fgets()
This is from Linus -- the previous round forgot to clear the
error after the EINTR case.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-03 23:42:25 -07:00
da93d12b00 pack-objects: be incredibly anal about stdio semantics
This is the "letter of the law" version of using fgets() properly in the
face of incredibly broken stdio implementations.  We can work around the
Solaris breakage with SA_RESTART, but in case anybody else is ever that
stupid, here's the "safe" (read: "insanely anal") way to use fgets.

It probably goes without saying that I'm not terribly impressed by
Solaris libc.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-02 13:46:27 -07:00
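
A minimal sketch of the "insanely anal" fgets loop these two commits describe, assuming POSIX errno semantics; the wrapper name is illustrative of the safe_fgets() the pack-objects commits add:

    #include <errno.h>
    #include <stdio.h>

    static char *safe_fgets(char *buf, int size, FILE *in)
    {
            char *ret;

            for (;;) {
                    errno = 0;
                    ret = fgets(buf, size, in);
                    if (ret)
                            return ret;
                    if (feof(in))
                            return NULL;
                    if (ferror(in) && errno == EINTR) {
                            clearerr(in);   /* the step the previous round forgot */
                            continue;
                    }
                    return NULL;            /* genuine read error */
            }
    }
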
fb7a6531e6 Fix Solaris stdio signal handling stupidities
This uses sigaction() to install the SIGALRM handler with SA_RESTART, so
that Solaris stdio doesn't break completely when a signal interrupts a
read.

Thanks to Jason Riedy for confirming the silly Solaris signal behaviour.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-02 13:41:56 -07:00
473d404b53 tar-tree: file/dirmode fix.
This fixes two bugs introduced when we switched to generic tree
traversal code.

 (1) the recorded directory mode silently became 0755, not 0777

 (2) if passed a tree object (not a commit), it emitted an
     alarming error message (but proceeded anyway).

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-03-03 21:36:38 -08:00
1142038098 read-tree --aggressive: remove deleted entry from the working tree.
When both heads deleted a path, or our side deleted it while the
other side did not touch it, we did not have to update the working
tree.

However, we forgot to remove the existing working tree file when
we did not touch it and the other side deleted it.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-03-01 23:20:01 -08:00
e1a0c8b148 Merge branch 'lt/fix-apply' into maint
* lt/fix-apply:
  git-am: --whitespace=x option.
  git-apply: war on whitespace -- finishing touches.
  git-apply --whitespace=nowarn
  apply --whitespace: configuration option.
  apply: squelch excessive errors and --whitespace=error-all
  apply --whitespace fixes and enhancements.
  The war on trailing whitespace
2006-03-01 17:06:12 -08:00
9e7c73de0b git-mv: fixes for path handling
Moving a directory ending in a slash was not working as the
destination was not calculated correctly.
E.g. in the git repo,

 git-mv t/ Documentation

gave the error

 Error: destination 'Documentation' already exists

To get rid of this problem, strip trailing slashes from all arguments.
The comment in cg-mv made me curious about this issue; Pasky, thanks!
As result, the workaround in cg-mv is not needed any more.

Also, another bug was shown by cg-mv. When moving files outside of
a subdirectory, it typically calls git-mv with something like

 git-mv Documentation/git.txt Documentation/../git-mv.txt

which triggers the following error from git-update-index:

 Ignoring path Documentation/../git-mv.txt

The result is a moved file, removed from git revisioning, but not
added again. To fix this, the paths have to be normalized to not have ".."
in the middle. This was already done in git-mv, but only for
a better visual appearance :(

Signed-off-by: Josef Weidendorfer <Josef.Weidendorfer@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-03-01 12:13:46 -08:00
5e6f85f6c1 git-mv: Allow -h without repo & fix error message
This fixes "git-mv -h" to output the usage without the need
to be in a git repository.
Additionally:
- fix confusing error message when only one arg was given
- fix typo in error message

Signed-off-by: Josef Weidendorfer <Josef.Weidendorfer@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-03-01 12:13:44 -08:00
573464319f Allow git-mv to accept ./ in paths.
Signed-off-by: Junio C Hamano <junkio@cox.net>
(cherry picked from 9a0e6731c6 commit)
2006-03-01 12:12:53 -08:00
feffaddce0 combine-diff: Honour -z option correctly.
Combined diffs don't null terminate things in the same way as standard
diffs.  This is presumably wrong.

Signed-off-by: Mark Wooding <mdw@distorted.org.uk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
(cherry picked from 6baf0484ef commit)
2006-03-01 04:09:41 -08:00
b9003c06a8 combine-diff: Honour --full-index.
For some reason, combined diffs don't honour the --full-index flag when
emitting patches.  Fix this.

Signed-off-by: Mark Wooding <mdw@distorted.org.uk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
(cherry picked from e70c6b3574 commit)
2006-03-01 04:09:40 -08:00
a64dd34d8c diffcore-break: micro-optimize by avoiding delta between identical files.
We did not check if we have the same file on both sides when
computing break score.  This is usually not a problem, but if
the user said --find-copies-harder with -B, we ended up trying a
delta between the same data even when we know the SHA1 hashes of
both sides match.

Signed-off-by: Junio C Hamano <junkio@cox.net>
(cherry picked from aeecd23ae2 commit)
2006-03-01 04:08:12 -08:00
12cbbdc40b git-am: --whitespace=x option.
This is passed down to git-apply to override the built-in
default and per-repository configuration at runtime.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-28 22:38:40 -08:00
56248c5a5c git-apply: war on whitespace -- finishing touches.
This changes the default --whitespace policy to nowarn when we
are only getting --stat, --summary etc. IOW when not applying
the patch.  When applying the patch, the default is warn (spit
out warning message but apply the patch).

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-28 01:17:14 -08:00
5c0d46eb3d git-apply --whitespace=nowarn
Andrew insists --whitespace=warn should be the default, and I
tend to agree.  This introduces --whitespace=warn, so if your
project policy is more lenient, you can squelch them by having
apply.whitespace=nowarn in your configuration file.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-27 17:36:00 -08:00
383e20b614 apply --whitespace: configuration option.
The new configuration option apply.whitespace can take one of
"warn", "error", "error-all", or "strip".  When git-apply is run
to apply the patch to the index, they are used as the default
value if there is no command line --whitespace option.

Andrew can now tell people who feed him git trees to update to
this version and say:

	git repo-config apply.whitespace error

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-27 17:36:00 -08:00
59aa256204 apply: squelch excessive errors and --whitespace=error-all
This by default makes --whitespace=warn, error, and strip
warn about only the first 5 additions of trailing whitespace.  A new
option --whitespace=error-all can be used to view all of them
before applying.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-27 17:35:59 -08:00
5c7b580c94 apply --whitespace fixes and enhancements.
In addition to fixing obvious command line parsing bugs in the
previous round, this changes the following:

 * Adds "--whitespace=strip".  This applies the patch after
   stripping the new trailing whitespace it introduces.

 * The output error message format is changed to say
   "patch-filename:linenumber:contents of the line".  This makes
   it similar to typical compiler error message format, and
   helps C-x ` (next-error) in Emacs compilation buffer.

 * --whitespace=error and --whitespace=warn do not stop at the
   first error.  We might want to limit the output to say first
   20 such lines to prevent cluttering, but on the other hand if
   you are willing to hand-fix after inspecting them, getting
   everything with a single run might be easier to work with.
   After all, somebody has to do the clean-up work somewhere.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-27 17:35:59 -08:00
1187df57c2 The war on trailing whitespace
On Sat, 25 Feb 2006, Andrew Morton wrote:
>
> I'd suggest a) git will simply refuse to apply such a patch unless given a
> special `forcing' flag, b) even when thus forced, it will still warn and c)
> with a different flag, it will strip-then-apply, without generating a
> warning.

This doesn't do the "strip-then-apply" thing, but it allows you to make
git-apply generate a warning or error on extraneous whitespace.

Use --whitespace=warn to warn, and (surprise, surprise) --whitespace=error
to make it a fatal error to have whitespace at the end.

Totally untested, of course. But it compiles, so it must be fine.

HOWEVER! Note that this literally will check every single patch-line with
"+" at the beginning. Which means that if you fix a simple typo, and the
line had a space at the end before, and you didn't remove it, that's still
considered a "new line with whitespace at the end", even though obviously
the line wasn't really new.

I assume this is what you wanted, and there isn't really any sane
alternatives (you could make the warning activate only for _pure_
additions with no deletions at all in that hunk, but that sounds a bit
insane).

		Linus
2006-02-27 17:35:59 -08:00
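
The check this message describes boils down to looking at the byte just before the terminating newline of every '+' line; a reduced sketch is below (the real check sits in parse_fragment() in the apply.c hunk later on this page, wrapped in the squelching logic from the commits above):

    #include <ctype.h>

    /* line[len-1] is the '\n', so line[len-2] is the last payload byte;
     * a bare "+\n" (an added empty line) has len == 2 and is not flagged.
     */
    static int adds_trailing_whitespace(const char *line, int len)
    {
            if (len < 3 || line[0] != '+')
                    return 0;
            return isspace((unsigned char)line[len - 2]);
    }
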
a204756a45 sample hooks template.
These two sample hooks try to detect and use the corresponding
commit hook from the same repository.  However, they forgot to
set up GIT_DIR for their own use, so they were not in effect.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-26 15:16:41 -08:00
6d5129ac09 Merge branch 'fix' into maint
* fix:
  git-am: do not allow empty commits by mistake.
2006-02-24 02:21:00 -08:00
7bd1527d2d Merge branches 'jc/fix-co-candy', 'jc/fix-rename-leak' and 'ar/fix-win' into maint
* jc/fix-co-candy:
  checkout - eye candy.

* jc/fix-rename-leak:
  diffcore-rename: plug memory leak.

* ar/fix-win:
  fix t5600-clone-fail-cleanup.sh on windows
2006-02-23 22:25:32 -08:00
6d28644d69 git-am: do not allow empty commits by mistake.
Running "git-am --resolved" without doing anything can create an empty
commit. Prevent it.

Thanks to Eric W. Biederman for spotting this.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-23 22:14:47 -08:00
edd3ebfe27 fix t5600-clone-fail-cleanup.sh on windows
On Windows you cannot remove the current or an opened directory,
an opened file, a running program, a loaded library, etc...

[jc: signoffs?  With a minor quoting fix.]

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-23 03:47:15 -08:00
09a5d72d8e diffcore-rename: plug memory leak.
Spotted by Nicolas Pitre.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 19:45:48 -08:00
bd2afde8a3 Give no terminating LF to error() function.
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 19:10:26 -08:00
744633cbf2 checkout - eye candy.
This implements "eye candy" similar to the pack-object/unpack-object
to entertain users while a large tree is being checked out after
a clone or a pull.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 19:04:06 -08:00
6dc78e696b git-fetch: follow tag only when tracking remote branch.
Unless --no-tags flag was given, git-fetch tried to always
follow remote tags that point at the commits we picked up.

It is not very useful to pick up tags from remote unless storing
the fetched branch head in a local tracking branch.  This is
especially true if the fetch is done to merge the remote branch
into our current branch as one-shot basis (i.e. "please pull"),
and is even harmful if the remote repository has many irrelevant
tags.

This proposed update disables the automated tag following unless
we are storing a fetched branch head in a local tracking
branch.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 16:04:08 -08:00
183bdb2ccc pack-objects eye-candy: finishing touches.
This updates the progress output to match "every one second or
every percent, whichever comes earlier" used by unpack-objects, as
discussed on the list.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 16:02:59 -08:00
5e8dc750ee also adds progress when actually writing a pack
If that pack is big, it takes significant time to write and might
benefit from some more eye candies as well.  This is however disabled
when the pack is written to stdout since in that case the output is
usually piped into unpack_objects which already does its own progress
reporting.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 14:51:58 -08:00
b2504a0d2f nicer eye candies for pack-objects
This provides a stable and simpler progress reporting mechanism that
updates progress as often as possible, but accurately avoids updating
more than once a second.  The deltification phase is also made more
interesting to watch (since repacking a big repository and only seeing a
dot appear once every many seconds is rather boring and doesn't provide
much food for anticipation).

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 13:15:26 -08:00
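
The throttle described in the last few commits reduces to two triggers: a flag set once per second from the SIGALRM handler (see the sigaction sketch near the top of this list) and a change in the integer percentage.  A hedged sketch, with illustrative names:

    #include <signal.h>
    #include <stdio.h>

    static volatile sig_atomic_t progress_update;   /* set by the SIGALRM handler */

    static void maybe_show_progress(int done, int total, int *last_percent)
    {
            int percent = total ? done * 100 / total : 100;

            if (progress_update || percent != *last_percent) {
                    fprintf(stderr, "Packing %d/%d (%d%%)\r", done, total, percent);
                    *last_percent = percent;
                    progress_update = 0;
            }
    }
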
d64e6b0429 Keep Porcelainish from failing by broken ident after making changes.
"empty ident not allowed" error makes commit-tree fail, so we
are already safer in that we would not end up with commit
objects that have bogus names on the author or committer fields.
However, before commit-tree is called there are already changes
made to the index file and the working tree.  The operation can
be resumed after fixing the environment problem, but when this
hits a newcomer with an unusable gecos field, the first question
becomes "what did I lose and how would I recover".

This patch modifies some Porcelainish commands to verify
GIT_COMMITTER_IDENT as soon as we know we are going to make some
commits, before doing much damage, to prevent such confusion.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 13:14:57 -08:00
589e4f93c7 Delay "empty ident" errors until they really matter.
The previous round warned people upfront to encourage fixing their
environment early, but some people just use repositories and git
tools read-only without making any changes, and in such a case
there is not much point insisting on them having a usable ident.

This round attempts to move the error until either "git-var"
asks for the ident explicitly or "commit-tree" wants to use it.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 13:14:57 -08:00
2fb4a21074 Make "empty ident" error message a bit more helpful.
It appears that some people who did not care about having bogus
names in their own commit messages were bitten by the recent
change to require a sane environment [*1*].

While it was a good idea to prevent people from using bogus
names to create commits and doing sign-offs, the error message
is not very informative.  This patch attempts to warn upfront
and hint to people how to fix their environments.

[Footnote]

*1* The thread is this one.

    http://marc.theaimsgroup.com/?t=113868084800004

    Especially this message.

    http://marc.theaimsgroup.com/?m=113932830015032

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 13:14:57 -08:00
15b4d577ae pack-objects: avoid delta chains that are too long.
This tries to rework the solution for the excess delta chain
problem.  An earlier commit worked around it ``cheaply'', but
repeated repacking risks unbounded growth of delta chains.

This version counts the length of delta chain we are reusing
from the existing pack, and makes sure a base object that has
sufficiently long delta chain does not get deltified.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 13:14:57 -08:00
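
Counting the reused chain is just a walk over the delta base pointers; the sketch below shows that half only, with a reduced, illustrative object_entry (the base-adjustment bookkeeping, the delta_limit member in the pack-objects diff further down, is omitted):

    struct object_entry {
            struct object_entry *delta;     /* delta base, NULL if stored whole */
    };

    static unsigned int delta_chain_length(const struct object_entry *e)
    {
            unsigned int n = 0;

            while (e->delta) {
                    n++;
                    e = e->delta;
            }
            return n;
    }

    /* refuse to deltify an object whose reused chain is already at the limit */
    static int too_deep(const struct object_entry *e, unsigned int max_depth)
    {
            return delta_chain_length(e) >= max_depth;
    }
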
4181bda156 git-repack: allow passing a couple of flags to pack-objects.
A new flag -q makes underlying pack-objects less chatty.
A new flag -f forces delta to be recomputed from scratch.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 13:14:57 -08:00
ab7cd7bb8c pack-objects: finishing touches.
This introduces --no-reuse-delta option to disable reusing of
existing delta, which is a large part of the optimization
introduced by this series.  This may become necessary if
repeated repacking makes the delta chain too long.  With this, the
output of the command becomes identical to that of the older
implementation.  But the performance suffers greatly.

It still allows reusing non-deltified representations; there is
no point uncompressing and recompressing the whole text.

It also adds a couple more statistics outputs, and squelches
them under the -q flag, which the last round forgot to do.

  $ time old-git-pack-objects --stdout >/dev/null <RL
  Generating pack...
  Done counting 184141 objects.
  Packing 184141 objects....................
  real    12m8.530s       user    11m1.450s       sys     0m57.920s
  $ time git-pack-objects --stdout >/dev/null <RL
  Generating pack...
  Done counting 184141 objects.
  Packing 184141 objects.....................
  Total 184141, written 184141 (delta 138297), reused 178833 (delta 134081)
  real    0m59.549s       user    0m56.670s       sys     0m2.400s
  $ time git-pack-objects --stdout --no-reuse-delta >/dev/null <RL
  Generating pack...
  Done counting 184141 objects.
  Packing 184141 objects.....................
  Total 184141, written 184141 (delta 134833), reused 47904 (delta 0)
  real    11m13.830s      user    9m45.240s       sys     0m44.330s

There is one remaining issue when --no-reuse-delta option is not
used.  It can create delta chains that are deeper than specified.

    A<--B<--C<--D   E   F   G

Suppose we have a delta chain A to D (A is stored in full either
in a pack or as a loose object.  B is a depth-1 delta relative to
A, C is a depth-2 delta relative to B...) with loose objects E, F, G.
And we are going to pack all of them.

B, C and D are left as delta against A, B and C respectively.
So A, E, F, and G are examined for deltification, and let's say
we decided to keep E expanded, and store the rest as deltas like
this:

    E<--F<--G<--A

Oops.  We ended up making D a bit too deep, didn't we?  B, C and
D form a chain on top of A!

This is because we did not know what the final depth of A would
be, when we checked objects and decided to keep the existing
delta.  Unfortunately, deferring the decision until just before
the deltification is not an option.  To be able to make B, C,
and D candidates for deltification with the rest, we need to
know the type and final unexpanded size of them, but the major
part of the optimization comes from the fact that we do not read
the delta data to do so -- getting the final size is quite an
expensive operation.

To prevent this from happening, we should keep A from being
deltified.  But how would we tell that, cheaply?

To do this most precisely, after check_object() runs, each
object that is used as the base object of some existing delta
needs to be marked with the maximum depth of the objects we
decided to keep deltified (in this case, D is depth 3 relative
to A, so if no other delta chain that is longer than 3 based on
A exists, mark A with 3).  Then when attempting to deltify A, we
would take that number into account to see if the final delta
chain that leads to D becomes too deep.

However, this is a bit cumbersome to compute, so we would cheat
and reduce the maximum depth for A arbitrarily to depth/4 in
this implementation.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 13:14:57 -08:00
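
The depth/4 cheat from the last paragraph, as a one-liner; names are illustrative, and the real bookkeeping presumably flows through the delta_limit member added in the pack-objects diff below:

    /* Give a base of reused deltas only a quarter of the normal depth
     * budget instead of tracking the exact deepest chain hanging off it.
     */
    static unsigned int base_depth_budget(unsigned int max_depth, int is_reused_base)
    {
            return is_reused_base ? max_depth / 4 : max_depth;
    }
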
3f9ac8d259 pack-objects: reuse data from existing packs.
When generating a new pack, notice if objects we need already
exist in other packs.  If an object is stored deltified,
and its base object is also what we are going to pack, then
reuse the existing deltified representation unconditionally,
bypassing all the expensive find_deltas() and try_deltas()
calls.

Also, notice if what we are going to write out exactly matches
what is already in an existing pack (either deltified or just
compressed).  In such a case, we can just copy it instead of
going through the usual uncompressing & recompressing cycle.

Without this patch, in linux-2.6 repository with about 1500
loose objects and a single mega pack:

    $ git-rev-list --objects v2.6.16-rc3 >RL
    $ wc -l RL
    184141 RL
    $ time git-pack-objects p <RL
    Generating pack...
    Done counting 184141 objects.
    Packing 184141 objects....................
    a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2

    real    12m4.323s
    user    11m2.560s
    sys     0m55.950s

With this patch, the same input:

    $ time ../git.junio/git-pack-objects q <RL
    Generating pack...
    Done counting 184141 objects.
    Packing 184141 objects.....................
    a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
    Total 184141, written 184141, reused 182441

    real    1m2.608s
    user    0m55.090s
    sys     0m1.830s

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 13:14:56 -08:00
26125f6b9b detect broken alternates.
The real problem that triggered an earlier fix was that an
alternate entry was pointing at a removed directory.  Complaining
about an object/pack directory that cannot be opendir-ed produces
noise in an ancient repository that does not have an object/pack
directory and has never been packed.

Detect the real user error and report it.  Also if opendir
failed for other reasons (e.g. no read permissions), report that
as well.

Spotted by Andrew Vasquez <andrew.vasquez@qlogic.com>.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 11:16:38 -08:00
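
A sketch of the distinction the message draws, assuming POSIX opendir()/errno; names and message wording are illustrative:

    #include <dirent.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static void check_pack_dir(const char *packdir)
    {
            DIR *dir = opendir(packdir);

            if (dir) {
                    closedir(dir);
                    return;
            }
            if (errno == ENOENT)
                    return;  /* ancient repository that has never been packed */
            fprintf(stderr, "error: unable to open %s: %s\n",
                    packdir, strerror(errno));
    }

A missing alternate object directory itself, as opposed to a missing pack/ subdirectory under it, is the real user error the commit reports.
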
aa064743fa git-push: Update documentation to describe the no-refspec behavior.
It turns out that the git-push documentation didn't describe what it
would do when not given a refspec, (not on the command line, nor in a
remotes file). This is fairly important for the user who is trying to
understand operations such as:

	git clone git://something/some/where
	# hack, hack, hack
	git push origin

I tracked the mystery behavior down to git-send-pack and lifted the
relevant portion of its documentation up to git-push, (namely that all
refs existing both locally and remotely are updated).

Signed-off-by: Carl Worth <cworth@cworth.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-21 22:11:50 -08:00
fab5de7936 format-patch: pretty-print timestamp correctly.
Perl is not C and does not truncate the division result.  Arghh!

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-21 18:13:32 -08:00
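
As a worked example (the timezone value is chosen purely for illustration): for a zone of -0730, abs($tz) is 730, and the intended arithmetic is int(730 / 100) * 60 + (730 % 100) = 7 * 60 + 30 = 450 minutes, whereas without the truncation Perl computes 7.3 * 60 + 30 = 468.  The int() fix itself appears in one of the Perl hunks later on this page.
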
60ace8790f git-add: Add support for --, documentation, and test.
This adds support to git-add to allow the common -- to separate
command-line options and file names. It adds documentation and a new
git-add test case as well.

[jc: this should apply to 1.2.X maintenance series, so I reworked
 git-ls-files --error-unmatch test. ]

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-21 17:33:43 -08:00
39ba7d5464 Fix retries in git-cvsimport
Fixed a couple of bugs in recovering from broken connections:

The _line() method now returns undef correctly when the connection
is broken instead of falling off the function and returning garbage.

Retries are now reported to stderr, and any partially downloaded
file is discarded instead of being appended to.

The "Server gone away" test has been removed, because it was
reachable only if the garbage return bug bit.

Signed-off-by: Martin Mares <mj@ucw.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-18 16:19:00 -08:00
3ff903bfb9 archimport: remove files from the index before adding/updating
This fixes a bug when importing where a directory gets removed/renamed
but is immediately replaced by a file of the same name in the same
changeset.

This fix only applies to the accurate (default) strategy at the moment.

This patch should also fix the fast strategy if/when it is updated
to handle the cases that would've triggered this bug.

This bug was originally found in git-svn, but I remembered I did the
same thing with archimport as well.

Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-18 11:21:16 -08:00
772d8a3b63 Make git-reset delete empty directories
When git-reset --hard is used and a subdirectory becomes
empty (as it contains no tracked files in the target tree)
the empty subdirectory should be removed.  This matches
the behavior of git-checkout-index and git-read-tree -m
which would not have created the subdirectory or would
have deleted it when updating the working directory.

Subdirectories which are not empty will be left behind.
This may happen if the subdirectory still contains object
files from the user's build process (for example).

[jc: simplified the logic a bit, while keeping the test script.]
2006-02-17 23:52:57 -08:00
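
A C rendering of the walk the message describes (the actual change is a few lines of Perl embedded in a shell hunk later on this page): strip the last path component and rmdir() each parent until one refuses because it is not empty.

    #include <string.h>
    #include <unistd.h>

    static void remove_empty_parents(char *path)
    {
            char *slash;

            while ((slash = strrchr(path, '/')) != NULL) {
                    *slash = '\0';
                    if (rmdir(path))
                            break;  /* not empty (or otherwise undeletable): stop */
            }
    }
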
735d80b3bf Document --short and --git-dir in git-rev-parse(1)
Signed-off-by: Jonas Fonseca <fonseca@diku.dk>
2006-02-17 17:33:12 -08:00
44de0da4f9 git-rev-parse: Fix --short= option parsing
Signed-off-by: Jonas Fonseca <fonseca@diku.dk>
2006-02-17 17:33:11 -08:00
b5b16990f8 Prevent git-upload-pack segfault if object cannot be found
Signed-off-by: Carl Worth <cworth@cworth.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-17 16:20:51 -08:00
eedf8f97e5 Abstract test_create_repo out for use in tests.
Signed-off-by: Carl Worth <cworth@cworth.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-17 16:16:53 -08:00
41ff7a1076 Trap exit to clean up created directory if clone fails.
Signed-off-by: Carl Worth <cworth@cworth.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-17 16:16:49 -08:00
43 changed files with 1072 additions and 223 deletions

View File

@ -7,7 +7,7 @@ git-add - Add files to the index file.
SYNOPSIS
--------
'git-add' [-n] [-v] <file>...
'git-add' [-n] [-v] [--] <file>...
DESCRIPTION
-----------
@ -26,6 +26,11 @@ OPTIONS
-v::
Be verbose.
--::
This option can be used to separate command-line options from
the list of files, (useful when filenames might be mistaken
for command-line options).
DISCUSSION
----------

View File

@ -8,7 +8,10 @@ git-pack-objects - Create a packed archive of objects.
SYNOPSIS
--------
'git-pack-objects' [--non-empty] [--local] [--incremental] [--window=N] [--depth=N] {--stdout | base-name} < object-list
[verse]
'git-pack-objects' [-q] [--no-reuse-delta] [--non-empty]
[--local] [--incremental] [--window=N] [--depth=N]
{--stdout | base-name} < object-list
DESCRIPTION
@ -32,6 +35,10 @@ Placing both in the pack/ subdirectory of $GIT_OBJECT_DIRECTORY (or
any of the directories on $GIT_ALTERNATE_OBJECT_DIRECTORIES)
enables git to read from such an archive.
In a packed archive, an object is either stored as a compressed
whole, or as a difference from some other object. The latter is
often called a delta.
OPTIONS
-------
@ -74,6 +81,18 @@ base-name::
Only create a packed archive if it would contain at
least one object.
-q::
This flag makes the command not to report its progress
on the standard error stream.
--no-reuse-delta::
When creating a packed archive in a repository that
has existing packs, the command reuses existing deltas.
This sometimes results in a slightly suboptimal pack.
This flag tells the command not to reuse existing deltas
but compute them from scratch.
Author
------
Written by Linus Torvalds <torvalds@osdl.org>

View File

@ -43,6 +43,12 @@ to fast forward the remote ref that matches <dst>. If
the optional plus `+` is used, the remote ref is updated
even if it does not result in a fast forward update.
+
Note: If no explicit refspec is found, (that is neither
on the command line nor in any Push line of the
corresponding remotes file---see below), then all the
refs that exist both on the local side and on the remote
side are updated.
+
Some short-cut notations are also supported.
+
* `tag <tag>` means the same as `refs/tags/<tag>:refs/tags/<tag>`.

View File

@ -9,7 +9,7 @@ objects into pack files.
SYNOPSIS
--------
'git-repack' [-a] [-d] [-l] [-n]
'git-repack' [-a] [-d] [-f] [-l] [-n] [-q]
DESCRIPTION
-----------
@ -43,6 +43,14 @@ OPTIONS
Pass the `--local` option to `git pack-objects`, see
gitlink:git-pack-objects[1].
-f::
Pass the `--no-reuse-delta` option to `git pack-objects`, see
gitlink:git-pack-objects[1].
-q::
Pass the `-q` option to `git pack-objects`, see
gitlink:git-pack-objects[1].
-n::
Do not update the server information with
`git update-server-info`.

View File

@ -77,6 +77,14 @@ OPTIONS
path of the top-level directory relative to the current
directory (typically a sequence of "../", or an empty string).
--git-dir::
Show `$GIT_DIR` if defined else show the path to the .git directory.
--short, --short=number::
Instead of outputting the full SHA1 values of object names try to
abbreviate them to a shorter unique name. When no length is specified
7 is used. The minimum length is 4.
--since=datestring, --after=datestring::
Parses the date string, and outputs corresponding
--max-age= parameter for git-rev-list command.

apply.c (158 changed lines)
View File

@ -34,6 +34,56 @@ static int line_termination = '\n';
static const char apply_usage[] =
"git-apply [--stat] [--numstat] [--summary] [--check] [--index] [--apply] [--no-add] [--index-info] [--allow-binary-replacement] [-z] [-pNUM] <patch>...";
static enum whitespace_eol {
nowarn_whitespace,
warn_on_whitespace,
error_on_whitespace,
strip_whitespace,
} new_whitespace = warn_on_whitespace;
static int whitespace_error = 0;
static int squelch_whitespace_errors = 5;
static int applied_after_stripping = 0;
static const char *patch_input_file = NULL;
static void parse_whitespace_option(const char *option)
{
if (!option) {
new_whitespace = warn_on_whitespace;
return;
}
if (!strcmp(option, "warn")) {
new_whitespace = warn_on_whitespace;
return;
}
if (!strcmp(option, "nowarn")) {
new_whitespace = nowarn_whitespace;
return;
}
if (!strcmp(option, "error")) {
new_whitespace = error_on_whitespace;
return;
}
if (!strcmp(option, "error-all")) {
new_whitespace = error_on_whitespace;
squelch_whitespace_errors = 0;
return;
}
if (!strcmp(option, "strip")) {
new_whitespace = strip_whitespace;
return;
}
die("unrecognized whitespace option '%s'", option);
}
static void set_default_whitespace_mode(const char *whitespace_option)
{
if (!whitespace_option && !apply_default_whitespace) {
new_whitespace = (apply
? warn_on_whitespace
: nowarn_whitespace);
}
}
/*
* For "diff-stat" like behaviour, we keep track of the biggest change
* we've seen, and the longest filename. That allows us to do simple
@ -815,6 +865,25 @@ static int parse_fragment(char *line, unsigned long size, struct patch *patch, s
oldlines--;
break;
case '+':
/*
* We know len is at least two, since we have a '+' and
* we checked that the last character was a '\n' above.
* That is, an addition of an empty line would check
* the '+' here. Sneaky...
*/
if ((new_whitespace != nowarn_whitespace) &&
isspace(line[len-2])) {
whitespace_error++;
if (squelch_whitespace_errors &&
squelch_whitespace_errors <
whitespace_error)
;
else {
fprintf(stderr, "Adds trailing whitespace.\n%s:%d:%.*s\n",
patch_input_file,
linenr, len-2, line+1);
}
}
added++;
newlines--;
break;
@ -1092,6 +1161,28 @@ struct buffer_desc {
unsigned long alloc;
};
static int apply_line(char *output, const char *patch, int plen)
{
/* plen is number of bytes to be copied from patch,
* starting at patch+1 (patch[0] is '+'). Typically
* patch[plen] is '\n'.
*/
int add_nl_to_tail = 0;
if ((new_whitespace == strip_whitespace) &&
1 < plen && isspace(patch[plen-1])) {
if (patch[plen] == '\n')
add_nl_to_tail = 1;
plen--;
while (0 < plen && isspace(patch[plen]))
plen--;
applied_after_stripping++;
}
memcpy(output, patch + 1, plen);
if (add_nl_to_tail)
output[plen++] = '\n';
return plen;
}
static int apply_one_fragment(struct buffer_desc *desc, struct fragment *frag)
{
char *buf = desc->buffer;
@ -1127,10 +1218,9 @@ static int apply_one_fragment(struct buffer_desc *desc, struct fragment *frag)
break;
/* Fall-through for ' ' */
case '+':
if (*patch != '+' || !no_add) {
memcpy(new + newsize, patch + 1, plen);
newsize += plen;
}
if (*patch != '+' || !no_add)
newsize += apply_line(new + newsize, patch,
plen);
break;
case '@': case '\\':
/* Ignore it, we already handled it */
@ -1691,7 +1781,7 @@ static int use_patch(struct patch *p)
return 1;
}
static int apply_patch(int fd)
static int apply_patch(int fd, const char *filename)
{
int newfd;
unsigned long offset, size;
@ -1699,6 +1789,7 @@ static int apply_patch(int fd)
struct patch *list = NULL, **listp = &list;
int skipped_patch = 0;
patch_input_file = filename;
if (!buffer)
return -1;
offset = 0;
@ -1725,6 +1816,9 @@ static int apply_patch(int fd)
}
newfd = -1;
if (whitespace_error && (new_whitespace == error_on_whitespace))
apply = 0;
write_index = check_index && apply;
if (write_index)
newfd = hold_index_file_for_update(&cache_file, get_index_file());
@ -1761,17 +1855,28 @@ static int apply_patch(int fd)
return 0;
}
static int git_apply_config(const char *var, const char *value)
{
if (!strcmp(var, "apply.whitespace")) {
apply_default_whitespace = strdup(value);
return 0;
}
return git_default_config(var, value);
}
int main(int argc, char **argv)
{
int i;
int read_stdin = 1;
const char *whitespace_option = NULL;
for (i = 1; i < argc; i++) {
const char *arg = argv[i];
int fd;
if (!strcmp(arg, "-")) {
apply_patch(0);
apply_patch(0, "<stdin>");
read_stdin = 0;
continue;
}
@ -1831,11 +1936,18 @@ int main(int argc, char **argv)
line_termination = 0;
continue;
}
if (!strncmp(arg, "--whitespace=", 13)) {
whitespace_option = arg + 13;
parse_whitespace_option(arg + 13);
continue;
}
if (check_index && prefix_length < 0) {
prefix = setup_git_directory();
prefix_length = prefix ? strlen(prefix) : 0;
git_config(git_default_config);
git_config(git_apply_config);
if (!whitespace_option && apply_default_whitespace)
parse_whitespace_option(apply_default_whitespace);
}
if (0 < prefix_length)
arg = prefix_filename(prefix, prefix_length, arg);
@ -1844,10 +1956,38 @@ int main(int argc, char **argv)
if (fd < 0)
usage(apply_usage);
read_stdin = 0;
apply_patch(fd);
set_default_whitespace_mode(whitespace_option);
apply_patch(fd, arg);
close(fd);
}
set_default_whitespace_mode(whitespace_option);
if (read_stdin)
apply_patch(0);
apply_patch(0, "<stdin>");
if (whitespace_error) {
if (squelch_whitespace_errors &&
squelch_whitespace_errors < whitespace_error) {
int squelched =
whitespace_error - squelch_whitespace_errors;
fprintf(stderr, "warning: squelched %d whitespace error%s\n",
squelched,
squelched == 1 ? "" : "s");
}
if (new_whitespace == error_on_whitespace)
die("%d line%s add%s trailing whitespaces.",
whitespace_error,
whitespace_error == 1 ? "" : "s",
whitespace_error == 1 ? "s" : "");
if (applied_after_stripping)
fprintf(stderr, "warning: %d line%s applied after"
" stripping trailing whitespaces.\n",
applied_after_stripping,
applied_after_stripping == 1 ? "" : "s");
else if (whitespace_error)
fprintf(stderr, "warning: %d line%s add%s trailing"
" whitespaces.\n",
whitespace_error,
whitespace_error == 1 ? "" : "s",
whitespace_error == 1 ? "s" : "");
}
return 0;
}

View File

@ -160,10 +160,12 @@ extern int hold_index_file_for_update(struct cache_file *, const char *path);
extern int commit_index_file(struct cache_file *);
extern void rollback_index_file(struct cache_file *);
/* Environment bits from configuration mechanism */
extern int trust_executable_bit;
extern int only_use_symrefs;
extern int diff_rename_limit_default;
extern int shared_repository;
extern const char *apply_default_whitespace;
#define GIT_REPO_VERSION 0
extern int repository_format_version;
@ -246,8 +248,8 @@ void datestamp(char *buf, int bufsize);
unsigned long approxidate(const char *);
extern int setup_ident(void);
extern const char *git_author_info(void);
extern const char *git_committer_info(void);
extern const char *git_author_info(int);
extern const char *git_committer_info(int);
struct checkout {
const char *base_dir;

View File

@ -621,7 +621,8 @@ static void reuse_combine_diff(struct sline *sline, unsigned long cnt,
}
static int show_patch_diff(struct combine_diff_path *elem, int num_parent,
int dense, const char *header)
int dense, const char *header,
struct diff_options *opt)
{
unsigned long size, cnt, lno;
char *result, *cp, *ep;
@ -631,6 +632,7 @@ static int show_patch_diff(struct combine_diff_path *elem, int num_parent,
char ourtmp_buf[TMPPATHLEN];
char *ourtmp = ourtmp_buf;
int working_tree_file = !memcmp(elem->sha1, null_sha1, 20);
int abbrev = opt->full_index ? 40 : DEFAULT_ABBREV;
/* Read the result of merge first */
if (!working_tree_file) {
@ -724,7 +726,7 @@ static int show_patch_diff(struct combine_diff_path *elem, int num_parent,
if (header) {
shown_header++;
puts(header);
printf("%s%c", header, opt->line_termination);
}
printf("diff --%s ", dense ? "cc" : "combined");
if (quote_c_style(elem->path, NULL, NULL, 0))
@ -735,10 +737,10 @@ static int show_patch_diff(struct combine_diff_path *elem, int num_parent,
printf("index ");
for (i = 0; i < num_parent; i++) {
abb = find_unique_abbrev(elem->parent[i].sha1,
DEFAULT_ABBREV);
abbrev);
printf("%s%s", i ? "," : "", abb);
}
abb = find_unique_abbrev(elem->sha1, DEFAULT_ABBREV);
abb = find_unique_abbrev(elem->sha1, abbrev);
printf("..%s\n", abb);
if (mode_differs) {
@ -797,7 +799,7 @@ static void show_raw_diff(struct combine_diff_path *p, int num_parent, const cha
inter_name_termination = 0;
if (header)
puts(header);
printf("%s%c", header, line_termination);
for (i = 0; i < num_parent; i++) {
if (p->parent[i].mode)
@ -862,7 +864,7 @@ int show_combined_diff(struct combine_diff_path *p,
default:
case DIFF_FORMAT_PATCH:
return show_patch_diff(p, num_parent, dense, header);
return show_patch_diff(p, num_parent, dense, header, opt);
}
}

View File

@ -118,8 +118,8 @@ int main(int argc, char **argv)
add_buffer(&buffer, &size, "parent %s\n", sha1_to_hex(parent_sha1[i]));
/* Person/date information */
add_buffer(&buffer, &size, "author %s\n", git_author_info());
add_buffer(&buffer, &size, "committer %s\n\n", git_committer_info());
add_buffer(&buffer, &size, "author %s\n", git_author_info(1));
add_buffer(&buffer, &size, "committer %s\n\n", git_committer_info(1));
/* And add the comment */
while (fgets(comment, sizeof(comment), stdin) != NULL)

View File

@ -212,7 +212,8 @@ int parse_commit_buffer(struct commit *item, void *buffer, unsigned long size)
if (memcmp(bufptr, "tree ", 5))
return error("bogus commit object %s", sha1_to_hex(item->object.sha1));
if (get_sha1_hex(bufptr + 5, parent) < 0)
return error("bad tree pointer in commit %s\n", sha1_to_hex(item->object.sha1));
return error("bad tree pointer in commit %s",
sha1_to_hex(item->object.sha1));
item->tree = lookup_tree(parent);
if (item->tree)
n_refs++;

View File

@ -58,6 +58,10 @@ static int should_break(struct diff_filespec *src,
if (!S_ISREG(src->mode) || !S_ISREG(dst->mode))
return 0; /* leave symlink rename alone */
if (src->sha1_valid && dst->sha1_valid &&
!memcmp(src->sha1, dst->sha1, 20))
return 0; /* they are the same */
if (diff_populate_filespec(src, 0) || diff_populate_filespec(dst, 0))
return 0; /* error but caught downstream */

View File

@ -176,8 +176,10 @@ static int estimate_similarity(struct diff_filespec *src,
/* A delta that has a lot of literal additions would have
* big delta_size no matter what else it does.
*/
if (base_size * (MAX_SCORE-minimum_score) < delta_size * MAX_SCORE)
if (base_size * (MAX_SCORE-minimum_score) < delta_size * MAX_SCORE) {
free(delta);
return 0;
}
/* Estimate the edit size by interpreting delta. */
if (count_delta(delta, delta_size, &src_copied, &literal_added)) {

View File

@ -16,6 +16,7 @@ int only_use_symrefs = 0;
int repository_format_version = 0;
char git_commit_encoding[MAX_ENCODING_LENGTH] = "utf-8";
int shared_repository = 0;
const char *apply_default_whitespace = NULL;
static char *git_dir, *git_object_dir, *git_index_file, *git_refs_dir,
*git_graft_file;

View File

@ -14,6 +14,10 @@ while : ; do
-v)
verbose=--verbose
;;
--)
shift
break
;;
-*)
usage
;;

View File

@ -1,11 +1,13 @@
#!/bin/sh
#
#
# Copyright (c) 2005, 2006 Junio C Hamano
USAGE='[--signoff] [--dotest=<dir>] [--utf8] [--binary] [--3way] <mbox>
or, when resuming [--skip | --resolved]'
. git-sh-setup
git var GIT_COMMITTER_IDENT >/dev/null || exit
stop_here () {
echo "$1" >"$dotest/next"
exit 1
@ -98,7 +100,7 @@ fall_back_3way () {
}
prec=4
dotest=.dotest sign= utf8= keep= skip= interactive= resolved= binary=
dotest=.dotest sign= utf8= keep= skip= interactive= resolved= binary= ws=
while case "$#" in 0) break;; esac
do
@ -131,6 +133,9 @@ do
--sk|--ski|--skip)
skip=t; shift ;;
--whitespace=*)
ws=$1; shift ;;
--)
shift; break ;;
-*)
@ -169,10 +174,11 @@ else
exit 1
}
# -b, -s, -u and -k flags are kept for the resuming session after
# a patch failure.
# -b, -s, -u, -k and --whitespace flags are kept for the
# resuming session after a patch failure.
# -3 and -i can and must be given when resuming.
echo "$binary" >"$dotest/binary"
echo " $ws" >"$dotest/whitespace"
echo "$sign" >"$dotest/sign"
echo "$utf8" >"$dotest/utf8"
echo "$keep" >"$dotest/keep"
@ -200,6 +206,7 @@ if test "$(cat "$dotest/keep")" = t
then
keep=-k
fi
ws=`cat "$dotest/whitespace"`
if test "$(cat "$dotest/sign")" = t
then
SIGNOFF=`git-var GIT_COMMITTER_IDENT | sed -e '
@ -298,7 +305,7 @@ do
} >"$dotest/final-commit"
;;
*)
case "$resolved,$interactive" in
case "$resolved$interactive" in
tt)
# This is used only for interactive view option.
git-diff-index -p --cached HEAD >"$dotest/patch"
@ -353,7 +360,7 @@ do
case "$resolved" in
'')
git-apply $binary --index "$dotest/patch"
git-apply $binary --index $ws "$dotest/patch"
apply_status=$?
;;
t)
@ -362,6 +369,12 @@ do
# trust what the user has in the index file and the
# working tree.
resolved=
changed="$(git-diff-index --cached --name-only HEAD)"
if test '' = "$changed"
then
echo "No changes - did you forget update-index?"
stop_here $this
fi
apply_status=0
;;
esac
@ -372,7 +385,7 @@ do
then
# Applying the patch to an earlier tree and merging the
# result may have produced the same tree as ours.
changed="$(git-diff-index --cached --name-only -z HEAD)"
changed="$(git-diff-index --cached --name-only HEAD)"
if test '' = "$changed"
then
echo No changes -- Patch already applied.

View File

@ -21,6 +21,8 @@
USAGE='[-u] [-k] [-q] [-m] (-c .dotest/<num> | mbox) [signoff]'
. git-sh-setup
git var GIT_COMMITTER_IDENT >/dev/null || exit
keep_subject= query_apply= continue= utf8= resume=t
while case "$#" in 0) break ;; esac
do

View File

@ -346,12 +346,10 @@ sub process_patchset_accurate {
}
# update the index with all the changes we got
system('git-diff-files --name-only -z | '.
'git-update-index --remove -z --stdin') == 0 or die "$! $?\n";
system('git-ls-files --others -z | '.
'git-update-index --add -z --stdin') == 0 or die "$! $?\n";
system('git-ls-files --deleted -z | '.
'git-update-index --remove -z --stdin') == 0 or die "$! $?\n";
system('git-ls-files -z | '.
'git-update-index -z --stdin') == 0 or die "$! $?\n";
return 1;
}
@ -416,22 +414,14 @@ sub process_patchset_fast {
# imports don't give us good info
# on added files. Shame on them
if ($ps->{type} eq 'i' || $ps->{type} eq 't') {
system('git-ls-files --others -z | '.
'git-update-index --add -z --stdin') == 0 or die "$! $?\n";
system('git-ls-files --deleted -z | '.
'git-update-index --remove -z --stdin') == 0 or die "$! $?\n";
system('git-ls-files --others -z | '.
'git-update-index --add -z --stdin') == 0 or die "$! $?\n";
}
# TODO: handle removed_directories and renamed_directories:
if (my $add = $ps->{new_files}) {
while (@$add) {
my @slice = splice(@$add, 0, 100);
system('git-update-index','--add','--',@slice) == 0 or
die "Error in git-update-index --add: $! $?\n";
}
}
if (my $del = $ps->{removed_files}) {
unlink @$del;
while (@$del) {
@ -462,6 +452,14 @@ sub process_patchset_fast {
}
}
if (my $add = $ps->{new_files}) {
while (@$add) {
my @slice = splice(@$add, 0, 100);
system('git-update-index','--add','--',@slice) == 0 or
die "Error in git-update-index --add: $! $?\n";
}
}
if (my $mod = $ps->{modified_files}) {
while (@$mod) {
my @slice = splice(@$mod, 0, 100);

View File

@ -118,6 +118,7 @@ dir="$2"
[ -e "$dir" ] && echo "$dir already exists." && usage
mkdir -p "$dir" &&
D=$(cd "$dir" && pwd) &&
trap 'err=$?; cd ..; rm -r "$D"; exit $err' exit
case "$bare" in
yes) GIT_DIR="$D" ;;
*) GIT_DIR="$D/.git" ;;
@ -252,6 +253,9 @@ Pull: $head_points_at:$origin" &&
case "$no_checkout" in
'')
git checkout
git-read-tree -m -u -v HEAD HEAD
esac
fi
trap - exit

View File

@ -361,6 +361,7 @@ sub _line {
}
}
}
return undef;
}
sub file {
my($self,$fn,$rev) = @_;
@ -372,19 +373,15 @@ sub file {
$self->_file($fn,$rev) and $res = $self->_line($fh);
if (!defined $res) {
# retry
print STDERR "Server has gone away while fetching $fn $rev, retrying...\n";
truncate $fh, 0;
$self->conn();
$self->_file($fn,$rev)
or die "No file command send\n";
$self->_file($fn,$rev) or die "No file command send";
$res = $self->_line($fh);
die "No input: $fn $rev\n" unless defined $res;
die "Retry failed" unless defined $res;
}
close ($fh);
if ($res eq '') {
die "Looks like the server has gone away while fetching $fn $rev -- exiting!";
}
return ($name, $res);
}

View File

@ -368,20 +368,25 @@ fetch_main "$reflist"
# automated tag following
case "$no_tags$tags" in
'')
taglist=$(IFS=" " &&
git-ls-remote $upload_pack --tags "$remote" |
sed -ne 's|^\([0-9a-f]*\)[ ]\(refs/tags/.*\)^{}$|\1 \2|p' |
while read sha1 name
do
test -f "$GIT_DIR/$name" && continue
git-check-ref-format "$name" || {
echo >&2 "warning: tag ${name} ignored"
continue
}
git-cat-file -t "$sha1" >/dev/null 2>&1 || continue
echo >&2 "Auto-following $name"
echo ".${name}:${name}"
done)
case "$reflist" in
*:refs/*)
# effective only when we are following remote branch
# using local tracking branch.
taglist=$(IFS=" " &&
git-ls-remote $upload_pack --tags "$remote" |
sed -ne 's|^\([0-9a-f]*\)[ ]\(refs/tags/.*\)^{}$|\1 \2|p' |
while read sha1 name
do
test -f "$GIT_DIR/$name" && continue
git-check-ref-format "$name" || {
echo >&2 "warning: tag ${name} ignored"
continue
}
git-cat-file -t "$sha1" >/dev/null 2>&1 || continue
echo >&2 "Auto-following $name"
echo ".${name}:${name}"
done)
esac
case "$taglist" in
'') ;;
?*)

View File

@ -189,7 +189,7 @@ my @month_names = qw(Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec);
sub show_date {
my ($time, $tz) = @_;
my $minutes = abs($tz);
$minutes = ($minutes / 100) * 60 + ($minutes % 100);
$minutes = int($minutes / 100) * 60 + ($minutes % 100);
if ($tz < 0) {
$minutes = -$minutes;
}

View File

@ -130,7 +130,7 @@ case "$#,$common,$no_commit" in
echo "Updating from $head to $1."
git-update-index --refresh 2>/dev/null
new_head=$(git-rev-parse --verify "$1^0") &&
git-read-tree -u -m $head "$new_head" &&
git-read-tree -u -v -m $head "$new_head" &&
finish "$new_head" "Fast forward"
dropsave
exit 0
@ -142,9 +142,11 @@ case "$#,$common,$no_commit" in
1,*,)
# We are not doing octopus, not fast forward, and have only
# one common. See if it is really trivial.
git var GIT_COMMITTER_IDENT >/dev/null || exit
echo "Trying really trivial in-index merge..."
git-update-index --refresh 2>/dev/null
if git-read-tree --trivial -m -u $common $head "$1" &&
if git-read-tree --trivial -m -u -v $common $head "$1" &&
result_tree=$(git-write-tree)
then
echo "Wonderful."
@ -179,6 +181,9 @@ case "$#,$common,$no_commit" in
;;
esac
# We are going to make a new commit.
git var GIT_COMMITTER_IDENT >/dev/null || exit
case "$use_strategies" in
'')
case "$#" in

View File

@ -19,25 +19,26 @@ EOT
exit(1);
}
my $GIT_DIR = `git rev-parse --git-dir`;
exit 1 if $?; # rev-parse would have given "not a git dir" message.
chomp($GIT_DIR);
our ($opt_n, $opt_f, $opt_h, $opt_k, $opt_v);
getopts("hnfkv") || usage;
usage() if $opt_h;
@ARGV >= 1 or usage;
my $GIT_DIR = `git rev-parse --git-dir`;
exit 1 if $?; # rev-parse would have given "not a git dir" message.
chomp($GIT_DIR);
my (@srcArgs, @dstArgs, @srcs, @dsts);
my ($src, $dst, $base, $dstDir);
# remove any trailing slash in arguments
for (@ARGV) { s/\/*$//; }
my $argCount = scalar @ARGV;
if (-d $ARGV[$argCount-1]) {
$dstDir = $ARGV[$argCount-1];
# remove any trailing slash
$dstDir =~ s/\/$//;
@srcArgs = @ARGV[0..$argCount-2];
foreach $src (@srcArgs) {
$base = $src;
$base =~ s/^.*\///;
@ -46,10 +47,14 @@ if (-d $ARGV[$argCount-1]) {
}
}
else {
if ($argCount != 2) {
if ($argCount < 2) {
print "Error: need at least two arguments\n";
exit(1);
}
if ($argCount > 2) {
print "Error: moving to directory '"
. $ARGV[$argCount-1]
. "' not possible; not exisiting\n";
. "' not possible; not existing\n";
exit(1);
}
@srcArgs = ($ARGV[0]);
@ -57,6 +62,16 @@ else {
$dstDir = "";
}
# normalize paths, needed to compare against versioned files and update-index
# also, this is nicer to end-users by doing ".//a/./b/.//./c" ==> "a/b/c"
for (@srcArgs, @dstArgs) {
s|^\./||;
s|/\./|/| while (m|/\./|);
s|//+|/|g;
# Also "a/b/../c" ==> "a/c"
1 while (s,(^|/)[^/]+/\.\./,$1,);
}
my (@allfiles,@srcfiles,@dstfiles);
my $safesrc;
my (%overwritten, %srcForDst);

View File

@ -3,17 +3,20 @@
# Copyright (c) 2005 Linus Torvalds
#
USAGE='[-a] [-d] [-l] [-n]'
USAGE='[-a] [-d] [-f] [-l] [-n] [-q]'
. git-sh-setup
no_update_info= all_into_one= remove_redundant= local=
no_update_info= all_into_one= remove_redundant=
local= quiet= no_reuse_delta=
while case "$#" in 0) break ;; esac
do
case "$1" in
-n) no_update_info=t ;;
-a) all_into_one=t ;;
-d) remove_redundant=t ;;
-l) local=t ;;
-q) quiet=-q ;;
-f) no_reuse_delta=--no-reuse-delta ;;
-l) local=--local ;;
*) usage ;;
esac
shift
@ -39,9 +42,7 @@ case ",$all_into_one," in
find . -type f \( -name '*.pack' -o -name '*.idx' \) -print`
;;
esac
if [ "$local" ]; then
pack_objects="$pack_objects --local"
fi
pack_objects="$pack_objects $local $quiet $no_reuse_delta"
name=$(git-rev-list --objects $rev_list $(git-rev-parse $rev_parse) 2>&1 |
git-pack-objects --non-empty $pack_objects .tmp-pack) ||
exit 1

View File

@ -88,6 +88,9 @@ case "$reset_type" in
# it is ok if this fails -- it may already
# have been culled by checkout-index.
unlink $_;
while (s|/[^/]*$||) {
rmdir($_) or last;
}
}
}
' $tmp-exists

View File

@ -50,6 +50,9 @@ case "$common" in
;;
esac
# We are going to make a new commit.
git var GIT_COMMITTER_IDENT >/dev/null || exit
# Find an optimum merge base if there are more than one candidates.
LF='
'

View File

@ -130,7 +130,7 @@ static void start_object_request(struct object_request *obj_req)
if (obj_req->local < 0) {
obj_req->state = ABORTED;
error("Couldn't create temporary file %s for %s: %s\n",
error("Couldn't create temporary file %s for %s: %s",
obj_req->tmpfile, obj_req->filename, strerror(errno));
return;
}
@ -830,9 +830,9 @@ static int fetch_object(struct alt_base *repo, unsigned char *sha1)
obj_req->errorstr, obj_req->curl_result,
obj_req->http_code, hex);
} else if (obj_req->zret != Z_STREAM_END) {
ret = error("File %s (%s) corrupt\n", hex, obj_req->url);
ret = error("File %s (%s) corrupt", hex, obj_req->url);
} else if (memcmp(obj_req->sha1, obj_req->real_sha1, 20)) {
ret = error("File %s has bad hash\n", hex);
ret = error("File %s has bad hash", hex);
} else if (obj_req->rename < 0) {
ret = error("unable to write sha1 filename %s",
obj_req->filename);
@ -854,7 +854,7 @@ int fetch(unsigned char *sha1)
fetch_alternates(alt->base);
altbase = altbase->next;
}
return error("Unable to find %s under %s\n", sha1_to_hex(sha1),
return error("Unable to find %s under %s", sha1_to_hex(sha1),
alt->base);
}

ident.c (33 changed lines)
View File

@ -156,8 +156,18 @@ static int copy(char *buf, int size, int offset, const char *src)
return offset;
}
static const char au_env[] = "GIT_AUTHOR_NAME";
static const char co_env[] = "GIT_COMMITTER_NAME";
static const char *env_hint =
"\n*** Environment problem:\n"
"*** Your name cannot be determined from your system services (gecos).\n"
"*** You would need to set %s and %s\n"
"*** environment variables; otherwise you won't be able to perform\n"
"*** certain operations because of \"empty ident\" errors.\n"
"*** Alternatively, you can use user.name configuration variable.\n\n";
static const char *get_ident(const char *name, const char *email,
const char *date_str)
const char *date_str, int error_on_no_name)
{
static char buffer[1000];
char date[50];
@ -168,9 +178,14 @@ static const char *get_ident(const char *name, const char *email,
if (!email)
email = git_default_email;
if (!*name || !*email)
die("empty ident %s <%s> not allowed",
name, email);
if (!*name) {
if (name == git_default_name && env_hint) {
fprintf(stderr, env_hint, au_env, co_env);
env_hint = NULL; /* warn only once, for "git-var -l" */
}
if (error_on_no_name)
die("empty ident %s <%s> not allowed", name, email);
}
strcpy(date, git_default_date);
if (date_str)
@ -187,16 +202,18 @@ static const char *get_ident(const char *name, const char *email,
return buffer;
}
const char *git_author_info(void)
const char *git_author_info(int error_on_no_name)
{
return get_ident(getenv("GIT_AUTHOR_NAME"),
getenv("GIT_AUTHOR_EMAIL"),
getenv("GIT_AUTHOR_DATE"));
getenv("GIT_AUTHOR_DATE"),
error_on_no_name);
}
const char *git_committer_info(void)
const char *git_committer_info(int error_on_no_name)
{
return get_ident(getenv("GIT_COMMITTER_NAME"),
getenv("GIT_COMMITTER_EMAIL"),
getenv("GIT_COMMITTER_DATE"));
getenv("GIT_COMMITTER_DATE"),
error_on_no_name);
}

View File

@ -4,22 +4,47 @@
#include "pack.h"
#include "csum-file.h"
#include <sys/time.h>
#include <signal.h>
static const char pack_usage[] = "git-pack-objects [-q] [--non-empty] [--local] [--incremental] [--window=N] [--depth=N] {--stdout | base-name} < object-list";
static const char pack_usage[] = "git-pack-objects [-q] [--no-reuse-delta] [--non-empty] [--local] [--incremental] [--window=N] [--depth=N] {--stdout | base-name} < object-list";
struct object_entry {
unsigned char sha1[20];
unsigned long size;
unsigned long offset;
unsigned int depth;
unsigned int hash;
unsigned long size; /* uncompressed size */
unsigned long offset; /* offset into the final pack file;
* nonzero if already written.
*/
unsigned int depth; /* delta depth */
unsigned int delta_limit; /* base adjustment for in-pack delta */
unsigned int hash; /* name hint hash */
enum object_type type;
unsigned long delta_size;
struct object_entry *delta;
enum object_type in_pack_type; /* could be delta */
unsigned long delta_size; /* delta data size (uncompressed) */
struct object_entry *delta; /* delta base object */
struct packed_git *in_pack; /* already in pack */
unsigned int in_pack_offset;
struct object_entry *delta_child; /* deltified objects who bases me */
struct object_entry *delta_sibling; /* other deltified objects who
* uses the same base as me
*/
};
/*
* Objects we are going to pack are collected in objects array (dynamically
* expanded). nr_objects & nr_alloc controls this array. They are stored
* in the order we see -- typically rev-list --objects order that gives us
* nice "minimum seek" order.
*
* sorted-by-sha and sorted-by-type are arrays of pointers that point at
* elements in the objects array. The former is used to build the pack
* index (lists object names in the ascending order to help offset lookup),
* and the latter is used to group similar things together by try_delta()
* heuristics.
*/
static unsigned char object_list_sha1[20];
static int non_empty = 0;
static int no_reuse_delta = 0;
static int local = 0;
static int incremental = 0;
static struct object_entry **sorted_by_sha, **sorted_by_type;
@ -28,6 +53,138 @@ static int nr_objects = 0, nr_alloc = 0;
static const char *base_name;
static unsigned char pack_file_sha1[20];
static int progress = 1;
static volatile sig_atomic_t progress_update = 0;
/*
* The object names in objects array are hashed with this hashtable,
* to help looking up the entry by object name. Binary search from
* sorted_by_sha is also possible but this was easier to code and faster.
* This hashtable is built after all the objects are seen.
*/
static int *object_ix = NULL;
static int object_ix_hashsz = 0;
/*
* Pack index for existing packs give us easy access to the offsets into
* corresponding pack file where each object's data starts, but the entries
* do not store the size of the compressed representation (uncompressed
* size is easily available by examining the pack entry header). We build
* a hashtable of existing packs (pack_revindex), and keep reverse index
* here -- pack index file is sorted by object name mapping to offset; this
* pack_revindex[].revindex array is an ordered list of offsets, so if you
* know the offset of an object, the next offset is where its packed
* representation ends.
*/
struct pack_revindex {
struct packed_git *p;
unsigned long *revindex;
} *pack_revindex = NULL;
static int pack_revindex_hashsz = 0;
/*
* stats
*/
static int written = 0;
static int written_delta = 0;
static int reused = 0;
static int reused_delta = 0;
static int pack_revindex_ix(struct packed_git *p)
{
unsigned int ui = (unsigned int) p;
int i;
ui = ui ^ (ui >> 16); /* defeat structure alignment */
i = (int)(ui % pack_revindex_hashsz);
while (pack_revindex[i].p) {
if (pack_revindex[i].p == p)
return i;
if (++i == pack_revindex_hashsz)
i = 0;
}
return -1 - i;
}
static void prepare_pack_ix(void)
{
int num;
struct packed_git *p;
for (num = 0, p = packed_git; p; p = p->next)
num++;
if (!num)
return;
pack_revindex_hashsz = num * 11;
pack_revindex = xcalloc(sizeof(*pack_revindex), pack_revindex_hashsz);
for (p = packed_git; p; p = p->next) {
num = pack_revindex_ix(p);
num = - 1 - num;
pack_revindex[num].p = p;
}
/* revindex elements are lazily initialized */
}
static int cmp_offset(const void *a_, const void *b_)
{
unsigned long a = *(unsigned long *) a_;
unsigned long b = *(unsigned long *) b_;
if (a < b)
return -1;
else if (a == b)
return 0;
else
return 1;
}
/*
* Ordered list of offsets of objects in the pack.
*/
static void prepare_pack_revindex(struct pack_revindex *rix)
{
struct packed_git *p = rix->p;
int num_ent = num_packed_objects(p);
int i;
void *index = p->index_base + 256;
rix->revindex = xmalloc(sizeof(unsigned long) * (num_ent + 1));
for (i = 0; i < num_ent; i++) {
long hl = *((long *)(index + 24 * i));
rix->revindex[i] = ntohl(hl);
}
/* This knows the pack format -- the 20-byte trailer
* follows immediately after the last object data.
*/
rix->revindex[num_ent] = p->pack_size - 20;
qsort(rix->revindex, num_ent, sizeof(unsigned long), cmp_offset);
}
static unsigned long find_packed_object_size(struct packed_git *p,
unsigned long ofs)
{
int num;
int lo, hi;
struct pack_revindex *rix;
unsigned long *revindex;
num = pack_revindex_ix(p);
if (num < 0)
die("internal error: pack revindex uninitialized");
rix = &pack_revindex[num];
if (!rix->revindex)
prepare_pack_revindex(rix);
revindex = rix->revindex;
lo = 0;
hi = num_packed_objects(p) + 1;
do {
int mi = (lo + hi) / 2;
if (revindex[mi] == ofs) {
return revindex[mi+1] - ofs;
}
else if (ofs < revindex[mi])
hi = mi;
else
lo = mi + 1;
} while (lo < hi);
die("internal error: pack revindex corrupt");
}
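
The reverse-index comment near the top of this file describes the trick in prose; here is a minimal standalone sketch of the same idea (the offsets are made up): sort the object offsets, append the offset where the 20-byte trailer begins as a sentinel, and the packed size of any object is the difference between its offset and the next one.

#include <stdio.h>
#include <stdlib.h>

/* Toy illustration of the reverse-index idea: object offsets sorted
 * ascending, with the start of the 20-byte trailer appended as a
 * sentinel.  The packed size of the object starting at revindex[i]
 * is revindex[i + 1] - revindex[i].
 */
static int cmp_ulong(const void *a_, const void *b_)
{
    unsigned long a = *(const unsigned long *)a_;
    unsigned long b = *(const unsigned long *)b_;
    return (a < b) ? -1 : (a > b) ? 1 : 0;
}

int main(void)
{
    /* hypothetical offsets in a 9000-byte pack (values are made up) */
    unsigned long revindex[] = { 12, 1400, 830, 9000 - 20 };
    int i, num_ent = 3;    /* real objects; the last slot is the sentinel */

    qsort(revindex, num_ent, sizeof(revindex[0]), cmp_ulong);
    for (i = 0; i < num_ent; i++)
        printf("object at offset %lu takes %lu bytes in the pack\n",
               revindex[i], revindex[i + 1] - revindex[i]);
    return 0;
}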
static void *delta_against(void *buf, unsigned long size, struct object_entry *entry)
{
@ -78,35 +235,69 @@ static unsigned long write_object(struct sha1file *f, struct object_entry *entry
{
unsigned long size;
char type[10];
void *buf = read_sha1_file(entry->sha1, type, &size);
void *buf;
unsigned char header[10];
unsigned hdrlen, datalen;
enum object_type obj_type;
int to_reuse = 0;
if (!buf)
die("unable to read %s", sha1_to_hex(entry->sha1));
if (size != entry->size)
die("object %s size inconsistency (%lu vs %lu)", sha1_to_hex(entry->sha1), size, entry->size);
/*
* The object header is a byte of 'type' followed by zero or
* more bytes of length. For deltas, the 20 bytes of delta sha1
* follows that.
*/
obj_type = entry->type;
if (entry->delta) {
buf = delta_against(buf, size, entry);
size = entry->delta_size;
obj_type = OBJ_DELTA;
if (! entry->in_pack)
to_reuse = 0; /* can't reuse what we don't have */
else if (obj_type == OBJ_DELTA)
to_reuse = 1; /* check_object() decided it for us */
else if (obj_type != entry->in_pack_type)
to_reuse = 0; /* pack has delta which is unusable */
else if (entry->delta)
to_reuse = 0; /* we want to pack afresh */
else
to_reuse = 1; /* we have it in-pack undeltified,
* and we do not need to deltify it.
*/
if (! to_reuse) {
buf = read_sha1_file(entry->sha1, type, &size);
if (!buf)
die("unable to read %s", sha1_to_hex(entry->sha1));
if (size != entry->size)
die("object %s size inconsistency (%lu vs %lu)",
sha1_to_hex(entry->sha1), size, entry->size);
if (entry->delta) {
buf = delta_against(buf, size, entry);
size = entry->delta_size;
obj_type = OBJ_DELTA;
}
/*
* The object header is a byte of 'type' followed by zero or
* more bytes of length. For deltas, the 20 bytes of delta
* sha1 follows that.
*/
hdrlen = encode_header(obj_type, size, header);
sha1write(f, header, hdrlen);
if (entry->delta) {
sha1write(f, entry->delta, 20);
hdrlen += 20;
}
datalen = sha1write_compressed(f, buf, size);
free(buf);
}
hdrlen = encode_header(obj_type, size, header);
sha1write(f, header, hdrlen);
if (entry->delta) {
sha1write(f, entry->delta, 20);
hdrlen += 20;
else {
struct packed_git *p = entry->in_pack;
use_packed_git(p);
datalen = find_packed_object_size(p, entry->in_pack_offset);
buf = p->pack_base + entry->in_pack_offset;
sha1write(f, buf, datalen);
unuse_packed_git(p);
hdrlen = 0; /* not really */
if (obj_type == OBJ_DELTA)
reused_delta++;
reused++;
}
datalen = sha1write_compressed(f, buf, size);
free(buf);
if (obj_type == OBJ_DELTA)
written_delta++;
written++;
return hdrlen + datalen;
}
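
The comment and the sha1write() calls above rely on encode_header(), which is not part of this hunk. A rough sketch of the kind of variable-length header it produces follows: one byte carrying the type and the low bits of the size, then continuation bytes of seven size bits each. Treat the exact bit layout here as an assumption; the pack format documentation is authoritative.

#include <stdio.h>

/* Sketch of a pack-style variable-length object header (assumed layout):
 *   byte 0: MSB = "more bytes follow", next 3 bits = object type,
 *           low 4 bits = least significant bits of the size
 *   byte N: MSB = "more bytes follow", low 7 bits = further size bits
 */
static int sketch_encode_header(unsigned type, unsigned long size,
                                unsigned char *hdr)
{
    int n = 1;
    unsigned char c = (type << 4) | (size & 15);

    size >>= 4;
    while (size) {
        *hdr++ = c | 0x80;
        c = size & 0x7f;
        size >>= 7;
        n++;
    }
    *hdr = c;
    return n;
}

int main(void)
{
    unsigned char hdr[10];
    int i, len = sketch_encode_header(3 /* OBJ_BLOB */, 12345, hdr);

    for (i = 0; i < len; i++)
        printf("%02x ", hdr[i]);
    printf("(%d header bytes for a 12345-byte blob)\n", len);
    return 0;
}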
@ -132,24 +323,41 @@ static void write_pack_file(void)
int i;
struct sha1file *f;
unsigned long offset;
unsigned long mb;
struct pack_header hdr;
unsigned last_percent = 999;
int do_progress = 0;
if (!base_name)
f = sha1fd(1, "<stdout>");
else
f = sha1create("%s-%s.%s", base_name, sha1_to_hex(object_list_sha1), "pack");
else {
f = sha1create("%s-%s.%s", base_name,
sha1_to_hex(object_list_sha1), "pack");
do_progress = progress;
}
if (do_progress)
fprintf(stderr, "Writing %d objects.\n", nr_objects);
hdr.hdr_signature = htonl(PACK_SIGNATURE);
hdr.hdr_version = htonl(PACK_VERSION);
hdr.hdr_entries = htonl(nr_objects);
sha1write(f, &hdr, sizeof(hdr));
offset = sizeof(hdr);
for (i = 0; i < nr_objects; i++)
for (i = 0; i < nr_objects; i++) {
offset = write_one(f, objects + i, offset);
if (do_progress) {
unsigned percent = written * 100 / nr_objects;
if (progress_update || percent != last_percent) {
fprintf(stderr, "%4u%% (%u/%u) done\r",
percent, written, nr_objects);
progress_update = 0;
last_percent = percent;
}
}
}
if (do_progress)
fputc('\n', stderr);
sha1close(f, pack_file_sha1, 1);
mb = offset >> 20;
offset &= 0xfffff;
}
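
The percent display above, the one in find_deltas() below, and the one in read-tree's check_updates() further down all follow the same pattern: recompute the percentage on every iteration, but only touch stderr when the value changed or when the one-second SIGALRM flag fired. A sketch of that pattern factored into a helper (the helper name is made up):

#include <stdio.h>
#include <signal.h>

/* Would be set from the one-second SIGALRM handler shown elsewhere. */
static volatile sig_atomic_t progress_update;

/* Illustrative helper: print "NN% (done/total) done\r", but only when
 * the percentage changed or the periodic flag fired, so a tight loop
 * does not flood stderr.
 */
static void show_progress(unsigned done, unsigned total)
{
    static unsigned last_percent = 999;
    unsigned percent = done * 100 / total;

    if (percent != last_percent || progress_update) {
        fprintf(stderr, "%4u%% (%u/%u) done\r", percent, done, total);
        progress_update = 0;
        last_percent = percent;
    }
}

int main(void)
{
    unsigned i, total = 200000;

    for (i = 1; i <= total; i++)
        show_progress(i, total);
    fputc('\n', stderr);
    return 0;
}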
static void write_index_file(void)
@ -196,18 +404,20 @@ static int add_object_entry(unsigned char *sha1, unsigned int hash)
{
unsigned int idx = nr_objects;
struct object_entry *entry;
struct packed_git *p;
unsigned int found_offset = 0;
struct packed_git *found_pack = NULL;
if (incremental || local) {
struct packed_git *p;
for (p = packed_git; p; p = p->next) {
struct pack_entry e;
if (find_pack_entry_one(sha1, &e, p)) {
if (incremental)
return 0;
if (local && !p->pack_local)
return 0;
for (p = packed_git; p; p = p->next) {
struct pack_entry e;
if (find_pack_entry_one(sha1, &e, p)) {
if (incremental)
return 0;
if (local && !p->pack_local)
return 0;
if (!found_pack) {
found_offset = e.offset;
found_pack = e.p;
}
}
}
@ -221,39 +431,143 @@ static int add_object_entry(unsigned char *sha1, unsigned int hash)
memset(entry, 0, sizeof(*entry));
memcpy(entry->sha1, sha1, 20);
entry->hash = hash;
if (found_pack) {
entry->in_pack = found_pack;
entry->in_pack_offset = found_offset;
}
nr_objects = idx+1;
return 1;
}
static int locate_object_entry_hash(unsigned char *sha1)
{
int i;
unsigned int ui;
memcpy(&ui, sha1, sizeof(unsigned int));
i = ui % object_ix_hashsz;
while (0 < object_ix[i]) {
if (!memcmp(sha1, objects[object_ix[i]-1].sha1, 20))
return i;
if (++i == object_ix_hashsz)
i = 0;
}
return -1 - i;
}
static struct object_entry *locate_object_entry(unsigned char *sha1)
{
int i = locate_object_entry_hash(sha1);
if (0 <= i)
return &objects[object_ix[i]-1];
return NULL;
}
static void check_object(struct object_entry *entry)
{
char type[20];
if (!sha1_object_info(entry->sha1, type, &entry->size)) {
if (!strcmp(type, "commit")) {
entry->type = OBJ_COMMIT;
} else if (!strcmp(type, "tree")) {
entry->type = OBJ_TREE;
} else if (!strcmp(type, "blob")) {
entry->type = OBJ_BLOB;
} else if (!strcmp(type, "tag")) {
entry->type = OBJ_TAG;
} else
die("unable to pack object %s of type %s",
sha1_to_hex(entry->sha1), type);
if (entry->in_pack) {
unsigned char base[20];
unsigned long size;
struct object_entry *base_entry;
/* We want in_pack_type even if we do not reuse delta.
* There is no point not reusing non-delta representations.
*/
check_reuse_pack_delta(entry->in_pack,
entry->in_pack_offset,
base, &size,
&entry->in_pack_type);
/* Check if it is delta, and the base is also an object
* we are going to pack. If so we will reuse the existing
* delta.
*/
if (!no_reuse_delta &&
entry->in_pack_type == OBJ_DELTA &&
(base_entry = locate_object_entry(base))) {
/* Depth value does not matter - find_deltas()
* will never consider reused delta as the
* base object to deltify other objects
* against, in order to avoid circular deltas.
*/
/* uncompressed size of the delta data */
entry->size = entry->delta_size = size;
entry->delta = base_entry;
entry->type = OBJ_DELTA;
entry->delta_sibling = base_entry->delta_child;
base_entry->delta_child = entry;
return;
}
/* Otherwise we would do the usual */
}
else
if (sha1_object_info(entry->sha1, type, &entry->size))
die("unable to get type of object %s",
sha1_to_hex(entry->sha1));
if (!strcmp(type, "commit")) {
entry->type = OBJ_COMMIT;
} else if (!strcmp(type, "tree")) {
entry->type = OBJ_TREE;
} else if (!strcmp(type, "blob")) {
entry->type = OBJ_BLOB;
} else if (!strcmp(type, "tag")) {
entry->type = OBJ_TAG;
} else
die("unable to pack object %s of type %s",
sha1_to_hex(entry->sha1), type);
}
static void hash_objects(void)
{
int i;
struct object_entry *oe;
object_ix_hashsz = nr_objects * 2;
object_ix = xcalloc(sizeof(int), object_ix_hashsz);
for (i = 0, oe = objects; i < nr_objects; i++, oe++) {
int ix = locate_object_entry_hash(oe->sha1);
if (0 <= ix) {
error("the same object '%s' added twice",
sha1_to_hex(oe->sha1));
continue;
}
ix = -1 - ix;
object_ix[ix] = i + 1;
}
}
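
pack_revindex_ix() and locate_object_entry_hash() share a convention: a non-negative return value is the slot of an existing entry, while a negative value encodes the free slot to insert into as -1 - slot (locate_object_entry_hash additionally stores indices offset by one so that zero can mean "empty"). A simplified illustration with plain int keys, not the actual data structures:

#include <stdio.h>

#define HASHSZ 17

/* 0 means "empty"; stored values are key + 1 so key 0 is representable. */
static int table[HASHSZ];

/* Returns the slot holding key, or -1 - slot of the free slot to use. */
static int lookup_slot(int key)
{
    int i = (unsigned)key % HASHSZ;

    while (table[i]) {
        if (table[i] - 1 == key)
            return i;
        if (++i == HASHSZ)
            i = 0;
    }
    return -1 - i;
}

int main(void)
{
    int keys[] = { 42, 7, 42 }, k;

    for (k = 0; k < 3; k++) {
        int slot = lookup_slot(keys[k]);
        if (slot < 0) {
            slot = -1 - slot;          /* decode the free slot */
            table[slot] = keys[k] + 1; /* insert */
            printf("inserted %d at slot %d\n", keys[k], slot);
        } else {
            printf("%d already present at slot %d\n", keys[k], slot);
        }
    }
    return 0;
}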
static unsigned int check_delta_limit(struct object_entry *me, unsigned int n)
{
struct object_entry *child = me->delta_child;
unsigned int m = n;
while (child) {
unsigned int c = check_delta_limit(child, n + 1);
if (m < c)
m = c;
child = child->delta_sibling;
}
return m;
}
static void get_object_details(void)
{
int i;
struct object_entry *entry = objects;
struct object_entry *entry;
for (i = 0; i < nr_objects; i++)
check_object(entry++);
hash_objects();
prepare_pack_ix();
for (i = 0, entry = objects; i < nr_objects; i++, entry++)
check_object(entry);
for (i = 0, entry = objects; i < nr_objects; i++, entry++)
if (!entry->delta && entry->delta_child)
entry->delta_limit =
check_delta_limit(entry, 1);
}
typedef int (*entry_sort_t)(const struct object_entry *, const struct object_entry *);
@ -326,6 +640,16 @@ static int try_delta(struct unpacked *cur, struct unpacked *old, unsigned max_de
if (cur_entry->type != old_entry->type)
return -1;
/* If the current object is at the edge, take the depth of the objects
* that depend on the current object into account -- otherwise
* they would become too deep.
*/
if (cur_entry->delta_child) {
if (max_depth <= cur_entry->delta_limit)
return 0;
max_depth -= cur_entry->delta_limit;
}
size = cur_entry->size;
if (size < 50)
return -1;
@ -359,17 +683,24 @@ static int try_delta(struct unpacked *cur, struct unpacked *old, unsigned max_de
return 0;
}
static void progress_interval(int signum)
{
progress_update = 1;
}
static void find_deltas(struct object_entry **list, int window, int depth)
{
int i, idx;
unsigned int array_size = window * sizeof(struct unpacked);
struct unpacked *array = xmalloc(array_size);
int eye_candy;
unsigned processed = 0;
unsigned last_percent = 999;
memset(array, 0, array_size);
i = nr_objects;
idx = 0;
eye_candy = i - (nr_objects / 20);
if (progress)
fprintf(stderr, "Deltifying %d objects.\n", nr_objects);
while (--i >= 0) {
struct object_entry *entry = list[i];
@ -378,15 +709,29 @@ static void find_deltas(struct object_entry **list, int window, int depth)
char type[10];
int j;
if (progress && i <= eye_candy) {
eye_candy -= nr_objects / 20;
fputc('.', stderr);
processed++;
if (progress) {
unsigned percent = processed * 100 / nr_objects;
if (percent != last_percent || progress_update) {
fprintf(stderr, "%4u%% (%u/%u) done\r",
percent, processed, nr_objects);
progress_update = 0;
last_percent = percent;
}
}
if (entry->delta)
/* This happens if we decided to reuse an existing
* delta from a pack. "!no_reuse_delta &&" is implied.
*/
continue;
free(n->data);
n->entry = entry;
n->data = read_sha1_file(entry->sha1, type, &size);
if (size != entry->size)
die("object %s inconsistent object length (%lu vs %lu)", sha1_to_hex(entry->sha1), size, entry->size);
j = window;
while (--j > 0) {
unsigned int other_idx = idx + j;
@ -404,6 +749,9 @@ static void find_deltas(struct object_entry **list, int window, int depth)
idx = 0;
}
if (progress)
fputc('\n', stderr);
for (i = 0; i < window; ++i)
free(array[i].data);
free(array);
@ -412,15 +760,9 @@ static void find_deltas(struct object_entry **list, int window, int depth)
static void prepare_pack(int window, int depth)
{
get_object_details();
if (progress)
fprintf(stderr, "Packing %d objects", nr_objects);
sorted_by_type = create_sorted_list(type_size_sort);
if (window && depth)
find_deltas(sorted_by_type, window+1, depth);
if (progress)
fputc('\n', stderr);
write_pack_file();
}
static int reuse_cached_pack(unsigned char *sha1, int pack_to_stdout)
@ -443,8 +785,9 @@ static int reuse_cached_pack(unsigned char *sha1, int pack_to_stdout)
}
}
fprintf(stderr, "Reusing %d objects pack %s\n", nr_objects,
sha1_to_hex(sha1));
if (progress)
fprintf(stderr, "Reusing %d objects pack %s\n", nr_objects,
sha1_to_hex(sha1));
if (pack_to_stdout) {
if (copy_fd(ifd, 1))
@ -476,6 +819,23 @@ static int reuse_cached_pack(unsigned char *sha1, int pack_to_stdout)
return 1;
}
static void setup_progress_signal(void)
{
struct sigaction sa;
struct itimerval v;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = progress_interval;
sigemptyset(&sa.sa_mask);
sa.sa_flags = SA_RESTART;
sigaction(SIGALRM, &sa, NULL);
v.it_interval.tv_sec = 1;
v.it_interval.tv_usec = 0;
v.it_value = v.it_interval;
setitimer(ITIMER_REAL, &v, NULL);
}
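
A self-contained way to see what SA_RESTART buys here (purely illustrative, not part of the diff): with the flag set, a blocking fgets() on stdin keeps waiting across the one-second timer ticks instead of failing with EINTR on C libraries that do not restart stdio reads, and the handler does nothing beyond setting a flag, which is all that is async-signal-safe.

#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <sys/time.h>

static volatile sig_atomic_t ticked;

static void on_alarm(int signum)
{
    ticked = 1;    /* only set a flag; stdio is not async-signal-safe */
}

int main(void)
{
    struct sigaction sa;
    struct itimerval v;
    char line[128];

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;    /* restart the interrupted read */
    sigaction(SIGALRM, &sa, NULL);

    v.it_interval.tv_sec = 1;
    v.it_interval.tv_usec = 0;
    v.it_value = v.it_interval;
    setitimer(ITIMER_REAL, &v, NULL);

    /* With SA_RESTART this survives any number of timer ticks. */
    if (fgets(line, sizeof(line), stdin))
        printf("got a line; timer %s in the meantime\n",
               ticked ? "fired" : "did not fire");
    return 0;
}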
int main(int argc, char **argv)
{
SHA_CTX ctx;
@ -483,10 +843,6 @@ int main(int argc, char **argv)
int window = 10, depth = 10, pack_to_stdout = 0;
struct object_entry **list;
int i;
struct timeval prev_tv;
int eye_candy = 0;
int eye_candy_incr = 500;
setup_git_directory();
@ -524,6 +880,10 @@ int main(int argc, char **argv)
progress = 0;
continue;
}
if (!strcmp("--no-reuse-delta", arg)) {
no_reuse_delta = 1;
continue;
}
if (!strcmp("--stdout", arg)) {
pack_to_stdout = 1;
continue;
@ -539,30 +899,31 @@ int main(int argc, char **argv)
usage(pack_usage);
prepare_packed_git();
if (progress) {
fprintf(stderr, "Generating pack...\n");
gettimeofday(&prev_tv, NULL);
setup_progress_signal();
}
while (fgets(line, sizeof(line), stdin) != NULL) {
for (;;) {
unsigned int hash;
char *p;
unsigned char sha1[20];
if (progress && (eye_candy <= nr_objects)) {
if (!fgets(line, sizeof(line), stdin)) {
if (feof(stdin))
break;
if (!ferror(stdin))
die("fgets returned NULL, not EOF, not error!");
if (errno != EINTR)
die("fgets: %s", strerror(errno));
clearerr(stdin);
continue;
}
if (progress_update) {
fprintf(stderr, "Counting objects...%d\r", nr_objects);
if (eye_candy && (50 <= eye_candy_incr)) {
struct timeval tv;
int time_diff;
gettimeofday(&tv, NULL);
time_diff = (tv.tv_sec - prev_tv.tv_sec);
time_diff <<= 10;
time_diff += (tv.tv_usec - prev_tv.tv_usec);
if ((1 << 9) < time_diff)
eye_candy_incr += 50;
else if (50 < eye_candy_incr)
eye_candy_incr -= 50;
}
eye_candy += eye_candy_incr;
progress_update = 0;
}
if (get_sha1_hex(line, sha1))
die("expected sha1, got garbage:\n %s", line);
@ -594,10 +955,21 @@ int main(int argc, char **argv)
;
else {
prepare_pack(window, depth);
if (progress && pack_to_stdout) {
/* the other end usually displays progress itself */
struct itimerval v = {{0,},};
setitimer(ITIMER_REAL, &v, NULL);
signal(SIGALRM, SIG_IGN );
progress_update = 0;
}
write_pack_file();
if (!pack_to_stdout) {
write_index_file();
puts(sha1_to_hex(object_list_sha1));
}
}
if (progress)
fprintf(stderr, "Total %d, written %d (delta %d), reused %d (delta %d)\n",
nr_objects, written, written_delta, reused, reused_delta);
return 0;
}
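
The stdin loop above distinguishes three cases when fgets() returns NULL: real EOF, an EINTR-interrupted read that should be retried after clearerr(), and everything else, which is fatal. A sketch of that logic wrapped into a reusable helper (the helper name is made up):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

/* Illustrative wrapper: retry fgets() when a signal interrupted the
 * read, return NULL only on genuine EOF, and die on anything else.
 */
static char *fgets_retry_eintr(char *buf, int size, FILE *fp)
{
    for (;;) {
        if (fgets(buf, size, fp))
            return buf;
        if (feof(fp))
            return NULL;
        if (!ferror(fp)) {
            fprintf(stderr, "fgets returned NULL, not EOF, not error!\n");
            exit(1);
        }
        if (errno != EINTR) {
            fprintf(stderr, "fgets: %s\n", strerror(errno));
            exit(1);
        }
        clearerr(fp);    /* forget the EINTR and go around again */
    }
}

int main(void)
{
    char line[1024];

    while (fgets_retry_eintr(line, sizeof(line), stdin))
        fputs(line, stdout);
    return 0;
}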

pack.h

@ -29,5 +29,7 @@ struct pack_header {
};
extern int verify_pack(struct packed_git *, int);
extern int check_reuse_pack_delta(struct packed_git *, unsigned long,
unsigned char *, unsigned long *,
enum object_type *);
#endif


@ -9,6 +9,8 @@
#include "object.h"
#include "tree.h"
#include <sys/time.h>
#include <signal.h>
static int merge = 0;
static int update = 0;
@ -16,6 +18,8 @@ static int index_only = 0;
static int nontrivial_merge = 0;
static int trivial_merges_only = 0;
static int aggressive = 0;
static int verbose_update = 0;
static volatile int progress_update = 0;
static int head_idx = -1;
static int merge_size = 0;
@ -267,6 +271,28 @@ static void unlink_entry(char *name)
}
}
static void progress_interval(int signum)
{
progress_update = 1;
}
static void setup_progress_signal(void)
{
struct sigaction sa;
struct itimerval v;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = progress_interval;
sigemptyset(&sa.sa_mask);
sa.sa_flags = SA_RESTART;
sigaction(SIGALRM, &sa, NULL);
v.it_interval.tv_sec = 1;
v.it_interval.tv_usec = 0;
v.it_value = v.it_interval;
setitimer(ITIMER_REAL, &v, NULL);
}
static void check_updates(struct cache_entry **src, int nr)
{
static struct checkout state = {
@ -276,8 +302,43 @@ static void check_updates(struct cache_entry **src, int nr)
.refresh_cache = 1,
};
unsigned short mask = htons(CE_UPDATE);
unsigned last_percent = 200, cnt = 0, total = 0;
if (update && verbose_update) {
for (total = cnt = 0; cnt < nr; cnt++) {
struct cache_entry *ce = src[cnt];
if (!ce->ce_mode || ce->ce_flags & mask)
total++;
}
/* Don't bother doing this for very small updates */
if (total < 250)
total = 0;
if (total) {
fprintf(stderr, "Checking files out...\n");
setup_progress_signal();
progress_update = 1;
}
cnt = 0;
}
while (nr--) {
struct cache_entry *ce = *src++;
if (total) {
if (!ce->ce_mode || ce->ce_flags & mask) {
unsigned percent;
cnt++;
percent = (cnt * 100) / total;
if (percent != last_percent ||
progress_update) {
fprintf(stderr, "%4u%% (%u/%u) done\r",
percent, cnt, total);
last_percent = percent;
}
}
}
if (!ce->ce_mode) {
if (update)
unlink_entry(ce->name);
@ -289,6 +350,10 @@ static void check_updates(struct cache_entry **src, int nr)
checkout_entry(ce, &state);
}
}
if (total) {
signal(SIGALRM, SIG_IGN);
fputc('\n', stderr);
}
}
static int unpack_trees(merge_fn_t fn)
@ -505,9 +570,11 @@ static int threeway_merge(struct cache_entry **stages)
*/
if ((head_deleted && remote_deleted) ||
(head_deleted && remote && remote_match) ||
(remote_deleted && head && head_match))
(remote_deleted && head && head_match)) {
if (index)
return deleted_entry(index, index);
return 0;
}
/*
* Added in both, identically.
*/
@ -564,7 +631,7 @@ static int twoway_merge(struct cache_entry **src)
struct cache_entry *oldtree = src[1], *newtree = src[2];
if (merge_size != 2)
return error("Cannot do a twoway merge of %d trees\n",
return error("Cannot do a twoway merge of %d trees",
merge_size);
if (current) {
@ -616,7 +683,7 @@ static int oneway_merge(struct cache_entry **src)
struct cache_entry *a = src[1];
if (merge_size != 1)
return error("Cannot do a oneway merge of %d trees\n",
return error("Cannot do a oneway merge of %d trees",
merge_size);
if (!a)
@ -680,6 +747,11 @@ int main(int argc, char **argv)
continue;
}
if (!strcmp(arg, "-v")) {
verbose_update = 1;
continue;
}
/* "-i" means "index only", meaning that a merge will
* not even look at the working tree.
*/


@ -92,7 +92,7 @@ static int run_update_hook(const char *refname,
case -ERR_RUN_COMMAND_WAITPID_WRONG_PID:
return error("waitpid is confused");
case -ERR_RUN_COMMAND_WAITPID_SIGNAL:
return error("%s died of signal\n", update_hook);
return error("%s died of signal", update_hook);
case -ERR_RUN_COMMAND_WAITPID_NOEXIT:
return error("%s died strangely", update_hook);
default:
@ -158,7 +158,7 @@ static int update(struct command *cmd)
if (run_update_hook(name, old_hex, new_hex)) {
unlink(lock_name);
cmd->error_string = "hook declined";
return error("hook declined to update %s\n", name);
return error("hook declined to update %s", name);
}
else if (rename(lock_name, name) < 0) {
unlink(lock_name);

refs.c

@ -268,7 +268,7 @@ static int write_ref_file(const char *filename,
char term = '\n';
if (write(fd, hex, 40) < 40 ||
write(fd, &term, 1) < 1) {
error("Couldn't write %s\n", filename);
error("Couldn't write %s", filename);
close(fd);
return -1;
}


@ -225,12 +225,12 @@ int main(int argc, char **argv)
continue;
}
if (!strcmp(arg, "--short") ||
!strncmp(arg, "--short=", 9)) {
!strncmp(arg, "--short=", 8)) {
filter &= ~(DO_FLAGS|DO_NOREV);
verify = 1;
abbrev = DEFAULT_ABBREV;
if (arg[8] == '=')
abbrev = strtoul(arg + 9, NULL, 10);
if (arg[7] == '=')
abbrev = strtoul(arg + 8, NULL, 10);
if (abbrev < MINIMUM_ABBREV)
abbrev = MINIMUM_ABBREV;
else if (40 <= abbrev)
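
The rev-parse change above is an off-by-one fix: "--short=" is eight characters, so the '=' sits at arg[7] and the abbreviation length starts at arg + 8. A tiny illustration of that arithmetic:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *arg = "--short=7";

    /* "--short=" is 8 characters: indexes 0..7, with '=' at index 7,
     * so the number (if any) starts at arg + 8.
     */
    if (!strncmp(arg, "--short=", 8) && arg[7] == '=')
        printf("abbrev = %lu\n", strtoul(arg + 8, NULL, 10));
    return 0;
}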


@ -247,6 +247,7 @@ static void link_alt_odb_entries(const char *alt, const char *ep, int sep,
for ( ; cp < ep && *cp != sep; cp++)
;
if (last != cp) {
struct stat st;
struct alternate_object_database *alt;
/* 43 = 40-byte + 2 '/' + terminating NUL */
int pfxlen = cp - last;
@ -269,9 +270,19 @@ static void link_alt_odb_entries(const char *alt, const char *ep, int sep,
}
else
memcpy(ent->base, last, pfxlen);
ent->name = ent->base + pfxlen + 1;
ent->base[pfxlen] = ent->base[pfxlen + 3] = '/';
ent->base[entlen-1] = 0;
ent->base[pfxlen + 3] = '/';
ent->base[pfxlen] = ent->base[entlen-1] = 0;
/* Detect cases where alternate disappeared */
if (stat(ent->base, &st) || !S_ISDIR(st.st_mode)) {
error("object directory %s does not exist; "
"check .git/objects/info/alternates.",
ent->base);
goto bad;
}
ent->base[pfxlen] = '/';
/* Prevent the common mistake of listing the same
* thing twice, or the object directory itself.
@ -551,8 +562,12 @@ static void prepare_packed_git_one(char *objdir, int local)
sprintf(path, "%s/pack", objdir);
len = strlen(path);
dir = opendir(path);
if (!dir)
if (!dir) {
if (errno != ENOENT)
error("unable to open object pack directory: %s: %s",
path, strerror(errno));
return;
}
path[len++] = '/';
while ((de = readdir(dir)) != NULL) {
int namelen = strlen(de->d_name);
@ -826,6 +841,25 @@ static unsigned long unpack_object_header(struct packed_git *p, unsigned long of
return offset;
}
int check_reuse_pack_delta(struct packed_git *p, unsigned long offset,
unsigned char *base, unsigned long *sizep,
enum object_type *kindp)
{
unsigned long ptr;
int status = -1;
use_packed_git(p);
ptr = offset;
ptr = unpack_object_header(p, ptr, kindp, sizep);
if (*kindp != OBJ_DELTA)
goto done;
memcpy(base, p->pack_base + ptr, 20);
status = 0;
done:
unuse_packed_git(p);
return status;
}
void packed_object_info_detail(struct pack_entry *e,
char *type,
unsigned long *size,
@ -1479,7 +1513,8 @@ int write_sha1_from_fd(const unsigned char *sha1, int fd, char *buffer,
local = mkstemp(tmpfile);
if (local < 0)
return error("Couldn't open %s for %s\n", tmpfile, sha1_to_hex(sha1));
return error("Couldn't open %s for %s",
tmpfile, sha1_to_hex(sha1));
memset(&stream, 0, sizeof(stream));
@ -1527,7 +1562,7 @@ int write_sha1_from_fd(const unsigned char *sha1, int fd, char *buffer,
}
if (memcmp(sha1, real_sha1, 20)) {
unlink(tmpfile);
return error("File %s has bad hash\n", sha1_to_hex(sha1));
return error("File %s has bad hash", sha1_to_hex(sha1));
}
return move_temp_to_file(tmpfile, sha1_file_name(sha1));

t/t3700-add.sh (new executable file)

@ -0,0 +1,22 @@
#!/bin/sh
#
# Copyright (c) 2006 Carl D. Worth
#
test_description='Test of git-add, including the -- option.'
. ./test-lib.sh
test_expect_success \
'Test of git-add' \
'touch foo && git-add foo'
test_expect_success \
'Post-check that foo is in the index' \
'git-ls-files foo | grep foo'
test_expect_success \
'Test that "git-add -- -q" works' \
'touch -- -q && git-add -- -q'
test_done

t/t7101-reset.sh (new executable file)

@ -0,0 +1,63 @@
#!/bin/sh
#
# Copyright (c) 2006 Shawn Pearce
#
test_description='git-reset should cull empty subdirs'
. ./test-lib.sh
test_expect_success \
'creating initial files' \
'mkdir path0 &&
cp ../../COPYING path0/COPYING &&
git-add path0/COPYING &&
git-commit -m add -a'
test_expect_success \
'creating second files' \
'mkdir path1 &&
mkdir path1/path2 &&
cp ../../COPYING path1/path2/COPYING &&
cp ../../COPYING path1/COPYING &&
cp ../../COPYING COPYING &&
cp ../../COPYING path0/COPYING-TOO &&
git-add path1/path2/COPYING &&
git-add path1/COPYING &&
git-add COPYING &&
git-add path0/COPYING-TOO &&
git-commit -m change -a'
test_expect_success \
'resetting tree HEAD^' \
'git-reset --hard HEAD^'
test_expect_success \
'checking initial files exist after rewind' \
'test -d path0 &&
test -f path0/COPYING'
test_expect_failure \
'checking lack of path1/path2/COPYING' \
'test -f path1/path2/COPYING'
test_expect_failure \
'checking lack of path1/COPYING' \
'test -f path1/COPYING'
test_expect_failure \
'checking lack of COPYING' \
'test -f COPYING'
test_expect_failure \
'checking lack of path0/COPYING-TOO' \
'test -f path0/COPYING-TOO'
test_expect_failure \
'checking lack of path1/path2' \
'test -d path1/path2'
test_expect_failure \
'checking lack of path1' \
'test -d path1'
test_done


@ -149,6 +149,21 @@ test_expect_code () {
fi
}
# Most tests can use the created repository, but some may need to create more.
# Usage: test_create_repo <directory>
test_create_repo () {
test "$#" = 1 ||
error "bug in the test script: not 1 parameter to test-create-repo"
owd=`pwd`
repo="$1"
mkdir "$repo"
cd "$repo" || error "Cannot setup test environment"
"$GIT_EXEC_PATH/git" init-db --template=$GIT_EXEC_PATH/templates/blt/ 2>/dev/null ||
error "cannot run git init-db -- have you built things yet?"
mv .git/hooks .git/hooks-disabled
cd "$owd"
}
test_done () {
trap - exit
case "$test_failure" in
@ -196,9 +211,5 @@ test -d ../templates/blt || {
# Test repository
test=trash
rm -fr "$test"
mkdir "$test"
cd "$test" || error "Cannot setup test environment"
"$GIT_EXEC_PATH/git" init-db --template=../../templates/blt/ 2>/dev/null ||
error "cannot run git init-db -- have you built things yet?"
mv .git/hooks .git/hooks-disabled
test_create_repo $test
cd "$test"


@ -304,9 +304,11 @@ static void write_header(const unsigned char *sha1, char typeflag, const char *b
}
if (S_ISDIR(mode))
mode |= 0755; /* GIT doesn't store permissions of dirs */
if (S_ISLNK(mode))
mode |= 0777; /* ... nor of symlinks */
mode |= 0777;
else if (S_ISREG(mode))
mode |= (mode & 0100) ? 0777 : 0666;
else if (S_ISLNK(mode))
mode |= 0777;
sprintf(&header[100], "%07o", mode & 07777);
/* XXX: should we provide more meaningful info here? */
@ -391,7 +393,7 @@ int main(int argc, char **argv)
usage(tar_tree_usage);
}
commit = lookup_commit_reference(sha1);
commit = lookup_commit_reference_gently(sha1, 1);
if (commit) {
write_global_extended_header(commit->object.sha1);
archive_time = commit->date;


@ -9,6 +9,7 @@
#
# To enable this hook, make this file executable.
. git-sh-setup
test -x "$GIT_DIR/hooks/commit-msg" &&
exec "$GIT_DIR/hooks/commit-msg" ${1+"$@"}
:


@ -8,6 +8,7 @@
#
# To enable this hook, make this file executable.
. git-sh-setup
test -x "$GIT_DIR/hooks/pre-commit" &&
exec "$GIT_DIR/hooks/pre-commit" ${1+"$@"}
:


@ -216,6 +216,9 @@ static int send_ref(const char *refname, const unsigned char *sha1)
static char *capabilities = "multi_ack";
struct object *o = parse_object(sha1);
if (!o)
die("git-upload-pack: cannot find object %s:", sha1_to_hex(sha1));
if (capabilities)
packet_write(1, "%s %s%c%s\n", sha1_to_hex(sha1), refname,
0, capabilities);

var.c

@ -12,7 +12,7 @@ static const char var_usage[] = "git-var [-l | <variable>]";
struct git_var {
const char *name;
const char *(*read)(void);
const char *(*read)(int);
};
static struct git_var git_vars[] = {
{ "GIT_COMMITTER_IDENT", git_committer_info },
@ -24,7 +24,7 @@ static void list_vars(void)
{
struct git_var *ptr;
for(ptr = git_vars; ptr->read; ptr++) {
printf("%s=%s\n", ptr->name, ptr->read());
printf("%s=%s\n", ptr->name, ptr->read(0));
}
}
@ -35,7 +35,7 @@ static const char *read_var(const char *var)
val = NULL;
for(ptr = git_vars; ptr->read; ptr++) {
if (strcmp(var, ptr->name) == 0) {
val = ptr->read();
val = ptr->read(1);
break;
}
}